Bimodal bilinguals, fluent in a signed and a spoken language, provide unique insight into the nature of syntactic integration and language control. We investigated whether bimodal bilinguals who are conversing with English monolinguals produce American Sign Language (ASL) grammatical facial expressions to accompany parallel syntactic structures in spoken English. In ASL, raised eyebrows mark conditionals, and furrowed eyebrows mark wh-questions; the grammatical brow movement is synchronized with the manual onset of the clause. Bimodal bilinguals produced more ASL-appropriate facial expressions than did nonsigners and synchronized their expressions with the onset of the corresponding English clauses. This result provides evidence for a dual-language architecture in which grammatical information can be integrated up to the level of phonological implementation. Overall, participants produced more raised brows than furrowed brows, which can convey negative affect. Bimodal bilinguals suppressed but did not completely inhibit ASL facial grammar when it conflicted with conventional facial gestures. We conclude that morphosyntactic elements from two languages can be articulated simultaneously and that complete inhibition of the nonselected language is difficult.
The effects of knowledge of sign language on co-speech gesture were investigated by comparing the spontaneous gestures of bimodal bilinguals (native users of American Sign Language and English; n = 13) and non-signing native English speakers (n = 12). Each participant viewed and re-told the Canary Row cartoon to a non-signer whom they did not know. Nine of the thirteen bimodal bilinguals produced at least one ASL sign, which we hypothesise resulted from a failure to inhibit ASL. Compared with non-signers, bimodal bilinguals produced more iconic gestures, fewer beat gestures, and more gestures from a character viewpoint. The gestures of bimodal bilinguals also exhibited a greater variety of handshape types and more frequent use of unmarked handshapes. We hypothesise that these semantic and form differences arise from an interaction between the ASL language production system and the co-speech gesture system.
Traditionally, neuronal studies of multisensory processing proceeded by first identifying neurons that were overtly multisensory (e.g., bimodal, trimodal) and then testing them. In contrast, the present study examined, without precondition, neurons in an extrastriate visual area of the cat (the posterolateral lateral suprasylvian area, PLLS) for their responses to separate (visual, auditory) and combined-modality (visual and auditory) stimulation. As expected, traditional bimodal forms of multisensory neurons were identified. In addition, however, many neurons that were activated only by visual stimulation (i.e., unimodal) had that response modulated by the presence of an auditory stimulus. Some unimodal neurons showed multisensory responses that differed statistically from their visual-only responses; others had subtle multisensory effects that were detectable only at the population level. Most surprisingly, these non-bimodal neurons generated more than twice as much multisensory signal in the PLLS as the bimodal neurons did. These results expand the range of multisensory convergence patterns beyond that of the bimodal neuron. However, rather than constituting a separate class of multisensory neurons, unimodal multisensory neurons may actually represent an intermediary form of multisensory convergence that exists along the functional continuum between unisensory neurons...
Bimodality of gene expression, as a mechanism contributing to phenotypic diversity, enhances the survival of cells in a fluctuating environment. To date, the bimodal response of a gene regulatory system has been attributed to the cooperativity of transcription factor binding or to feedback loops. It has remained unclear whether noncooperative binding of transcription factors can give rise to bimodality in an open-loop system. We study a theoretical model of gene expression in a two-step cascade (a deterministically monostable system) in which the regulatory gene produces transcription factors that have a nonlinear effect on the activity of the target gene. We show that a unimodal distribution of transcription factors over the cell population can generate a bimodal steady-state output without cooperative transcription factor binding. We introduce a simple geometric construction that allows one to predict the onset of bimodality; the construction involves only the bursting parameters of the regulatory gene and the dose–response curve of the target gene. Using this method, we show that gene expression may switch between unimodal and bimodal as the concentration of inducers or corepressors is varied. These findings may explain the experimentally observed bimodal response of cascades consisting of a fluorescent protein reporter controlled by the tetracycline repressor. The geometric construction provides a useful tool for designing experiments and for interpreting their results. Our findings may have important implications for understanding the strategies adopted by cell populations to survive in changing environments.
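As a rough, hypothetical illustration of the mechanism described above (not the paper's specific model or parameter values), the following Python sketch pushes a unimodal, bursty transcription-factor distribution (gamma-distributed copy numbers) through a saturating, non-cooperative dose–response and yields a bimodal target-gene output across the simulated population:

    # Sketch only: unimodal TF input + non-cooperative saturating response -> bimodal output.
    import numpy as np

    rng = np.random.default_rng(0)
    # Unimodal TF distribution: gamma with shape ~ burst frequency, scale ~ burst size (made-up values).
    tf = rng.gamma(shape=0.5, scale=200.0, size=100_000)

    # Non-cooperative (Hill coefficient 1) dose-response of the target gene.
    K, v_max = 50.0, 1.0
    target = v_max * tf / (K + tf)

    # Crude bimodality check: density piles up near the "off" (0) and "on" (v_max) extremes.
    hist, _ = np.histogram(target, bins=20, range=(0.0, v_max), density=True)
    print("low / middle / high bin densities:", hist[0], hist[10], hist[-1])

Whether bimodality actually appears depends on the burst statistics and the shape of the dose–response curve, which is exactly what the geometric construction in the paper is designed to capture.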
Stochastic treatments of gene regulatory processes have recently appeared in the literature, in which a cell exposed to a signaling molecule in its environment triggers the synthesis of a specific protein through a network of intracellular reactions. The stochastic nature of this process leads to a distribution of protein levels in a population of cells, as determined by a Fokker-Planck equation. Often bistability occurs, with two stable steady-state protein levels for a given concentration of the signaling molecule within a suitable range: one at the low end representing the “off” state and the other at the high end representing the “on” state. A consequence of such bistability is the appearance of bimodal distributions indicating two different populations, one in the “off” state and the other in the “on” state. The bimodal distribution can be obtained from stochastic analysis of a single cell. However, capturing the concerted action of the population, which alters the extracellular concentration in the environment of individual cells and hence their behavior, requires an appropriate population balance model that accounts for the reciprocal interaction between the population and its environment. In this study...
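To make the single-cell picture concrete, here is a minimal, generic sketch (hypothetical drift and parameters, not the model developed in this study): an Euler-Maruyama simulation of a bistable protein-expression dynamic with additive noise, in which a heterogeneous population of cells splits into “off” and “on” subpopulations and the protein-level histogram becomes bimodal.

    # Sketch only: bistable drift + noise -> bimodal population distribution.
    import numpy as np

    rng = np.random.default_rng(1)
    beta, K, beta0, gamma, sigma = 6.0, 3.3, 0.4, 1.0, 0.15   # hypothetical parameters (bistable regime)
    dt, n_steps, n_cells = 0.01, 20_000, 5_000

    x = rng.uniform(0.0, 4.0, size=n_cells)                    # heterogeneous initial protein levels
    for _ in range(n_steps):
        drift = beta * x**2 / (K**2 + x**2) + beta0 - gamma * x
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_cells)
        x = np.clip(x, 0.0, None)                               # protein levels stay non-negative

    low = x < 1.95                                              # approximate basin boundary of the drift
    print(f"fraction 'off': {low.mean():.2f}, fraction 'on': {1 - low.mean():.2f}")

A population balance model of the kind pursued in the study would additionally couple each cell's dynamics to an extracellular signal concentration that the population itself modifies.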
Cochlear implantees have reasonably good speech understanding in quiet surroundings, but ambient noise poses significant difficulties in understanding speech for these individuals. Bimodal stimulation is still not used by many Indian implantees, in spite of reports that bimodal stimulation is beneficial for speech understanding in noise compared with a cochlear implant alone and also prevents auditory deprivation in the un-implanted ear. The aim of the study was to evaluate the benefits of bimodal stimulation in children in an Indian cochlear implant clinic. A group of 14 children who had been using cochlear implants served as subjects. They were fitted with advanced digital hearing aids in their un-implanted ears to provide bimodal stimulation. Results revealed that bimodal stimulation did not produce an appreciable change in speech scores in quiet surroundings but showed a noticeable improvement in noisy conditions. Hence, the present study suggests that bimodal stimulation would benefit children with cochlear implants, especially in adverse listening conditions.
The addition of low-passed (LP) speech or even a tone following the fundamental frequency (F0) of speech has been shown to benefit speech recognition for cochlear implant (CI) users with residual acoustic hearing. The mechanisms underlying this benefit are still unclear. In this study, eight bimodal subjects (CI users with acoustic hearing in the non-implanted ear) and eight simulated bimodal subjects (using vocoded and LP speech) were tested on vowel and consonant recognition to determine the relative contributions of acoustic and phonetic cues, including F0, to the bimodal benefit. Several listening conditions were tested (CI/Vocoder, LP, TF0-env, CI/Vocoder + LP, CI/Vocoder + TF0-env). Compared with CI/Vocoder alone, adding LP speech significantly enhanced both consonant and vowel perception, whereas adding a tone following the F0 contour of the target speech and modulated with an amplitude envelope of the maximum frequency of the F0 contour (TF0-env) enhanced only consonant perception. Information transfer analysis revealed a dual mechanism in the bimodal benefit: the tone representing F0 provided voicing and manner information, whereas LP speech provided additional manner, place, and vowel formant information. The data from the actual bimodal subjects also showed that the degree of bimodal benefit depended on the cutoff and slope of the residual acoustic hearing.
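For readers unfamiliar with information transfer analysis, the sketch below shows the general idea (in the spirit of Miller-and-Nicely-style feature analyses): the proportion of stimulus information about a phonetic feature that is transmitted is estimated as the mutual information of the stimulus-response confusion matrix divided by the stimulus entropy. The confusion counts here are invented for illustration and are not data from this study.

    # Sketch only: relative information transfer for one phonetic feature.
    import numpy as np

    def relative_info_transfer(confusions):
        """confusions[i, j] = count of stimulus category i labelled as response category j."""
        p = confusions / confusions.sum()
        ps, pr = p.sum(axis=1), p.sum(axis=0)
        with np.errstate(divide="ignore", invalid="ignore"):
            mi = np.nansum(p * np.log2(p / np.outer(ps, pr)))
        h_stim = -np.nansum(ps * np.log2(ps))
        return mi / h_stim          # 1.0 = the feature is perfectly transmitted

    # Hypothetical voiced/voiceless confusion counts for one listening condition:
    voicing = np.array([[80, 20],
                        [15, 85]])
    print(f"relative voicing information transmitted: {relative_info_transfer(voicing):.2f}")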
Bimodal atomic force microscopy can provide high-resolution images of polymers. In the bimodal operation mode, two eigenmodes of the cantilever are driven simultaneously. When examining polymers, an effective mechanical contact between the tip and the sample is often required to obtain compositional contrast, so particular emphasis was placed on the repulsive regime of dynamic force microscopy. We thus investigated bimodal imaging on a polystyrene-block-polybutadiene diblock copolymer surface and on polystyrene. The attractive operation regime was only stable when the amplitude of the second eigenmode was kept small compared with the amplitude of the fundamental mode. To clarify the influence of the higher-eigenmode oscillation on image quality, the amplitude ratio of the two modes was systematically varied. Fourier analysis of the time series recorded during imaging showed frequency mixing; however, these spurious signals were at least two orders of magnitude smaller than the signals of the two driven eigenmodes. Thus, repulsive bimodal imaging of polymer surfaces yields good signal quality for amplitude ratios smaller than A01/A02 = 10:1 without affecting the topography feedback.
Streptococcus mutans regulates genetic competence through a complex network that receives inputs from a number of environmental stimuli, including two signaling peptides designated CSP and XIP. The response of the downstream competence genes to these inputs shows evidence of stochasticity and bistability and has been difficult to interpret. We have used microfluidic, single-cell methods to study how combinations of extracellular signals shape the response of comX, an alternative sigma factor governing expression of the late competence genes. We find that the composition of the medium determines which extracellular signal (XIP or CSP) can elicit a response from comX and whether that response is unimodal or bimodal across a population of cells. In a chemically defined medium, exogenous CSP does not induce comX, whereas exogenous XIP elicits a comX response from all cells. In complex medium, exogenous XIP does not induce comX, whereas CSP elicits a bimodal comX response from the population. Interestingly, the bimodal behavior required an intact copy of comS, which encodes the precursor of XIP. The comS-dependent capability for both unimodal and bimodal responses suggests that a constituent of complex medium, most likely peptides, interacts with a positive feedback loop in the competence regulatory network.
A significant fraction of newly implanted cochlear implant recipients use a hearing aid in their non-implanted ear. SCORE bimodal is a sound processing strategy developed for this configuration, aimed at normalising loudness perception and improving binaural loudness balance. Speech perception performance in quiet and in noise and sound localisation ability of six bimodal listeners were measured with and without application of SCORE. Speech perception in quiet was measured with acoustic-only, electric-only, or bimodal stimulation, at soft and normal conversational levels. For speech in quiet there was a significant improvement with application of SCORE. Speech perception in noise was measured for steady-state noise, fluctuating noise, or a competing talker, at conversational levels with bimodal stimulation. For speech in noise there was no significant effect of application of SCORE. Modelling of interaural loudness differences for a long-term-average-speech-spectrum-weighted click train indicated that left-right discrimination of sound sources can improve with application of SCORE. As SCORE was found to leave speech perception unaffected or to improve it, it seems suitable for implementation in clinical devices.
The frequency-lag hypothesis proposes that bilinguals have slowed lexical retrieval relative to monolinguals and in their nondominant language relative to their dominant language, particularly for low-frequency words. These effects arise because bilinguals divide their language use between 2 languages and use their nondominant language less frequently. We conducted a picture-naming study with hearing American Sign Language (ASL)–English bilinguals (bimodal bilinguals), deaf signers, and English-speaking monolinguals. As predicted by the frequency-lag hypothesis, bimodal bilinguals were slower, less accurate, and exhibited a larger frequency effect when naming pictures in ASL as compared with English (their dominant language) and as compared with deaf signers. For English there was no difference in naming latencies, error rates, or frequency effects for bimodal bilinguals as compared with monolinguals. Neither age of ASL acquisition nor interpreting experience affected the results; picture-naming accuracy and frequency effects were equivalent for deaf signers and English monolinguals. Larger frequency effects in ASL relative to English for bimodal bilinguals suggest that they are affected by a frequency lag in ASL. The absence of a lag for English could reflect the use of mouthing and/or code-blending...
One of the key goals in atomic force microscopy (AFM) imaging is to enhance material property contrast with high resolution. Bimodal AFM, where two eigenmodes are simultaneously excited, confers significant advantages over conventional single-frequency tapping mode AFM due to its ability to provide contrast between regions with different material properties under gentle imaging conditions. Bimodal AFM traditionally uses the first two eigenmodes of the AFM cantilever. In this work, the authors explore the use of higher eigenmodes in bimodal AFM (e.g., exciting the first and fourth eigenmodes). It is found that such operation leads to interesting contrast reversals compared to traditional bimodal AFM. A series of experiments and numerical simulations shows that the primary cause of the contrast reversals is not the choice of eigenmode itself (e.g., second versus fourth), but rather the relative kinetic energy between the higher eigenmode and the first eigenmode. This leads to the identification of three distinct imaging regimes in bimodal AFM. This result, which is applicable even to traditional bimodal AFM, should allow researchers to choose cantilever and operating parameters in a more rational manner in order to optimize resolution and contrast during nanoscale imaging of materials.
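A back-of-the-envelope way to see why the kinetic-energy ratio, rather than the mode number, matters (our own illustrative estimate, assuming roughly equal effective modal masses; not a calculation taken from the paper): the maximum kinetic energy stored in eigenmode n driven at amplitude A_n and resonance frequency f_n is approximately

    E_n \approx \tfrac{1}{2}\, m\, (2\pi f_n)^2 A_n^2,
    \qquad
    \frac{E_n}{E_1} \approx \left(\frac{f_n A_n}{f_1 A_1}\right)^{2}.

For a rectangular cantilever the flexural resonances fall near f_2 ≈ 6.3 f_1 and f_4 ≈ 34.4 f_1, so at equal drive amplitude the fourth eigenmode carries roughly (34.4/6.3)^2 ≈ 30 times more kinetic energy than the second; comparing modes at matched energy therefore requires rescaling the higher-mode amplitude accordingly.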
There are now many recipients of unilateral cochlear implants who have usable residual hearing in the nonimplanted ear. To avoid auditory deprivation and to provide binaural hearing, a hearing aid or a second cochlear implant can be fitted to that ear. This article addresses the question of whether better binaural hearing can be achieved with binaural/bimodal fitting (combining a cochlear implant and a hearing aid in opposite ears) or bilateral implantation. In the first part of this article, the rationale for providing binaural hearing is examined. In the second part, the literature on the relative efficacy of binaural/bimodal fitting and bilateral implantation is reviewed. Most studies comparing either mode of bilateral stimulation with unilateral implantation reported some binaural benefits in some test conditions on average but revealed that some individuals benefited, whereas others did not. There were no controlled comparisons between binaural/bimodal fitting and bilateral implantation and no evidence to support the efficacy of one mode over the other. In the third part of the article, a crossover trial of two adults who had binaural/bimodal fitting and who subsequently received a second implant is reported. The findings at 6 and 12 months after they received their second implant indicated that binaural function developed over time...
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data was collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.
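As a generic illustration of what "adaptively measured" SRTs involve, the sketch below implements a simple 1-down/1-up track of signal-to-noise ratio that converges near the 50%-correct point. The step size, scoring rule, and simulated listener are illustrative choices, not this study's procedure.

    # Sketch only: a 1-down/1-up adaptive SNR staircase for sentence SRTs.
    import numpy as np

    def srt_staircase(present_trial, start_snr=10.0, step=2.0, n_trials=20):
        """present_trial(snr) -> True if the sentence was repeated correctly."""
        snr, reversals, track = start_snr, [], []
        last_correct = None
        for _ in range(n_trials):
            correct = present_trial(snr)
            track.append(snr)
            if last_correct is not None and correct != last_correct:
                reversals.append(snr)
            last_correct = correct
            snr += -step if correct else step      # harder after a correct response
        # SRT estimate: mean SNR at the later reversals (tracks ~50% correct)
        return np.mean(reversals[-6:]) if len(reversals) >= 6 else np.mean(track)

    # Example with a simulated listener whose true SRT is 2 dB SNR:
    rng = np.random.default_rng(2)
    simulated = lambda snr: rng.random() < 1 / (1 + np.exp(-(snr - 2.0)))
    print(f"estimated SRT ≈ {srt_staircase(simulated):.1f} dB SNR")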
Gene expression can be highly heterogeneous in isogenic cell populations. An extreme type of heterogeneity is the so-called bistable or bimodal expression, whereby a cell can differentiate into two alternative expression states. Stochastic fluctuations of protein levels, also referred to as noise, provide the necessary source of heterogeneity that must be amplified by specific genetic circuits in order to obtain a bimodal response. A classical model of bimodal differentiation is the activation of genetic competence in Bacillus subtilis. The competence transcription factor ComK activates transcription of its own gene, and an intricate regulatory network controls the switch to competence and ensures its reversibility. However, it is noise in ComK expression that determines which cells activate the ComK autostimulatory loop and become competent for genetic transformation. Despite its important role in bimodal gene expression, noise remains difficult to investigate due to its inherent stochastic nature. We adapted an artificial autostimulatory loop that bypasses all known ComK regulators to screen for possible factors that affect noise. This led to the identification of a novel protein Kre (YkyB) that controls the bimodal regulation of ComK. Interestingly...
Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending—expressions in both speech and sign simultaneously—an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children's language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations...
Early bilingual exposure, especially exposure to two languages in different modalities such as speech and sign, can profoundly affect an individual's language, culture, and cognition. Here we explore the hypothesis that bimodal dual-language exposure can also affect the brain's organization for language. These changes occur across brain regions universally important for language and in parietal regions especially critical for sign language (Newman et al., 2002). We investigated three groups of participants (N = 29) who completed a word repetition task in American Sign Language (ASL) during fNIRS brain imaging: (1) hearing ASL-English bimodal bilinguals (n = 5), (2) deaf ASL signers (n = 7), and (3) English monolinguals naïve to sign language (n = 17). The key finding of the present study is that bimodal bilinguals showed reduced activation in left parietal regions relative to deaf ASL signers when asked to use only ASL. In contrast, this group of bimodal signers showed greater activation in left temporo-parietal regions relative to English monolinguals when asked to switch between their two languages (Kovelman et al., 2009). Converging evidence now suggests that bimodal bilingual experience changes the brain bases of language...
There is no doubt that cochlear implants have improved the spoken language abilities of children with hearing loss, but delays persist. Consequently, it is imperative that new treatment options be explored. This study evaluated one aspect of treatment that might be modified, namely the provision of bilateral implants or bimodal stimulation. A total of 58 children with at least one implant were tested at 42 months of age on four language measures spanning a continuum from basic to generative in nature. When children were grouped by the kind of stimulation they had at 42 months (one implant, bilateral implants, or bimodal stimulation), no differences across groups were observed. This was true even when groups were constrained to only children who had at least 12 months to acclimatize to their stimulation configuration. However, when children were grouped according to whether or not they had spent any time with bimodal stimulation (either consistently since their first implant or as an interlude before receiving a second), advantages were found for children who had some bimodal experience, but those advantages were restricted to language abilities that are generative in nature. Thus, previously reported benefits of simultaneous bilateral implantation early in a child's life may not extend to generative language. In fact...
The globular cluster (GC) systems of many galaxies reveal bimodal optical color distributions. Based on stellar evolutionary models and on the bimodal colors and metallicities of Galactic GCs, this is thought to reflect an underlying bimodal metallicity distribution. However, stars at many different phases of stellar evolution contribute to optical light. The I-H color is a much cleaner tracer of metallicity because it primarily samples the metallicity-sensitive giant branch. Therefore, we use deep HST NICMOS H-band and WFPC2 optical observations of M87 GCs to study their metallicity distribution. The M87 clusters are bimodal in I-H, for which there is no known physical explanation other than a bimodal metallicity distribution. Moreover, the two modes defined by the B-I and I-H colors comprise roughly the same two sets of objects, confirming that optical colors also primarily trace metallicity. This is inconsistent with a recent suggestion, based on one model of metallicity effects on the horizontal branch, that bimodality arises from an underlying unimodal metallicity distribution through a specific color-metallicity relation. We also find no discernible variation in the peak colors of the M87 GCs out to roughly 75 kpc due to the declining ratio of red-to-blue GCs...
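The statistical machinery is not spelled out in the summary above, but color bimodality of this kind is commonly assessed by comparing one- and two-component mixture fits. The following sketch illustrates the idea on synthetic colors (illustrative values only; this is not the M87 data and not necessarily the authors' method):

    # Sketch only: compare 1- vs 2-component Gaussian mixture fits to a color distribution.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # Synthetic I-H colors drawn from a blue (metal-poor) and a red (metal-rich) mode.
    colors = np.concatenate([rng.normal(0.65, 0.08, 300),
                             rng.normal(0.95, 0.08, 200)]).reshape(-1, 1)

    for k in (1, 2):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(colors)
        print(f"{k} component(s): BIC = {gmm.bic(colors):.1f}")
    # A substantially lower BIC for k = 2 favors a bimodal color distribution.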
First-order temporal logics are notorious for their bad computational behaviour. It is known that even the two-variable monadic fragment is highly undecidable over various linear timelines, and that over branching time even one-variable fragments can be undecidable. However, there have been several attempts at finding well-behaved fragments of first-order temporal logics and related temporal description logics, mostly either by restricting the available quantifier patterns or by considering sub-Boolean languages. Here we analyse seemingly `mild' extensions of decidable one-variable fragments with counting capabilities, interpreted in models with constant, decreasing, and expanding first-order domains. We show that over most classes of linear orders these logics are (sometimes highly) undecidable, even without constant and function symbols, and with the sole temporal operator `eventually'.
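As a rough rendering of the kind of language involved (our notation, and a simplification; not necessarily the exact syntax used by the authors), formulas of a one-variable monadic fragment with counting quantifiers and `eventually' as the only temporal operator can be generated by

    \varphi ::= P(x) \mid \neg\varphi \mid \varphi \wedge \varphi \mid \exists^{\geq k} x\, \varphi \mid \Diamond_F \varphi

where P ranges over unary predicate symbols, \exists^{\geq k} x\, \varphi says that at least k elements of the (possibly time-dependent) first-order domain satisfy \varphi, and \Diamond_F \varphi says that \varphi holds at some later time point.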
We establish connections with bimodal logics over 2D product structures having linear and `difference' (inequality) component relations, and prove our results in this bimodal setting. We prove a general result: satisfiability over many classes of bimodal models with commuting linear and difference relations is undecidable. As a by-product, we also obtain new examples of finitely axiomatisable but Kripke incomplete bimodal logics. Our results generalise similar lower bounds on bimodal logics over products of two...