Search results for: inter-hemispheric asymmetry
21 AAV-Mediated Human α-Synuclein Expression in a Rat Model of Parkinson's Disease – Further Characterization of PD Phenotype, Fine Motor Functional Effects as Well as Neurochemical and Neuropathological Changes over Time
Authors: R. Pussinen, V. Jankovic, U. Herzberg, M. Cerrada-Gimenez, T. Huhtala, A. Nurmi, T. Ahtoniemi
Abstract:
Targeted over-expression of human α-synuclein using viral-vector-mediated gene delivery into the substantia nigra of rats and non-human primates has been reported to lead to dopaminergic cell loss and the formation of α-synuclein aggregates reminiscent of Lewy bodies. We have previously shown how AAV-mediated expression of α-synuclein produces a chronic phenotype in rats over a 16-week follow-up period. In the context of these findings, we attempted to further characterize the long-term PD-related functional and motor deficits, as well as the neurochemical and neuropathological changes, in the AAV-mediated α-synuclein transfection model in rats during a chronic follow-up period. Different titers of recombinant AAV expressing human α-synuclein (A53T) were stereotaxically injected unilaterally into the substantia nigra of Wistar rats. Rats were allowed to recover for 3 weeks prior to initial baseline behavioral testing with the rotational asymmetry test, stepping test and cylinder test. A similar behavioral test battery was applied again at weeks 5, 9, 12 and 15. In addition to traditionally used rat PD model tests, the MotoRater test system, a high-speed kinematic gait-monitoring platform, was applied during the follow-up period, with evaluation focused on differences in gait between groups. Tremor analysis was performed at weeks 9, 12 and 15. In addition to behavioral end-points, dopamine and its metabolites were evaluated neurochemically in the striatum. Furthermore, the integrity of the dopamine transporter (DAT) system was evaluated using 123I-β-CIT and SPECT/CT imaging at weeks 3, 8 and 12 after AAV-α-synuclein transfection. Histopathology was examined from end-point samples at 3 or 12 weeks after AAV-α-synuclein transfection to evaluate dopaminergic cell viability and microglial (Iba-1) activation status in the substantia nigra using stereological analysis techniques. This study focused on the characterization and validation of a previously published AAV-α-synuclein transfection model in rats, with the addition of novel end-points. We present the long-term phenotype of AAV-α-synuclein-transfected rats with traditionally used behavioral tests, but also using novel fine motor analysis techniques and tremor analysis, which provide new insight into the unilateral effects of AAV-α-synuclein transfection. We also present data on neurochemical and neuropathological end-points for the dopaminergic system in the model and how well they correlate with the behavioral phenotype.
Keywords: adeno-associated virus, alpha-synuclein, animal model, Parkinson's disease
Procedia PDF Downloads 295
20 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. Opponents, on the other hand, argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016, collected from the actual sales price registration system of the Department of Land Administration (DLA), are used in this study. The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The result also shows that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential, indicating that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. Classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatial-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A body of literature applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatial-related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses: the effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically weighted regression
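To make the estimation strategy concrete, the following is a minimal sketch of a geographically weighted regression with a spatial fixed-effect dummy, run on synthetic data; the Gaussian kernel, the bandwidth, and all variable names are illustrative assumptions, not the authors' actual specification.

```python
import numpy as np

# GWR sketch: local weighted least squares with a Gaussian distance
# kernel, plus a crude district dummy standing in for the spatial
# fixed effect. All data are synthetic.
rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))        # property locations
flood = rng.uniform(0, 1, n)                     # flood-potential score
district = (coords[:, 0] > 5).astype(float)      # fixed-effect dummy
# True price surface: the flood discount varies with location
price = 100 - (5 + 2 * coords[:, 1]) * flood + 10 * district + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), flood, district])
bandwidth = 2.0

def local_coefs(i):
    """Weighted least squares centered on observation i."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ price, rcond=None)
    return beta

betas = np.array([local_coefs(i) for i in range(n)])
print("local flood coefficient ranges from %.2f to %.2f"
      % (betas[:, 1].min(), betas[:, 1].max()))
```

The spread in the local flood coefficient is exactly the spatial heterogeneity that the abstract argues a single OLS coefficient would mask.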
Procedia PDF Downloads 290
19 Violence against Women: A Study on the Aggressors' Profile
Authors: Giovana Privatte Maciera, Jair Izaías Kappann
Abstract:
Introduction: Violence against women is a complex phenomenon that can accompany a woman throughout her life and is the result of a social, cultural, political and religious construction based on differences between the genders. Those differences are felt mainly because of the still-present patriarchal system, which naturalizes and legitimates the asymmetry of power. As a consequence of women's long historical and collective effort for legislation against the impunity of violence against women on the national scene, a law known as Maria da Penha was enacted in 2006. The law was created as a protective measure for women who are victims of violence and, consequently, for the punishment of the aggressor. Methodology: Analysis of police inquiries filed with the Police Station for the Defense of Women of the city of Assis, with formal judicial authorization, covering the period 2013 to 2015. Content analysis and the theoretical framework of psychoanalysis were used to evaluate the results. Results and Discussion: The final analysis of the inquiries demonstrated that violence against women is reproduced by society and that the aggressor is, in most cases, a member of the victim's own family, mainly the current or former spouse. The most common kinds of aggression were threats of bodily harm and physical violence, normally accompanied by psychological violence, which is the most painful for the victims. Most of the aggressors were white, older than the victim, employed, and had only primary schooling. Unlike expected, however, a minority of the aggressors were users of alcohol and/or drugs or had children in common with the victim. There is a contrast between the number of victims who admitted having suffered some type of violence earlier from the same aggressor and the number of victims who had registered an occurrence before. The aggressors often use a discourse of denial in their testimony or try to justify their act as if the blame lay with the victim. Several interacting factors are believed to influence the aggressor to commit the abuse, including psychological, personal and sociocultural factors. One hypothesis is that the aggressor has a history of violence in the family of origin. After the aggressor is tried, whether convicted or not, there is usually no rehabilitation plan or supervision that would enable his change. Conclusions: The study shows the importance of examining the aggressor's characteristics and the reasons that lead him to commit such violence, making possible the implementation of appropriate treatment to prevent and reduce the aggressions, as well as the creation of programs and actions that enable communication and understanding concerning the theme. This is because recurrence is still high, since the punitive system is not enough and the law remains ineffective and inefficient in certain aspects and in its own functioning. A compulsion to repeat is perceived in both victims and aggressors, who almost always end up involved in disturbed and violent relationships characterized by a relation of subordination-dominance.
Keywords: aggressors' profile, gender equality, Maria da Penha law, violence against women
Procedia PDF Downloads 334
18 Power Asymmetry and Major Corporate Social Responsibility Projects in Mhondoro-Ngezi District, Zimbabwe
Authors: A. T. Muruviwa
Abstract:
Empirical studies of the current CSR agenda have been dominated by literature from the North, at the expense of the nations of the South where most TNCs operate. Owing to the limitations of the current discourse, dominated by Western ideas such as voluntarism, philanthropy, the business case and economic gains, scholars have been calling for a new CSR agenda that is South-centred and addresses the needs of developing nations. The development theme has dominated recent literature as scholars concerned with the relationship between business and society have tried to understand its connection with CSR. Despite a plethora of literature on the roles of corporations in local communities and the impact of CSR initiatives, there is a lack of adequate empirical evidence to help us understand the nexus between CSR and development. For all the claims made about the positive and negative consequences of CSR, there is surprisingly little information about the outcomes it delivers. This study is a response to those claims about the developmental aspect of CSR in developing countries. It offers an empirical basis for assessing the major CSR projects fulfilled by a major mining company, Zimplats, in Mhondoro-Ngezi, Zimbabwe. The neo-liberal idea of capitalism and market domination has empowered TNCs to stamp their authority in developing countries. TNCs have made their mark in developing nations, stamping their global private authority and rivalling or implicitly challenging the state in many functions. This dominance of corporate power raises great concern over tendencies toward environmental, social and human rights abuses, as well as over how to make TNCs increasingly accountable. The hegemonic power of TNCs in developing countries has had a tremendous impact on overall CSR practices. While TNCs are key drivers of globalization, they may act responsibly mainly in their Global Northern home countries, where there is a combination of legal mechanisms and the fear of civil society activism associated with corporate scandals. Using a triangulated approach combining qualitative and quantitative methods, the study found that most CSR projects in Zimbabwe are dominated and directed by Zimplats because of the power it possesses. Most of the major CSR projects benefit the mining company, as they serve its business plans. What can be deduced from the study is that the infrastructural development initiatives by Zimplats confirm that CSR is a tool to advance business obligations. This shows that although proponents of CSR might claim that business has a mandate for social obligations to society, we must not forget the dominant idea that the primary function of CSR is to enhance the firm's profitability.
Keywords: hegemonic power, projects, reciprocity, stakeholders
Procedia PDF Downloads 254
17 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of economics, finance and actuarial science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, it is preferable to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least squares estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed and, hence, the well-known least squares method is considered suitable and preferred for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is carried out for multiple linear regression models with random errors exhibiting a non-normal pattern. Through extensive simulation it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared with the widely used least squares estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least squares estimates. Several examples are provided from the areas of economics and finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
Keywords: least squares estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
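The linearization at the heart of the method can be written compactly. A schematic sketch with assumed notation (not taken verbatim from the paper): g denotes the intractable term in the likelihood equations, z_(i) the standardized ordered variates, and t_(i) the expected quantiles about which the two-term Taylor expansion is taken. After the substitution, the likelihood equations become linear in the parameters and solve in closed form.

```latex
\begin{align}
  g\!\left(z_{(i)}\right) &\simeq \alpha_i + \beta_i\, z_{(i)}, \\
  \beta_i = \left.\frac{\mathrm{d}g(z)}{\mathrm{d}z}\right|_{z = t_{(i)}},
  \qquad
  \alpha_i &= g\!\left(t_{(i)}\right) - \beta_i\, t_{(i)} .
\end{align}
```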
Procedia PDF Downloads 397
16 Traditional Medicine in Children: A Significant Cause of Morbidity and Mortality
Authors: Atitallah Sofien, Bouyahia Olfa, Romdhani Meriam, Missaoui Nada, Ben Rabeh Rania, Yahyaoui Salem, Mazigh Sonia, Boukthir Samir
Abstract:
Introduction: Traditional medicine refers to a diverse range of therapeutic practices and knowledge systems that different cultures have employed over an extended period to maintain and restore health. These practices can involve herbal remedies, acupuncture, massage, and alternative healing methods that deviate from conventional medical approaches. In Tunisia, unidentified utensils are often used to scratch the inside of infants' oral cavities, in the belief that this widens the oral cavity for better breathing and swallowing. However, these practices can be risky and may jeopardize the patients' prognosis or even their lives. Aim: To report the case of a nine-month-old infant admitted to the pediatric department, and subsequently to the intensive care unit, for a peritonsillar abscess following the use of an unidentified tool to scrape the interior of the oral cavity. Case Report: A 9-month-old infant with no particular medical history was admitted for upper airway respiratory distress and a fever persisting for 4 days. On clinical examination, he had a respiratory rate of 70 cycles per minute with an oxygen saturation of 97% and subcostal retractions, along with a heart rate of 175 beats per minute. His white blood cell count was 40,960/mm³, and his C-reactive protein was 250 mg/L. Given the severity of the clinical presentation, the infant was transferred to the intensive care unit, intubated, and mechanically ventilated. A cervico-thoracic CT scan was performed, revealing a ruptured 18 mm left peritonsillar abscess in the oropharynx associated with cellulitis of the retropharyngeal space. Otorhinolaryngoscopic examination revealed an asymmetry of the left lateral wall of the oropharynx with a fistula behind the posterior pillar. Dissection of the collection cavity was performed, allowing the drainage of 2 ml of pus; the culture was negative. The patient received cefotaxime in combination with metronidazole and gentamicin for 10 days, followed by a switch to amoxicillin-clavulanic acid for 7 days. The patient was extubated after 4 days of treatment, and the clinical and radiological course was favorable. Conclusions: Traditional medicine remains risky because of the lack of scientific evidence and the potential for injuries and transmission of infectious diseases, especially in children, who constitute a vulnerable population. Parents should therefore consult healthcare professionals and rely on evidence-based care.
Keywords: children, peritonsillar abscess, traditional medicine, respiratory distress
Procedia PDF Downloads 63
15 Formation of the Water Assisted Supramolecular Assembly in the Transition Structure of Organocatalytic Asymmetric Aldol Reaction: A DFT Study
Authors: Kuheli Chakrabarty, Animesh Ghosh, Atanu Roy, Gourab Kanti Das
Abstract:
The aldol reaction is an important class of carbon-carbon bond-forming reactions. One popular way to impose asymmetry in the aldol reaction is the introduction of a chiral auxiliary that binds the approaching reactants and creates dissymmetry in the reaction environment, which finally translates into enantiomeric excess in the aldol products. The last decade has witnessed the use of natural amino acids as chiral auxiliaries to control stereoselectivity in various carbon-carbon bond-forming processes. In this context, L-proline was found to be an effective organocatalyst in asymmetric aldol additions. In the last few decades the use of water as a solvent or co-solvent in asymmetric organocatalytic reactions has increased sharply. Simple amino acids like L-proline do not catalyze the asymmetric aldol reaction in aqueous medium; moreover, in organic solvents a high catalyst loading (~30 mol%) is required to achieve moderate to high asymmetric induction. In this context, huge efforts have been made to modify L-proline and 4-hydroxy-L-proline to prepare organocatalysts for aqueous-medium asymmetric aldol reactions. Here, we report the results of our DFT calculations on the asymmetric aldol reaction of benzaldehyde, p-NO2-benzaldehyde and t-butyraldehyde with a number of ketones, using L-proline hydrazide as organocatalyst under wet, solvent-free conditions. The Gaussian 09 program package and the GaussView program were used for the present work. Geometry optimizations were performed using the B3LYP hybrid functional and the 6-31G(d,p) basis set. Transition structures were confirmed by Hessian and IRC calculations. As the reactions were carried out under solvent-free conditions, no bulk solvent effects were studied theoretically. The present study reveals, for the first time, the direct involvement of two water molecules in the aldol transition structures. In the TS, the enamine and the aldehyde are connected through hydrogen bonds with the assistance of two intervening water molecules, forming a supramolecular network. The formation of this type of supramolecular assembly is possible due to the presence of the protonated -NH2 group in the L-proline hydrazide moiety, which is responsible for the favorable entropy contribution to the aldol reaction. The present study also reveals that the water-assisted TS is energetically more favorable than the TS without any water molecule. It can be concluded that insertion of a polar group capable of hydrogen bonding into the L-proline skeleton can lead to a favorable aldol reaction with significantly high enantiomeric excess under wet, solvent-free conditions by reducing the activation barrier of the reaction.
Keywords: aldol reaction, DFT, organocatalysis, transition structure
Procedia PDF Downloads 434
14 Modeling of Alpha-Particles' Epigenetic Effects in Short-Term Test on Drosophila melanogaster
Authors: Z. M. Biyasheva, M. Zh. Tleubergenova, Y. A. Zaripova, A. L. Shakirov, V. V. Dyachkov
Abstract:
In recent years, interest in ecogenetic and biomedical problems related to the effects of radon and its daughter decay products on the population has increased significantly. Of particular interest is the assessment of the consequences of irradiation in hazardous radon areas, which include the Almaty region owing to its large number of tectonic faults that enhance radon emanation. In this connection, the purpose of this work was to study the genetic effects of exposure to supernormal radon doses in an alpha-radiation model. Irradiation does not affect the growth of the cell, but rather its ability to differentiate. In addition, irradiation can lead to somatic mutations, morphoses and modifications. These damages most likely arise from changes in the composition of the substances of the cell. Such changes are epigenetic, since they affect the regulatory processes of ontogenesis. Variability in the expression of regulatory genes refers to conditional mutations that modify the formation of signs of intraspecific similarity. Characteristic features of these conditional mutations are the dominant type of their manifestation, phenotypic asymmetry and their instability across generations. Currently, the terms "morphosis" and "modification" are used to describe epigenetic variability, which is maintained in Drosophila melanogaster cultures using linked X-chromosomes, with the mutant X-chromosome transmitted along the paternal line. In this paper, we investigated the epigenetic effects of alpha particles, whose source in nature is mainly radon and its daughter decay products. In the experiment, the isotope plutonium-238 (Pu-238), generating alpha radiation with an energy of about 5.5 MeV, was used as the source of alpha particles. In the first generation (F1), deformities or morphoses were found, which can be called "radiation syndromes" or mutations whose manifestation is similar to the pleiotropic action of genes. The proportion of morphoses was 1.8% in the experiment and 0.4% in the control. The morphoses in flies of the first and second generations appeared as black spots or melanomas on different parts of the imago body; "generalized" melanomas; curled, curved wings; a shortened wing; a bubble on one wing; absence of one wing; deformation of the thorax; interruption and disruption of tergite patterns; disruption of the distribution of eye facets and bristles; and absence of pigmentation of the second and third legs. Statistical analysis by the chi-square method showed that the difference between experiment and control was significant at P ≤ 0.01. On this basis, it can be considered that alpha particles, which in the environment are mainly generated by radon and its isotopes, have a mutagenic effect that manifests itself mainly in the formation of morphoses or deformities.
Keywords: alpha-radiation, genotoxicity, morphoses, radioecology, radon
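As a check on the reported significance, a minimal sketch of the chi-square comparison follows; the absolute counts are hypothetical, chosen only to reproduce the stated proportions (1.8% vs. 0.4%), since the abstract does not report sample sizes.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts reproducing the reported morphosis rates;
# the actual sample sizes are not given in the abstract.
experiment = [36, 1964]   # morphoses vs. normal flies (1.8% of 2000)
control = [8, 1992]       # morphoses vs. normal flies (0.4% of 2000)

chi2, p, dof, expected = chi2_contingency([experiment, control])
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # significant at P <= 0.01 for these counts
```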
Procedia PDF Downloads 152
13 Spectroscopy and Electron Microscopy for the Characterization of CdSxSe1-x Quantum Dots in a Glass Matrix
Authors: C. Fornacelli, P. Colomban, E. Mugnaioli, I. Memmi Turbanti
Abstract:
When semiconductor particles are reduced in scale to nanometer dimensions, their optical and electro-optical properties differ strongly from those of bulk crystals of the same composition. Since sampling is often not allowed for cultural heritage artefacts, the potentialities of two non-invasive techniques, Raman and fiber optic reflectance spectroscopy (FORS), have been investigated, and the results of analyses of original glasses of different colours (from yellow to orange and deep red) and periods (from the second decade of the 20th century to the present day) are reported in this study. To evaluate the potential of non-invasive techniques for investigating the structure and distribution of nanoparticles dispersed in a glass matrix, scanning electron microscopy (SEM) with energy-dispersive spectroscopy (EDS) mapping, together with transmission electron microscopy (TEM) and electron diffraction tomography (EDT), have also been used. Raman spectroscopy allows a fast and non-destructive measurement of the quantum dots' composition and size through the evaluation of the frequencies and the broadening/asymmetry of the LO phonon bands, respectively, although the important role of the compressive strain arising from the glass matrix and the possible diffusion of zinc from the matrix into the nanocrystals should be taken into account when considering the optical-phonon frequency values. The incorporation of Zn has been inferred from an upward shift of the LO band related to the most abundant anion (S or Se), while the roles of surface phonons and of confinement-induced scattering by phonons with non-zero wavevectors in the broadening of the Raman peaks have been verified. The optical band gap varies from 2.42 eV (pure CdS) to 1.70 eV (CdSe). For the compositional range 0.2 ≤ x ≤ 0.5, the presence of two absorption edges has been related to the contributions of both pure CdS and the CdSxSe1-x solid solution; this particular feature is probably due to the presence of unaltered cubic zinc-blende CdS structures that do not take part in the formation of the solid solution, which occurs only between hexagonal CdS and CdSe. Moreover, the band-edge tailing originating from the disorder due to the formation of weak bonds, characterized by the Urbach edge energy, has been studied and, together with the FWHM of the Raman signal, has been taken as a good parameter for evaluating the degree of topological disorder. SEM-EDS mapping showed a peculiar distribution of the major constituents of the glass matrix (fluxes and stabilizers), especially in those samples where a layered structure had been assumed on the basis of the spectroscopic study. Finally, TEM-EDS and EDT were used to obtain high-resolution information about the nanocrystals (NCs) and the heterogeneous glass layers. The presence of ZnO NCs (< 4 nm) dispersed in the matrix was verified for most of the samples, while for those samples where disorder due to a more complex distribution of NC size and/or composition had been assumed, TEM clearly confirmed most of the assumptions made with the spectroscopic techniques.
Keywords: CdSxSe1-x, EDT, glass, spectroscopy, TEM-EDS
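For reference, the Urbach-edge relation commonly used to quantify such band-edge tailing is shown below; the abstract does not spell out the expression, so this standard form is an assumption. Here α is the absorption coefficient, E the photon energy, and E_U the Urbach energy, which grows with topological disorder.

```latex
\begin{equation}
  \alpha(E) = \alpha_0 \exp\!\left(\frac{E - E_0}{E_U}\right)
\end{equation}
```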
Procedia PDF Downloads 299
12 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate
Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim
Abstract:
Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for the quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks over a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for the quantification of HCQ sulfate by high performance liquid chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry, using the response surface graphic from the Design of Experiments (DoE) and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on QbD concepts. The DS was created with the TF in a range between 0.98 and 1.2, in order to establish the ideal analytical conditions. Changes were made to the composition of the USP mobile phase (USP-MP): USP-MP:methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), the flow rate (0.8, 1.0 and 1.2 mL·min⁻¹) and the oven temperature (30, 35, and 40 ºC). The USP method allowed the quantification of the drug only over a long run time (40-50 minutes). In addition, the method uses a high flow rate (1.5 mL·min⁻¹), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since co-elution of the isomers can make peak integration unreliable. Therefore, optimization was undertaken in order to reduce the analysis time, aiming at better peak resolution and TF. For the optimized method, analysis of the response surface plot made it possible to confirm the ideal analytical conditions: 45 ºC, 0.8 mL·min⁻¹ and 80:20 USP-MP:methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak and a TF value of 1.17. This promotes good co-elution of the isomers of HCQ, ensuring an accurate quantification of the raw material as a racemic mixture. The method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and thereby further reducing solvent consumption and, consequently, the cost of analysis. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding retention time and, especially, peak resolution. The higher resolution of the chromatographic peaks supports the implementation of the method for quantification of the drug as a racemic mixture, without requiring separation of the isomers.
Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic
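The 3³ factorial design is small enough to enumerate directly. A minimal sketch, with the factor levels taken from the abstract and all variable names chosen here for illustration:

```python
from itertools import product

# Three factors at three levels each give 3^3 = 27 runs; the
# tailing-factor window 0.98-1.2 defines the design space (DS).
mobile_phase = ["90:10", "80:20", "70:30"]  # USP-MP : methanol (v/v)
flow_ml_min = [0.8, 1.0, 1.2]               # flow rate, mL/min
temp_c = [30, 35, 40]                       # oven temperature, deg C

runs = list(product(mobile_phase, flow_ml_min, temp_c))
print(len(runs), "runs")                    # 27

def in_design_space(tailing_factor):
    """Acceptance criterion applied to each run's measured TF."""
    return 0.98 <= tailing_factor <= 1.2
```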
Procedia PDF Downloads 639
11 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries
Authors: Eva Masson, Andrea Kübler
Abstract:
Anxiety disorders (AD) are the most prevalent psychological disorders. However, far from all affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (according to the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common approach to simulating such disorders in a lab setting, with most paradigms focusing on the relationship between behavior and neurophysiology. Approach-avoidance conflict tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of these tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS. However, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-Maze paradigm, a motivational-conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational-conflict desktop task consisted of three blocks of repeated trials. Each block was designed to elicit a slightly different behavioral pattern, to increase the chances of eliciting conflict. The behavioral patterns were, however, similar enough to allow comparison of the number of trials categorized as 'overt conflict' between blocks. Results: Overt conflict behavior was exhibited in all blocks, but on average for under 10% of the trials in each block. However, changing the order of the paradigms successfully introduced a 'reset' of the conflict process, thereby providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to the other trials. More specifically, we expect elevated alpha-frequency power in the left frontal electrodes at around 200 ms post-cueing compared to the right (i.e., relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal asymmetry discussion and its relationship with the BIS. Furthermore, since the present task focuses on a very particular type of motivational approach-avoidance conflict, it opens the door to further variations of the paradigm that introduce the different kinds of conflict involved in AD. Even though its application as a potential biomarker appears difficult, because of the individual reliability of both the task and the peak frequency in the alpha range, we hope to open the discussion on task robustness for future neuromodulation and neurofeedback applications.
Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG
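The asymmetry measure implied here is conventionally computed as a log-ratio of alpha power at homologous frontal sites; the abstract does not define an index, so the following standard form (e.g., over F4/F3) is an assumption. Because alpha power varies inversely with cortical activity, a positive value indicates relatively greater left-frontal activity.

```latex
\begin{equation}
  \mathrm{FAA} \;=\; \ln P_{\alpha}^{\mathrm{right}} \;-\; \ln P_{\alpha}^{\mathrm{left}}
\end{equation}
```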
Procedia PDF Downloads 38
10 Corporate Governance and Disclosure Practices of Listed Companies in the ASEAN: A Conceptual Overview
Authors: Chen Shuwen, Nunthapin Chantachaimongkol
Abstract:
Since the world has moved into a transitional period known as globalization, the business environment is now more complicated than ever before. Corporate information has become a matter of great importance for stakeholders seeking to understand the current situation. As a result, the concept of corporate governance has been broadly introduced to manage and control the affairs of corporations, while businesses are required to disclose both financial and non-financial information to the public via various communication channels such as the annual report, the financial report, the company's website, etc. However, several other issues related to asymmetric information, such as moral hazard and adverse selection, still occur intensively in workplaces. To prevent such problems in business, it is necessary to understand which factors strengthen transparency, accountability, fairness, and responsibility. Against this background, this paper aims to propose a conceptual framework enabling an investigation of how corporate governance mechanisms influence the disclosure efficiency of listed companies in the Association of Southeast Asian Nations (ASEAN), and of the factors that should be considered for the further development of good practices, particularly in regard to voluntary disclosure. To achieve this purpose, an extensive literature review is applied as the research methodology, divided into three main steps. Firstly, the theories involved in both corporate governance and disclosure practices, such as agency theory, contract theory, signaling theory, moral hazard theory, and information asymmetry theory, are examined to provide theoretical background. Secondly, the relevant literature, covering multiple perspectives on corporate governance, its attributes and their roles in business processes, the influence of corporate governance mechanisms on business performance, and the factors determining corporate governance characteristics and capability, is reviewed to outline the parameters that should be included in the proposed model. Thirdly, the well-known OECD principles and previous empirical studies on corporate disclosure procedures are evaluated to identify similarities to and differences from disclosure patterns in the ASEAN. Following the literature review, abundant factors and variables emerge. Additional critical factors that also have an impact on disclosure behavior are addressed in two groups: the first comprises factors linked to national characteristics, such as the quality of the national code, legal origin, culture, and the level of economic development; the second comprises factors referring to firm characteristics, such as ownership concentration, ownership rights, and the controlling group. However, because of research limitations, only selected literature is chosen and summarized to form part of the conceptual framework that explores the relationship between corporate governance and the disclosure practices of listed companies in the ASEAN.
Keywords: corporate governance, disclosure practice, ASEAN, listed company
Procedia PDF Downloads 192
9 Graphene Metamaterials Supported Tunable Terahertz Fano Resonance
Authors: Xiaoyong He
Abstract:
The manipulation of THz waves is still a challenging task due to the lack of natural materials that interact with them strongly. Designed by tailoring the characteristics of unit cells (meta-molecules), metamaterials (MMs) may solve this problem. However, because of Ohmic and radiation losses, the performance of MM devices suffers from dissipation and a low quality factor (Q-factor). This dilemma may be circumvented by Fano resonance, which arises from the destructive interference between a bright continuum mode and a dark discrete mode (or a narrow resonance). Unlike the symmetric Lorentzian spectral curve, a Fano resonance exhibits a distinctly asymmetric line shape, an ultrahigh quality factor, and steep variations in the spectral curves. Fano resonance is usually realized through symmetry breaking. However, if concentric double rings (DR) are placed close to each other, the near-field coupling between them gives rise to two hybridized modes (a bright mode and a narrowband dark mode) because of the local asymmetry, resulting in the characteristic Fano line shape. Furthermore, from a practical viewpoint, it is highly desirable to modulate Fano spectral curves conveniently, which is an important and interesting research topic. For current Fano systems, tunable spectral curves can be realized by adjusting the geometrical structural parameters or the magnetic field biasing a ferrite-based structure. But due to the limited dispersion properties of active materials, it remains difficult to tailor Fano resonance conveniently with fixed structural parameters. With its favorable properties of extreme confinement and high tunability, graphene is a strong candidate to achieve this goal. The DR structure supports the excitation of so-called "trapped modes", with the merits of structural simplicity and high-quality resonances in thin structures. By depositing graphene circular DRs on a SiO2/Si/polymer substrate, tunable Fano resonance has been theoretically investigated in the terahertz regime, including the effects of the graphene Fermi level, the structural parameters and the operation frequency. The results show that the pronounced Fano peak can be efficiently modulated because of the strong coupling between the incident waves and the graphene ribbons. As the Fermi level increases, the peak amplitude of the Fano curve increases, and the resonant peak shifts to higher frequency. The amplitude modulation depth of the Fano curves is about 30% when the Fermi level varies over the range 0.1-1.0 eV. The optimum gap distance between the DRs is about 8-12 μm, where the figure of merit peaks. As the graphene ribbon width increases, the Fano spectral curves broaden and the resonant peak blue-shifts. The results are very helpful for developing novel graphene plasmonic devices, e.g., sensors and modulators.
Keywords: graphene, metamaterials, terahertz, tunable
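The asymmetric line shape discussed above is captured by the standard Fano formula, reproduced here for reference (the abstract itself does not write it out): q is the asymmetry parameter, E_r the resonance energy, and Γ its linewidth; q → ∞ recovers the symmetric Lorentzian.

```latex
\begin{equation}
  \sigma(E) \propto \frac{(\epsilon + q)^2}{\epsilon^2 + 1},
  \qquad
  \epsilon = \frac{2\,(E - E_r)}{\Gamma}
\end{equation}
```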
Procedia PDF Downloads 344
8 Averting a Financial Crisis through Regulation, Including Legislation
Authors: Maria Krambia-Kapardis, Andreas Kapardis
Abstract:
The paper discusses regulatory and legislative measures implemented by various nations in an effort to avert another financial crisis. More specifically, to address the financial crisis, the European Commission followed the practice of other developed countries and implemented a European Economic Recovery Plan in an attempt to overhaul the regulatory and supervisory framework of the financial sector. In 2010 the Commission introduced the European Systemic Risk Board and in 2011 the European System of Financial Supervision. Some experts have argued that the type and extent of financial regulation introduced in Europe in the wake of the 2008 crisis has been excessive and counterproductive. In considering how different countries responded to the financial crisis, global regulators have shown a more focused commitment to combating industry misconduct and pre-empting abusive behavior. Regulators have also increased the funding and resources at their disposal; have increased regulatory fines, with a growing trend towards action against individuals; and, finally, have focused on market abuse and market conduct issues. Financial regulation can be effected, first of all, through legislation. However, neither ex ante nor ex post regulation is by itself effective in reducing systemic risk. Consequently, to avert a financial crisis, governments need to balance the two approaches to financial regulation in their endeavor to achieve both economic efficiency and financial stability. Fiduciary duty is another means by which the behavior of actors in the financial world is constrained and, thus, regulated. Fiduciary duties extend over and above other existing requirements set out by statute and/or common law and cover allegations of breach of fiduciary duty, negligence or fraud. Careful analysis of the etiology of the 2008 financial crisis demonstrates the great importance of corporate governance as a way of regulating boardroom behavior. In addition, the regulation of professions, including accountants and auditors, plays a crucial role in the financial management of companies. In the US, the Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board in order to protect investors from financial accounting fraud. In most countries around the world, however, accounting regulation consists of a legal framework, international standards, education, and licensure. Accounting regulation is necessary because of the information asymmetry and the conflict of interest that exist between managers and users of financial information. If a holistic approach is to be taken, one cannot ignore the regulation of legislators themselves, which can take the form of hard or soft legislation. The science of averting a financial crisis is yet to be perfected and, as the preceding discussion shows, this is unlikely to be achieved in the foreseeable future, as 'disaster myopia' may be reduced but will not be eliminated. It is easier, of course, to be wise in hindsight, and regulating unreasonably risky decisions and unethical or outright criminal behavior in the financial world remains a major challenge for governments, corporations, and professions alike.
Keywords: financial crisis, legislation, regulation, financial regulation
Procedia PDF Downloads 398
7 Disrupting Traditional Industries: A Scenario-Based Experiment on How Blockchain-Enabled Trust and Transparency Transform Nonprofit Organizations
Authors: Michael Mertel, Lars Friedrich, Kai-Ingo Voigt
Abstract:
Based on principal-agent theory, an information asymmetry exists in the traditional donation process: consumers cannot verify whether nonprofit organizations (NPOs) use raised funds for the designated cause after the transaction has taken place (hidden action). Charity organizations have therefore tried for decades to appear transparent and gain trust using the same marketing instruments (e.g., releasing project success reports). However, none of these measures can guarantee consumers that charities will use their donations for the stated purpose. With awareness of the misuse of donations rising due to the Ukraine conflict (e.g., funding crime), consumers are increasingly concerned about the destination of their charitable contributions. Innovative charities like the Human Rights Foundation have therefore started to offer donations via blockchain. Blockchain technology has the potential to establish profound trust and transparency in the donation process: consumers can publicly track the progress of their donation at any time after deciding to donate, which ensures that the charity is not using donations against their original intent. Hence, the aim is to investigate the effect of blockchain-enabled transactions on the willingness to donate. Sample and Design: To investigate consumers' behavior, we used a scenario-based experiment. After removing participants (e.g., due to failed attention checks), 3192 potential donors remained (47.9% female, 62.4% with a bachelor's degree or above). Procedure: We randomly assigned the participants to one of two scenarios. In both conditions, the participants read a scenario about a fictive charity organization called "Helper NPO" and afterwards answered questions regarding their perception of the charity. Manipulation: The first scenario (n = 1405) represents a typical donation process, where consumers donate money without any option to track and trace. The second scenario (n = 1787) represents a donation process via blockchain, where consumers can track and trace their donations. Using t-statistics, the findings demonstrate a positive effect of donating via blockchain on participants' willingness to donate (mean difference = 0.667, p < .001, Cohen's d effect size = 0.482). A mediation analysis shows significant effects for the mediation of transparency (estimate = 0.199, p < .001), trust (estimate = 0.144, p < .001), and transparency and trust (estimate = 0.158, p < .001). The total effect of blockchain usage on participants' willingness to donate (estimate = 0.690, p < .001) consists of the direct effect (estimate = 0.189, p < .001) and the indirect effects of transparency and trust (estimate = 0.501, p < .001). Furthermore, consumers' affinity for technology moderates the direct effect of blockchain usage on participants' willingness to donate (estimate = 0.150, p < .001). Donating via blockchain is a promising way for charities to engage consumers for several reasons: (1) charities can emphasize trust and transparency in their advertising campaigns; (2) established charities can target new customer segments by specifically engaging technology-affine consumers; and (3) charities can raise international funds without previous barriers (e.g., setting up bank accounts). Nevertheless, increased transparency can also backfire (e.g., through disclosure of costs). Such cases require further research.
Keywords: blockchain, social sector, transparency, trust
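The reported estimates decompose exactly as in the standard total-effect identity for mediation; reading the third indirect path as the serial transparency-then-trust channel is our interpretation, as the abstract does not label it explicitly.

```latex
\begin{equation}
  \underbrace{0.690}_{\text{total}}
  \;=\; \underbrace{0.189}_{\text{direct}}
  + \underbrace{0.199}_{\text{via transparency}}
  + \underbrace{0.144}_{\text{via trust}}
  + \underbrace{0.158}_{\text{via transparency}\,\to\,\text{trust}}
\end{equation}
```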
Procedia PDF Downloads 99
6 Investigations on the Fatigue Behavior of Welded Details with Imperfections
Authors: Helen Bartsch, Markus Feldmann
Abstract:
The dimensioning of steel structures subject to fatigue loads, such as wind turbines, bridges, masts and towers, crane runways and weirs, or components in crane construction, is often governed by the fatigue verification. The fatigue details defined by the welded connections, such as butt or cruciform joints, longitudinal welds, and welded-on or welded-in stiffeners, are decisive. In Europe, the verification is usually carried out according to EN 1993-1-9 on a nominal stress basis. Its basis is the detail catalog, which specifies the fatigue strength of the various weld and construction details in terms of fatigue classes. Until now, a relation between fatigue classes and weld imperfection sizes has not been included. Quality levels for imperfections in fusion-welded joints in steel, nickel, titanium and their alloys are regulated in EN ISO 5817, which, however, does not contain direct correlations to fatigue resistance. The question arises whether some imperfections might be tolerable to a certain extent, since they may already be present in the test data used for the detail classifications dating back decades. Although current standardization requires proof that imperfection-size limits are satisfied, it would also be possible to tolerate welds with certain irregularities if these can be reliably quantified by non-destructive testing. Fabricators would be prepared to undertake careful and sustained weld inspection in view of the significant economic consequences of unfavorable fatigue classes. This paper presents investigations on the fatigue behavior of common welded details containing imperfections. In contrast to the common nominal stress concept, local fatigue concepts were used to capture the true stress increase, i.e., the local stresses at the weld toe and root. The actual shape of a weld comprising imperfections, e.g., gaps or undercuts, can thus be incorporated into the fatigue evaluation, usually on a numerical basis. With the help of the effective notch stress concept, the fatigue resistance of detailed local weld shapes is assessed. Validated numerical models serve to investigate the notch factors of fatigue details with different geometries. Detailed numerical studies have been performed using parametrized ABAQUS routines. Depending on the shape and size of the different weld irregularities, fatigue classes can be defined. Both load-carrying welded details, such as the cruciform joint, and non-load-carrying welded details, e.g., welded-on or welded-in stiffeners, are considered. The investigated imperfections include, among others, undercuts, excessive convexity, incorrect weld toe, excessive asymmetry and insufficient or excessive throat thickness. Comparisons are made of the impact of the different imperfections on the different types of fatigue details. Moreover, the influence of a combination of crucial weld imperfections on the fatigue resistance is analyzed. In view of the trend towards increasing efficiency in steel construction, the overall aim of the investigations is to enable a more economical differentiation of fatigue details with regard to tolerance sizes. In the long term, the harmonization of design standards, execution standards and regulations on weld imperfections is intended.
Keywords: effective notch stress, fatigue, fatigue design, weld imperfections
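For orientation, the nominal-stress verification format of EN 1993-1-9, into which the detail categories feed, can be sketched as follows (symbols as commonly used; this is a paraphrase, not the clause text): Δσ_E,2 is the damage-equivalent stress range at two million cycles, Δσ_C the detail category (fatigue class), and γ_Ff, γ_Mf the partial safety factors.

```latex
\begin{equation}
  \gamma_{Ff}\,\Delta\sigma_{E,2} \;\le\; \frac{\Delta\sigma_{C}}{\gamma_{Mf}}
\end{equation}
```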
Procedia PDF Downloads 259
5 Piezotronic Effect on Electrical Characteristics of Zinc Oxide Varistors
Authors: Nadine Raidl, Benjamin Kaufmann, Michael Hofstätter, Peter Supancic
Abstract:
If polycrystalline ZnO is properly doped and sintered under very specific conditions, it shows unique electrical properties that are indispensable for today's electronics industry, where it is used as the number-one overvoltage protection material. Below a critical voltage, the polycrystalline bulk exhibits high electrical resistance, but it suddenly becomes up to twelve orders of magnitude more conductive if this voltage limit is exceeded (the varistor effect). It is known that these peerless properties have their origin in the grain boundaries of the material. Electric charge accumulates at the boundaries, causing a depletion layer in their vicinity and forming potential barriers (so-called Double Schottky Barriers, or DSBs), which are responsible for the highly non-linear conductivity. Since ZnO is a piezoelectric material, mechanical stresses induce polarisation charges that modify the DSB heights and, as a result, the global electrical characteristics (the piezotronic effect). In this work, the finite element method was used to simulate the stresses arising on individual grains in the bulk. In addition, experimental efforts were made to verify a coherent model that could explain this influence. Electron backscatter diffraction was used to identify grain orientations. With the help of wet chemical etching, grain polarization was determined. Micro lock-in infrared thermography (MLIRT) was applied to detect current paths through the material, and a micro four-point-probe system (M4PPS) was employed to investigate current-voltage characteristics between single grains. Bulk samples were tested under uniaxial pressure. It was found that the conductivity can increase by up to three orders of magnitude with increasing stress. Through in-situ MLIRT, it could be shown that this effect is caused by the activation of additional current paths in the material. Further, compression tests were performed on miniaturized samples with grain paths containing only one or two grain boundaries. The tests evinced both an increase in conductivity, as observed for the bulk, and a decrease in conductivity. This phenomenon has been predicted theoretically and can be explained by piezotronically induced surface charges that act on the DSBs at the grain boundaries. Depending on grain orientation and stress direction, DSBs can be raised or lowered. The experiments also revealed that the conductivity within a single specimen can increase or decrease depending on the current direction. This novel finding indicates the existence of asymmetric Double Schottky Barriers, which was furthermore proved by complementary methods. MLIRT studies showed that the intensity of heat generation within individual current paths depends on the direction of the stimulating current. M4PPS was used to study the relationship between the I-V characteristics of single grain boundaries and grain orientation, and revealed asymmetric behavior for very specific orientation configurations. A new model for the Double Schottky Barrier, taking the natural asymmetry into account and explaining the experimental results, will be given.
Keywords: Asymmetric Double Schottky Barrier, piezotronic, varistor, zinc oxide
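The non-linearity described above is conventionally quantified by the varistor coefficient α, estimated from two points on the I-V curve; the abstract does not give this expression, so the standard definition is shown as an assumption.

```latex
\begin{equation}
  I = k\,V^{\alpha},
  \qquad
  \alpha = \frac{\log(I_2/I_1)}{\log(V_2/V_1)}
\end{equation}
```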
Procedia PDF Downloads 267
4 Oncoplastic Augmentation Mastopexy: Aesthetic Revisional Surgery in Breast Conserving Therapy
Authors: Bar Y. Ainuz, Harry M. Salinas, Aleeza Ali, Eli B. Levitt, Austin J. Pourmoussa, Antoun Bouz, Miguel A. Medina
Abstract:
Introduction: Breast conservation therapy remains the mainstay of surgical treatment for early breast cancer. Oncoplastic techniques, in conjunction with lumpectomy and adjuvant radiotherapy, have been demonstrated to achieve good aesthetic results without adversely affecting cancer outcomes in patients with macromastia or significant ptosis. In our patient population, many women present for breast conservation with pre-existing cosmetic implants or with breast volumes too small for soft-tissue-only oncoplastic techniques. Our study evaluated a consecutive series of patients presenting for breast conservation who underwent concomitant oncoplastic augmentation mastopexy (OAM) with a contralateral augmentation mastopexy for symmetry. Methods: The OAM surgical technique involves simultaneous lumpectomy with exchange or placement of implants, oncoplastic mastopexy, and concomitant contralateral augmentation mastopexy for symmetry. Patients undergoing outpatient lumpectomy for breast conservation were identified via retrospective chart review at a high-volume, private, academically affiliated, community-based cancer center. Patients with ptosis and either pre-existing breast implants or insufficient breast volume who underwent oncoplastic implant placement (or exchange) and mastopexy were included in the study. Operative details, aesthetic outcomes, and complications were assessed. Results: Over a continuous three-year period, in a two-surgeon cohort, 30 consecutive patients (56 breasts, 4 unilateral procedures) were identified. Patients had an average age of 52.5 years and an average BMI of 27.5, and 40% were smokers or former smokers. The average operative time was 2.5 hours, the average size of implants removed was 352 cc, and the average size of implants placed was 300 cc. All new implants were smooth silicone, with the majority (92%) placed in a retropectoral fashion. 40% of patients received chemotherapy, and 80% received whole-breast adjuvant photon radiotherapy with a total radiation dose of either 42.56 or 52.56 Gy. The average and median lengths of follow-up were both 8.2 months. Of the 24 patients who received radiotherapy, 21% had asymmetry due to capsular contracture. A total of 7 patients (29.2%) underwent revisions for positive margins (12.5%), capsular contracture (8.3%), implant loss (4.2%), or cosmetic concerns (4.2%). One patient developed a pulmonary embolism in the acute postoperative period and was treated with anticoagulant therapy. Conclusion: Oncoplastic augmentation mastopexy is a safe technique with good aesthetic outcomes and acceptable complication rates for ptotic patients with breast cancer and a paucity of breast volume or pre-existing implants who wish to pursue breast-conserving therapy. The revision rates compare favorably with single-stage cosmetic augmentation procedures as well as with other oncoplastic techniques described in the literature. The short-term capsular contracture rates appear lower than those in patients undergoing radiation after mastectomy and implant-based reconstruction. It is too early to assess long-term capsular contracture and revision rates in this cohort.
Keywords: breast conserving therapy, oncoplastic augmentation mastopexy, capsular contracture, breast reconstruction
Procedia PDF Downloads 137
3 The Role of Virtual Reality in Mediating the Vulnerability of Distant Suffering: Distance, Agency, and the Hierarchies of Human Life
Authors: Z. Xu
Abstract:
Immersive virtual reality (VR) has gained momentum in humanitarian communication due to its utopian promises of co-presence, immediacy, and transcendence. These potential benefits have led the United Nations (UN) to tirelessly produce and distribute VR series to evoke global empathy and encourage policymakers, philanthropic business tycoons, and citizens around the world to actually do something (e.g., give a donation). However, it is unclear whether or not VR can cultivate cosmopolitans with a sense of social responsibility towards the geographically, socially/culturally, and morally mediated misfortune of faraway others. Drawing upon existing work on the mediation of distant suffering, this article constructs an analytical framework to articulate the issue. Applying this framework to a case study of five of the UN’s VR pieces, the article identifies three paradoxes that exist between cyber-utopian and cyber-dystopian narratives. In the “paradox of distance”, VR relies on the notions of “presence” and “storyliving” to implicitly link audiences spatially and temporally to distant suffering, creating global connectivity and reducing the perceived distance between audiences and others; yet it also enables audiences to fully occupy the point of view of distant sufferers (creating too-close, absolute proximity), which may lead them into naive self-righteousness or narcissistic pleasure and desire, thereby destroying the “proper distance”. In the “paradox of agency”, VR simulates a superficially “real” encounter for visual intimacy, thereby establishing an “audience–beneficiary” relationship in humanitarian communication; yet the mediated hyperreality is not an authentic reality, and its simulation does not fill the gap between reality and the virtual world. In the “paradox of the hierarchies of human life”, VR enables an audience to experience virtually fundamental “freedom”, epitomizing the attitude of cultural relativism that informs a great deal of contemporary multiculturalism and providing vast possibilities for a more egalitarian representation of distant sufferers; yet it also takes the spectator’s personal empathic feelings as the focus of intervention, rather than structural inequality and political exclusion (the economic and political power relations of viewing). Thus, the audience can potentially remain trapped within the minefield of hegemonic humanitarianism. This study is significant in two respects. First, it advances the digital turn in studies of media and morality in the polymedia milieu; it is motivated by the necessary call to move beyond traditional technological environments towards a more novel understanding of the asymmetry of power between the safety of spectators and the vulnerability of mediated sufferers. Second, it not only reminds humanitarian journalists and NGOs that they should not rely entirely on the richer news experience or powerful response-ability enabled by VR to forge a “moral bond” with distant sufferers, but also argues that when fully fledged VR technology is developed, it can serve as a kind of alchemy and should not be dismissed merely as a “bugaboo” of alarmist philosophical and fictional dystopias. Keywords: audience, cosmopolitan, distant suffering, virtual reality, humanitarian communication
Procedia PDF Downloads 142
2 Perspective Shifting in the Elicited Language Production Can Defy with Aging
Authors: Tuyuan Cheng
Abstract:
As we age, many things become more difficult, among them linguistic and cognitive abilities. Competing theories hold that these two functions diminish together or that one is selectively affected by the other: some propose that aging affects sentence production in the same way it affects sentence comprehension and other cognitive functions, while others argue that it does not. To address this question, the current investigation examines a critical aspect of sentences as well as of cognitive ability: the syntactic complexity of elicited production and the number of perspective shifts it contains. Healthy, non-pathological aging is often characterized by cognitive and neural decline across a number of cognitive abilities. Although language is assumed to be among the more stable domains, a variety of findings in the cognitive aging literature suggest otherwise. Older adults often show deficits in language production and in multiple aspects of comprehension. Nevertheless, while some age differences likely reflect cognitive decline, others might reflect changes in communicative goals, and some even display cognitive advantages. In the domain of language processing, research efforts have been made in tests that probe a variety of communicative abilities. In general, a distinction exists: comprehension seems to be selectively spared, while production is not. The current study raises a novel question and investigates whether aging affects the production of relative clauses (RCs) under the cognitive factor of perspective shifts. According to the Perspective Hypothesis (MacWhinney, 2000, 2005), our cognitive processes build upon a fundamental system of perspective-taking, and language provides a series of cues to facilitate the construction and shifting of perspectives. These cues span a wide variety of constructions, including RC structures. In this regard, linguistic complexity can be determined by the number of perspective shifts, and the processing difficulty of RCs can be interpreted within the theory of perspective shifting. Two experiments were conducted to study language production under controlled conditions. In Experiment 1, older healthy participants were tested on standard measures of cognitive aging, including the MMSE (Mini-Mental State Examination), ToMI-2 (a simplified Theory of Mind Inventory-2), and a perspective-shifting comprehension task programmed in E-Prime. The results were analyzed to examine if and how they correlate with the participants’ subsequent production data. In Experiment 2, production profiles for differing RC types, SRC vs. ORC (subject- vs. object-extracted relative clauses), were collected from healthy aging participants performing a picture-elicitation task. Variants containing 0, 1, or 2 perspective shifts were paired with the respective pictures and presented in counterbalanced order for elicitation (see the sketch below). In parallel, a control group of young adults was recruited to examine the linguistic and cognitive abilities in question. The results lead us to discuss whether aging affects RC production in a manner determined by semantic structure, by the number of perspective shifts contained, or by the status of participants’ mental understanding. The major findings are: (1) Elders’ production of Chinese RC types did not display the intrinsic difficulty asymmetry. (2) RC types (the linguistic structural features) and cognitive perspective shifts jointly play important roles in elders’ RC production.
(3) The production of RCs may defy aging when cognitive ability is flexibly preserved. Keywords: cognition aging, perspective hypothesis, perspective shift, relative clauses, sentence complexity
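The sketch below illustrates one way the counterbalanced elicitation design described above could be organized: each picture is paired with an RC type and a perspective-shift load of 0, 1, or 2, and condition assignments rotate across participants so every cell appears equally often. The stimulus names, list size, and rotation scheme are hypothetical, not the authors’ actual materials.

```python
import itertools
import random

# Hypothetical counterbalancing scheme: 2 RC types x 3 perspective-shift
# loads = 6 condition cells, rotated across participants.
RC_TYPES = ["SRC", "ORC"]
SHIFTS = [0, 1, 2]
CONDITIONS = list(itertools.product(RC_TYPES, SHIFTS))   # 6 cells

def build_list(participant_id, pictures):
    """Assign each picture to a condition, rotating assignments by
    participant so conditions are counterbalanced across the group."""
    offset = participant_id % len(CONDITIONS)
    rotated = CONDITIONS[offset:] + CONDITIONS[:offset]
    trials = [(pic, *rotated[i % len(rotated)]) for i, pic in enumerate(pictures)]
    random.shuffle(trials)                               # randomize trial order
    return trials

pictures = [f"picture_{i:02d}" for i in range(12)]       # placeholder stimuli
for pic, rc, shifts in build_list(participant_id=3, pictures=pictures)[:3]:
    print(f"{pic}: elicit {rc} with {shifts} perspective shift(s)")
```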
Procedia PDF Downloads 118
1 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English
Authors: Noury Bakrim
Abstract:
When dealing with motivation/arbitrariness, the forming form (forma formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment and of schematization within the specificity of language as sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs. double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), the non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics, for instance) opens up a new horizon, problematizing the role of the brain and the sensorimotor contribution in the continuous forming form. We therefore hypothesize biotization as both process and trace co-constructing motivation and the forming form. Accordingly, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse-based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant-intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation, and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation, opening the stable form. We focus our inquiry on Amazigh and English clusters (/spl/, /spr/) and on iconic consonant iteration in [gnunnuy] (to roll/tumble) and [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splæʃ/ and /splæt/ imply an anamorphic representation of the state of the world: splash, an impact on an aquatic surface; splat, an impact on the ground. The pair has stridency and distribution as the distinctive features that specify its phonetic realization (and a part of its meaning): /ʃ/ is [+strident] and /t/ is [+distributed] on the vocal tract. Schematization is then a process relating physiology and code as an arthron, a vocal/bodily and vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/, and other clusters, or the tense uvular /qq/ in initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points to different forms of expressivity. We postulate two components of motivated formalization: i) the process of memory paradigmatization, relating to sequence modeling under sensorimotor/verbal-specific categories (production/perception); ii) the process of phonotactic selection, an unconscious/subconscious prosodic distribution by virtue of iconicity. Drawing on multiple tests, including a questionnaire, phonotactic/visual recognition, and oral/written reproduction, we aim at patterning/conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses. Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization
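A minimal sketch of the kind of distributional count such a phonotactic analysis rests on is given below: tallying word-initial consonant clusters in a word list. The cluster inventory and word list are illustrative placeholders, and matching on spelling is only a crude proxy for the phonemic transcriptions a real study would use.

```python
from collections import Counter

# Target clusters discussed above; orthographic stand-ins, not a full
# phonemic inventory of Tashelhiyt Amazigh or English onsets.
ONSET_CLUSTERS = ("spl", "spr", "sm", "gn")

def onset_cluster_counts(words):
    """Count how often each target cluster appears word-initially."""
    counts = Counter()
    for w in words:
        for cluster in ONSET_CLUSTERS:
            if w.lower().startswith(cluster):
                counts[cluster] += 1
                break                     # count each word at most once
    return counts

sample = ["splash", "splat", "spray", "spread", "smack", "gnash", "river"]
for cluster, n in onset_cluster_counts(sample).most_common():
    print(f"{cluster}-: {n} word(s)")
```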
Procedia PDF Downloads 143