Search results for: automatic speech recognition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2961

411 Exploring the Rhinoceros Beetles of a Tropical Forest of Eastern Himalayas

Authors: Subhankar Kumar Sarkar

Abstract:

Beetles of the subfamily Dynastinae, under the family Scarabaeidae of the insect order Coleoptera, are popularly known as ‘rhinoceros beetles’ because of the characteristic horn borne by the males on their head. These horns are employed in mating battles against other males and have evolved as a result of phenotypic plasticity. Scarabaeidae is the largest of all families under Coleoptera and comprises 11 subfamilies, of which the subfamily Dynastinae is represented by approximately 300 species. Some of these beetles have been reported to cause considerable damage to agriculture and forestry in both their larval and adult stages, while many are beneficial, as they pollinate plants and recycle plant materials. The Eastern Himalayas are regarded as one of the 35 biodiversity hotspot zones of the world and one of the four in India, as exhibited by their rich and megadiverse tropical forests. However, our knowledge of the faunal diversity of these forests is very limited, particularly for the insect fauna. One such tropical forest of the Eastern Himalayas is the ‘Buxa Tiger Reserve’, located between latitudes 26°30′ and 26°55′ North and longitudes 89°20′ and 89°35′ East in India, occupying an area of about 759.26 square kilometers. It is against this background that an attempt was made to explore the insect fauna of the forest. Insect sampling was carried out in each beat and range of the Buxa Tiger Reserve in all three seasons, viz. pre-monsoon, monsoon, and post-monsoon. Sample collections were made with sweep nets, hand picking, and pitfall traps; a UV light trap was used to collect nocturnal insects. Morphological examinations of the collected samples were carried out with stereozoom binocular microscopes (Zeiss SV6 and SV11), and specimens were identified to species level with the aid of relevant literature. The survey of the insect fauna of the forest resulted in the recognition of 76 scarab species, of which 8 belong to the subfamily dealt with herein.
Each of the 8 species represents a separate genus. The forest is dominated by Xylotrupes gideon (Linnaeus), which is represented by the highest number of individuals. The recorded taxa show about 12% endemism and are mainly Oriental in distribution. Pre-monsoon is the most favorable season for their occurrence and activity, followed by monsoon and post-monsoon.

Keywords: Dynastinae, Scarabaeidae, diversity, Buxa Tiger Reserve

Procedia PDF Downloads 162
410 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), owing to the richness of detail it provides, is the gold-standard exam for the diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, at nearly 0.5-1.35% per year, well beyond the limits of normal aging. Quantifying brain volume thus becomes an essential task for the subsequent analysis of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who must manually extract important information. This manual analysis is prone to error and time-consuming owing to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation are extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. The purpose of this work was therefore to evaluate brain volume in MRI scans of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image-processing step was brain extraction by skull stripping of the original image. In the skull stripper for brain MRI, the algorithm registers a grayscale atlas image to the grayscale patient image, and the associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (the edge of the brain-skull border with dedicated expansion, curvature, and advection terms).
In the second step, brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the count to cubic centimeters (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
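The volume computation in the second step is straightforward once a binary brain mask is available: count the mask voxels and scale by the physical voxel volume. A minimal sketch of the idea (the function name, voxel dimensions, and toy mask below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def brain_volume_cc(mask, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Estimate brain volume in cubic centimeters (cc) from a binary
    segmentation mask by counting voxels and scaling by voxel volume."""
    voxel_volume_mm3 = float(np.prod(voxel_dims_mm))  # mm^3 per voxel
    n_voxels = int(np.count_nonzero(mask))            # voxels inside the mask
    return n_voxels * voxel_volume_mm3 / 1000.0       # 1 cc = 1000 mm^3

# Toy 3D mask: a 100 x 100 x 30 block of "brain" voxels of 1 mm^3 each
mask = np.zeros((128, 128, 30), dtype=bool)
mask[14:114, 14:114, :] = True
volume = brain_volume_cc(mask)  # 100 * 100 * 30 voxels -> 300.0 cc
```

In practice the voxel dimensions come from the scan header (slice thickness and in-plane resolution), so the conversion must use the acquisition geometry rather than the 1 mm isotropic default assumed here.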

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 113
409 Deploying a Transformative Learning Model in Technological University Dublin to Assess Transversal Skills

Authors: Sandra Thompson, Paul Dervan

Abstract:

Ireland’s first technological university (TU Dublin) was established on 1 January 2019, and its creation is an exciting new milestone in Irish higher education. TU Dublin is now Ireland’s biggest university, supporting 29,000 students across three campuses with 3,500 staff. The university aspires to create work-ready graduates who are socially responsible, open-minded global thinkers, ambitious to change the world for the better. As graduates, they will be enterprising and daring in all their endeavors, ready to play their part in transforming the future. Feedback from Irish employers and students, coupled with evidence from other authoritative sources such as the World Economic Forum, points to a need for greater focus on the development of students’ employability skills as they prepare for today’s work environment. Moreover, with an increased focus on Universal Design for Learning (UDL) and inclusiveness, there is recognition that students are more than a numeric grade value. Robust grading systems have been developed to track a student’s performance in discipline knowledge, but there is little or no global consensus on a definition of transversal skills, nor on a unified framework to assess them. The education and industry sectors often assess one or two skills, and some are developing their own frameworks to capture the learner’s achievement in this area. Technological University Dublin (TU Dublin) has adopted and implemented a framework that allows students to develop, assess, and record their transversal skills using transformative learning theory. The model implemented is an adaptation of the Student Transformative Learning Record (STLR), which originated at the University of Central Oklahoma (UCO). The purpose of this paper, therefore, is to examine the views of students, staff, and employers in the context of deploying a transformative learning model within the university to assess transversal skills.
It will examine the initial impact the transformative learning model is having socially, personally, and on the university as an organization. Crucially, it also seeks to identify lessons learned from the deployment in order to assist other universities and higher-education institutes that may be considering a focused adoption of transformative learning to meet the challenge of preparing students for today’s work environment.

Keywords: assessing transversal skills, higher education, transformative learning, students

Procedia PDF Downloads 109
408 “Everything, Everywhere, All at Once”: Hollywoodization and Lack of Authenticity in Today’s Mainstream Cinema

Authors: Haniyeh Parhizkar

Abstract:

When Sarris proposed the "auteur theory" in 1962, he emphasized that its utmost premise is the inner meanings and concepts of a film, and that a film is purely an art form. Today's mainstream movies are conceptually closer to what the Frankfurt School scholars regarded, years ago, as "reproduced" "mass culture". Hollywood continues to be a huge movie-making machine that sets the dominant paradigms of film throughout the world, and cinema is far from art. Although there are still movies, directors, and audiences who favor art cinema over Hollywood and mainstream movies, it is an almost undeniable fact that, for the most part, people's perception of movies is widely influenced by their American depiction and Hollywood's legacy of mass culture. With the rise of Hollywood studios as the forerunners of the movie industry, and with cinema largely dependent on economics rather than artistic values, this distinctive role of cinema has diminished and been replaced by a global standard. The blockbuster 2022 film 'Everything, Everywhere, All at Once' is now the most-awarded movie of all time, winning seven Oscars at the 95th Academy Awards. Despite its mainly Asian cast, the movie was produced by an American corporation and is heavily influenced by Hollywood's dominant themes of superheroes, fantasy, action, and adventure. The New Yorker film critic Richard Brody called the movie "a pitch for a Marvel" and critiqued it for being "universalized" and "empty of history and culture". Other critics, at Variety, pinpointed the movie's similarities to Marvel, particularly the multiverse storylines that manifest traces of the American legacy. As these critics argue, 'Everything, Everywhere, All at Once' might appear a unique and authentic film at first glance, but it can be argued that it is yet another version of a Marvel movie.
While the movie's universal acclaim was regarded as recognition and acknowledgment of its Asian cast, the question that arises is this: when the Hollywood influences and American themes in the film are so strong, is the movie industry honoring another culture, or is this yet another celebration of Hollywood's dominant paradigm? This essay employs a critical approach to Hollywood's dominance and mass-produced culture, which has deprived non-American movies of authenticity and constantly reproduces the same formula of success.

Keywords: hollywoodization, universalization, blockbuster, dominant paradigm, marvel, authenticity, diversity

Procedia PDF Downloads 60
407 A Semantic and Concise Structure to Represent Human Actions

Authors: Tobias Strübing, Fatemeh Ziaeetabar

Abstract:

Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which considers the touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information on static (e.g., top, bottom) and dynamic (e.g., moving apart, getting closer) spatial relations between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows and a sizeable set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used for the category of manipulation actions, which ultimately involve the two hands. Here, we would like to extend this approach to a whole-body action descriptor and build a conjoint activity-representation structure. For this purpose, we perform a statistical analysis to modify the current eSEC by summarizing it while preserving its features, and introduce a new version called the Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve this, we computed the importance of each matrix row statistically, to see whether a particular row can be removed while all manipulations remain distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the distinguishability of the predefined manipulation actions.
By performing the above analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a substantial impact on the recognition and prediction of complex actions, as well as on interactions between humans and robots. It also creates a comprehensive platform for integration with body-limb descriptors and considerably increases system performance, especially in complex real-time applications such as human-robot interaction prediction.

Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis

Procedia PDF Downloads 87
406 DNA Nano Wires: A Charge Transfer Approach

Authors: S. Behnia, S. Fathizadeh, A. Akhshani

Abstract:

In recent decades, DNA has attracted increasing interest for potential technological applications not directly related to its coding for functional proteins, i.e., the expression of genetic information. One of the most interesting applications of DNA is the construction of nanostructures of high complexity and the design of functional nanostructures for nanoelectronic devices, nanosensors, and nanocircuits. In this field, DNA is of fundamental interest for the development of DNA-based molecular technologies, as it possesses ideal structural and molecular-recognition properties for use in self-assembling nanodevices with a definite molecular architecture. Moreover, the robust, one-dimensional, flexible structure of DNA can be used to design electronic devices, serving as a wire, transistor switch, or rectifier depending on its electronic properties. Numerous studies have been carried out in order to understand the mechanism of charge transport along DNA sequences. In this regard, the conductivity properties of the DNA molecule can be investigated in a simple but chemically specific approach intimately related to the Su-Schrieffer-Heeger (SSH) model. In the SSH model, the dependence of the non-diagonal matrix elements on the intersite displacements is taken into account, so that the coupling between the charge and the lattice deformation is along the helix. The SSH model is a tight-binding linear nanoscale chain originally established to describe conductivity phenomena in doped polyacetylene. It is based on the assumption of a classical harmonic interaction between sites, which is linearly coupled to a tight-binding Hamiltonian. In this work, the Hamiltonian and the corresponding equations of motion are nonlinear and highly sensitive to initial conditions. We have therefore moved toward nonlinear dynamics and phase-space analysis. Nonlinear dynamics and chaos theory, free of any approximation, could open new horizons for understanding the conductivity mechanism in DNA.
For a detailed study, we examined the current flowing in DNA and investigated the characteristic I-V diagram. As a result, it is shown that there are (quasi-)ohmic regions in the I-V diagram. On the other hand, regions with negative differential resistance (NDR) are also detectable in the diagram.
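For reference, an SSH-type Hamiltonian of the kind described above, a tight-binding chain with the hopping term linearly coupled to a classical harmonic lattice, is commonly written as follows (the notation is a standard choice assumed here, not taken from the abstract):

```latex
H = \sum_n \varepsilon_n c_n^{\dagger} c_n
  - \sum_n \bigl[\, t_0 - \alpha \,(u_{n+1} - u_n) \,\bigr]
    \bigl( c_{n+1}^{\dagger} c_n + c_n^{\dagger} c_{n+1} \bigr)
  + \sum_n \left( \frac{p_n^2}{2m} + \frac{K}{2}\,(u_{n+1} - u_n)^2 \right)
```

Here \(\varepsilon_n\) are on-site energies, \(t_0\) is the bare hopping integral, \(\alpha\) is the electron-lattice coupling that makes the off-diagonal matrix elements displacement-dependent, \(u_n\) and \(p_n\) are the site displacements and momenta, and \(K\) is the harmonic spring constant between sites. The nonlinearity discussed in the abstract arises from the coupled charge-lattice equations of motion derived from this Hamiltonian.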

Keywords: DNA conductivity, Landauer resistance, negative differential resistance, chaos theory, mean Lyapunov exponent

Procedia PDF Downloads 399
405 Bank Liquidity Creation in a Dual Banking System: An Empirical Investigation

Authors: Lianne M. Q. Lee, Mohammed Sharaf Shaiban

Abstract:

The importance of bank liquidity management took center stage as policy makers promoted a more resilient global banking system after the market turmoil of 2007. The growing recognition of Islamic banks’ function of intermediating funds in the economy warrants an investigation of their balance-sheet structure, which is distinct from that of their conventional counterparts. Given that asymmetric risk transformation is inevitable, Islamic banks need to identify the liquidity risk within their distinctive balance-sheet structure. There is thus a strong need to quantify and assess the liquidity position to ensure the proper functioning of a financial institution; measuring bank liquidity is vital because liquid banks face less liquidity risk. We examine this issue using two alternative quantitative measures of liquidity creation, “cat fat” and “cat nonfat”, constructed by Berger and Bouwman (2009). “Cat fat” measures all on-balance-sheet items plus off-balance-sheet items, whilst the latter measures only on-balance-sheet items. Liquidity creation is measured over the period 2007-2014 in 14 countries where Islamic and conventional commercial banks coexist, and also separately by bank size class, as empirical studies have shown that liquidity creation varies by bank size. An interesting and important finding is that all size classes of Islamic banks have, on average, increased their creation of aggregate liquidity in real dollar terms over the years under both liquidity-creation measures, especially large banks, indicating that Islamic banks actually generate more liquidity for the economy than their conventional counterparts, including from off-balance-sheet items. The liquidity created off balance sheet by conventional banks may have been affected by the global financial crisis, when derivatives markets were severely hit.
The results also suggest that Islamic banks have a higher volume of assets and deposits, and that borrowing and bond issuance are lower in Islamic banks than in conventional banks because most such products are interest-based. As Islamic banks appear to create more liquidity than conventional banks under both measures, the development of Islamic banking has been significant over the decades since its inception. This finding is encouraging: despite Islamic banking’s overall size, it represents growth opportunities for these countries.
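The Berger-Bouwman measures weight balance-sheet categories by how much liquidity they create; the scheme is usually stated as +1/2 for illiquid assets and liquid liabilities, 0 for semiliquid items, and -1/2 for liquid assets and illiquid liabilities plus equity, with off-balance-sheet items weighted analogously to produce "cat fat". A minimal sketch under that assumption (function and argument names are illustrative; the exact classification of items into categories follows Berger and Bouwman (2009), not this sketch):

```python
def liquidity_creation(illiquid_assets, semiliquid_assets, liquid_assets,
                       liquid_liabilities, semiliquid_liabilities,
                       illiquid_liabilities_and_equity,
                       illiquid_off_bs=0.0, liquid_off_bs=0.0):
    """Dollar liquidity created by a bank: "cat nonfat" when the
    off-balance-sheet terms are zero, "cat fat" when they are supplied."""
    on_bs = (0.5 * (illiquid_assets + liquid_liabilities)
             + 0.0 * (semiliquid_assets + semiliquid_liabilities)  # weight 0
             - 0.5 * (liquid_assets + illiquid_liabilities_and_equity))
    off_bs = 0.5 * illiquid_off_bs - 0.5 * liquid_off_bs
    return on_bs + off_bs

# Toy bank (all figures in $M): business loans 100, consumer loans 50,
# securities 30, transaction deposits 120, time deposits 40, equity 60
lc_nonfat = liquidity_creation(100, 50, 30, 120, 40, 60)  # -> 65.0
```

The intuition matches the abstract: a bank creates liquidity when it funds illiquid assets (loans) with liquid liabilities (deposits), and the "cat fat" variant captures the additional liquidity created through off-balance-sheet guarantees.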

Keywords: financial institution, liquidity creation, liquidity risk, policy and regulation

Procedia PDF Downloads 319
404 An Automated Magnetic Dispersive Solid-Phase Extraction Method for Detection of Cocaine in Human Urine

Authors: Feiyu Yang, Chunfang Ni, Rong Wang, Yun Zou, Wenbin Liu, Chenggong Zhang, Fenjin Sun, Chun Wang

Abstract:

Cocaine is the most frequently used illegal drug globally, with an annual prevalence of cocaine use ranging from 0.3% to 0.4% of the adult population aged 15-64 years. The growing trend of cocaine abuse and drug crime is a great concern, and urine testing has therefore become an important noninvasive sampling approach, as cocaine and its metabolites (COCs) are usually present at high concentrations and have relatively long detection windows. However, direct analysis of urine samples is not feasible, because the complex urine matrix often causes low sensitivity and selectivity in the determination. On the other hand, the presence of low doses of analytes in urine makes an extraction and pretreatment step important before determination. In cases of group drug taking especially, the pretreatment step becomes even more tedious and time-consuming. Developing a sensitive, rapid, and high-throughput method for the detection of COCs in the human body is therefore indispensable for law-enforcement officers, treatment specialists, and health officials. In this work, a new automated magnetic dispersive solid-phase extraction (MDSPE) sampling method followed by high-performance liquid chromatography-mass spectrometry (HPLC-MS) was developed for the quantitative enrichment of COCs from human urine, using prepared magnetic nanoparticles as adsorbents. The nanoparticles were prepared by silanizing magnetic Fe3O4 nanoparticles and modifying them with divinylbenzene and vinylpyrrolidone, which confer the ability to adsorb COCs specifically. This kind of magnetic particle facilitates the pretreatment steps through electromagnetically controlled extraction, achieving full automation. The proposed device significantly improved sample-preparation efficiency, handling 32 samples in one batch within 40 min.
The preparation procedure for the magnetic nanoparticles was optimized, and their performance was characterized by scanning electron microscopy, vibrating-sample magnetometry, and infrared spectroscopy. Several analytical parameters were studied, including the amount of particles, adsorption time, elution solvent, and extraction and desorption kinetics, and the proposed method was verified. The limits of detection for cocaine and its metabolites were 0.09-1.1 ng·mL-1, with recoveries ranging from 75.1% to 105.7%. Compared with traditional sampling methods, this method is time-saving and environmentally friendly. The proposed automated method was confirmed to be a highly effective approach for trace analyses of cocaine and its metabolites in human urine.

Keywords: automatic magnetic dispersive solid-phase extraction, cocaine detection, magnetic nanoparticles, urine sample testing

Procedia PDF Downloads 178
403 Comparison of Two Home Sleep Monitors Designed for Self-Use

Authors: Emily Wood, James K. Westphal, Itamar Lerner

Abstract:

Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders such as sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means of measuring sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep became available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems with published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: Twenty-six participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating wake, combined N1/N2, N3, and rapid eye movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as the number of runs of four or more consecutive wake epochs between sleep onset and termination.
Total sleep time (TST) and the number of awakenings were compared to subjects’ sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from the two devices were compared epoch by epoch to calculate their agreement using Cohen’s kappa. All analyses were performed using MATLAB 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times asleep than the time reported by each device (M = 448 minutes for the sleep logs, compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than for the Z-Machine for both TST and the number of awakenings; likewise, the mean absolute bias relative to the sleep logs was higher for the Z-Machine for both TST (p < 0.001) and awakenings (p < 0.04). There was some indication that these effects were stronger on the second night than on the first. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 showed high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
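The two analysis steps described, counting awakenings as runs of four or more consecutive wake epochs between sleep onset and termination, and computing epoch-by-epoch agreement with Cohen's kappa, can be sketched as follows (the stage labels and toy hypnograms are illustrative, not the study's data):

```python
from collections import Counter

def count_awakenings(hypnogram, min_run=4):
    """Count runs of >= min_run consecutive wake ('W') epochs occurring
    between sleep onset and the final sleep epoch of the night."""
    sleep_idx = [i for i, stage in enumerate(hypnogram) if stage != "W"]
    if not sleep_idx:
        return 0  # never fell asleep
    segment = hypnogram[sleep_idx[0]:sleep_idx[-1] + 1]
    count, run = 0, 0
    for stage in segment:
        run = run + 1 if stage == "W" else 0
        if run == min_run:          # count each qualifying run exactly once
            count += 1
    return count

def cohens_kappa(scores_a, scores_b):
    """Epoch-by-epoch agreement between two scorers, corrected for the
    agreement expected by chance from each scorer's label frequencies."""
    n = len(scores_a)
    observed = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    counts_a, counts_b = Counter(scores_a), Counter(scores_b)
    expected = sum(counts_a[s] * counts_b[s] for s in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# One awakening: a run of four wake epochs inside the sleep period
night = ["W", "W", "N2", "N2", "W", "W", "W", "W", "N2", "REM", "N2", "W"]
n_awakenings = count_awakenings(night)  # -> 1
```

Leading and trailing wake epochs are excluded by construction, since the definition only counts wake runs between sleep onset and termination.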

Keywords: DREEM, EEG, sleep monitoring, Z-machine

Procedia PDF Downloads 82
402 Linguistic Competencies of Students with Hearing Impairment

Authors: Munawar Malik, Muntaha Ahmad, Khalil Ullah Khan

Abstract:

Linguistic abilities in students with hearing impairment remain a concern for educationists. Emerging technological support and provisions in the recent era promise to address the situation and claim a significant contribution to linguistic repertoire. This descriptive, quantitative study set out to assess the linguistic competencies of students with hearing impairment in the English language. The goals were further broken down to identify the level of reading ability in the subject population. The population comprised students with hearing impairment (HI) studying at the higher secondary level in Lahore, and simple random sampling was used to select a sample of fifty students. A purposive curriculum-based assessment was designed, in line with the Accelerated Learning Program of the Punjab Government, to assess linguistic competence in the sample. In addition, an Informal Reading Inventory (IRI) corresponding to the reading levels was developed by the researchers and duly validated and piloted before final use. Descriptive and inferential statistics were used to reach the findings. Spearman’s correlation was used to examine the relationship between degree of hearing loss, grade level, gender, and type of amplification device, and an independent-samples t-test was used to compare means between groups. The major findings revealed that students with hearing impairment deviate significantly from the mean scores when compared in terms of grade, severity, and amplification device. The study showed that these students have not yet attained an independent reading level for their grades, as the majority fall at the frustration level of word recognition and passage comprehension. The poorer performance can be attributed to lower linguistic competence, as shown in the frustration levels of reading, writing, and comprehension.
The correlation analysis did reflect improved performance by grade; however, scores corresponded only to the frustration level, and the independent level was never achieved. The achievements reported at the instructional level suggest that the linguistic skills of this population may improve further if practiced purposively.
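Spearman's correlation, used in the study to relate degree of hearing loss, grade level, gender, and amplification-device type to performance, is simply the Pearson correlation of the rank vectors, with tied values assigned their average rank. A minimal sketch (the toy severity-versus-score data are illustrative, not the study's measurements):

```python
def _ranks(values):
    """1-based ranks of the values, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                       # extend over the tie group
        avg = (i + j) / 2 + 1            # average 1-based rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# e.g. severity of hearing loss (dB) vs. reading score: monotone decrease
rho = spearman_rho([40, 55, 70, 90], [82, 75, 60, 41])  # -> -1.0
```

Because it operates on ranks, Spearman's rho only assumes a monotone relationship, which suits ordinal variables such as grade level and severity categories.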

Keywords: linguistic competence, hearing impairment, reading levels, educationist

Procedia PDF Downloads 31
401 Gestalt in Music and Brain: A Non-Linear Chaos Based Study with Detrended/Adaptive Fractal Analysis

Authors: Shankha Sanyal, Archi Banerjee, Sayan Biswas, Sourya Sengupta, Sayan Nag, Ranjan Sengupta, Dipak Ghosh

Abstract:

The term ‘gestalt’ has been widely used in the field of psychology to describe the tendency of the human mind to perceive any object not in parts but as a ‘unified’ whole. Music, in general, is polyphonic, i.e., a combination of a number of pure tones (frequencies) mixed together in a manner that sounds harmonious. The study of the human brain’s response to different frequency groups of an acoustic signal can give us excellent insight into the neural and functional architecture of brain function. Hence, the study of music cognition using neuro-biosensors is a rapidly emerging field of research. In this work, we analyze the effect of different frequency bands of music on the various frequency rhythms of the human brain obtained from EEG data. Four widely popular Rabindrasangeet clips were subjected to the wavelet transform to extract five resonant frequency bands from the original music signal. These frequency bands were initially analyzed with detrended and adaptive fractal analysis (DFA/AFA). A listening test was conducted on a pool of 100 respondents to identify the frequency band at which the music becomes unrecognizable. Next, these resonant frequency bands were presented to 20 subjects as auditory stimuli, with EEG signals recorded simultaneously at 19 locations on the scalp. The recorded EEG signals were noise-cleaned and again subjected to the DFA/AFA techniques in the alpha, theta, and gamma frequency ranges. We thus obtained scaling exponents from the two methods in the alpha, theta, and gamma EEG rhythms corresponding to the different frequency bands of the music. The analysis of the music signal shows that loss of recognition is proportional to the loss of long-range correlation in the signal. From the EEG analysis, we obtain frequency-specific, arousal-based responses in different lobes of the brain, as well as in specific EEG bands, corresponding to the musical stimuli.
In this way, we seek to identify a specific frequency band beyond which the music becomes unrecognizable, and below which, even in the absence of the other bands, the music remains perceivable to the audience. This finding can be of immense importance to the field of cognitive music therapy and to researchers of creativity.
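Detrended fluctuation analysis, the core of the DFA step above, integrates the mean-removed signal, detrends it in windows of increasing size n, and reads a scaling exponent off the slope of log F(n) versus log n; white noise yields an exponent near 0.5, while long-range correlated signals give larger values. A minimal sketch (the window sizes and white-noise example are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def dfa_exponent(signal, scales=(16, 32, 64, 128, 256)):
    """Return the DFA scaling (Hurst-like) exponent of a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated, mean-removed signal
    flucts = []
    for n in scales:
        n_windows = len(profile) // n
        windows = profile[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        sq_residuals = []
        for w in windows:
            coef = np.polyfit(t, w, 1)           # linear trend in this window
            sq_residuals.append(np.mean((w - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(sq_residuals)))   # fluctuation F(n)
    # scaling exponent = slope of log F(n) vs log n
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# Uncorrelated white noise: exponent should be close to 0.5
rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(8192))
```

In the framing of the abstract, a filtered music band whose exponent drops toward 0.5 has lost its long-range correlation, which is the property the listening test links to loss of recognizability.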

Keywords: AFA, DFA, EEG, gestalt in music, Hurst exponent

Procedia PDF Downloads 303
400 Task Based Functional Connectivity within Reward Network in Food Image Viewing Paradigm Using Functional MRI

Authors: Preetham Shankapal, Jill King, Kori Murray, Corby Martin, Paula Giselman, Jason Hicks, Owen Carmicheal

Abstract:

Activation of reward and satiety networks in the brain during the processing of palatable food cues, as well as functional connectivity during rest, has been studied using functional magnetic resonance imaging (fMRI) in various obesity phenotypes. However, functional connectivity within the reward and satiety network during food-cue processing is understudied. Fourteen obese individuals underwent two fMRI scans while viewing Macronutrient Picture System images. Each scan included two blocks of images of high-sugar/high-fat (HSHF), high-carbohydrate/high-fat (HCHF), and low-sugar/low-fat (LSLF) foods, as well as non-food images. Seed voxels within seven food-reward-relevant ROIs (the insula, the putamen, and the cingulate, precentral, parahippocampal, medial frontal, and superior temporal gyri) were isolated based on a prior meta-analysis. Beta-series correlation was computed for task-related functional connectivity between these seed voxels and the rest of the brain. Voxel-level differences in functional connectivity were calculated between the first and second scans; between individuals who saw novel (N=7) vs. repeated (N=7) images in the second scan; and between the HCHF and HSHF blocks vs. the LSLF and non-food blocks. The analysis showed that, during food-image viewing, reward-network ROIs showed significant functional connectivity with each other and with other regions responsible for attentional and motor control, including the inferior parietal lobe and the precentral gyrus. These functional-connectivity values were heightened among individuals who viewed novel HSHF images in the second scan. In the second scan session, functional connectivity was reduced within the reward network but increased within attention, memory, and recognition regions, suggesting habituation to reward properties and increased recollection of previously viewed images.
In conclusion, functional connectivity within the reward network, and between the reward network and other brain regions, varies with important experimental conditions during food image viewing, including habituation to the foods shown.
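As a rough, hypothetical sketch of the beta series correlation computation described above (toy data, not the authors' pipeline; the helper name and array shapes are our own):

```python
import numpy as np

# Toy stand-ins for per-trial beta estimates: one beta per food-image block,
# for a seed ROI and several target regions. Real data would come from a GLM
# fit with one regressor per trial; these arrays are purely illustrative.
rng = np.random.default_rng(0)
n_trials, n_targets = 40, 5
seed_betas = rng.normal(size=n_trials)
target_betas = 0.6 * seed_betas[:, None] + rng.normal(size=(n_trials, n_targets))

def beta_series_correlation(seed, targets):
    """Pearson correlation of the seed's trial-wise betas with each target's.

    High values are read as task-related functional connectivity."""
    seed_z = (seed - seed.mean()) / seed.std()
    targets_z = (targets - targets.mean(axis=0)) / targets.std(axis=0)
    return (seed_z[:, None] * targets_z).mean(axis=0)

fc = beta_series_correlation(seed_betas, target_betas)
```

Condition contrasts (e.g. HSHF vs. LSLF blocks) would then compare such correlation values between the corresponding trial subsets.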

Keywords: fMRI, functional connectivity, task-based, beta series correlation

Procedia PDF Downloads 242
399 Interior Architecture in the Anthropocene: Engaging the Subnature through the Intensification of Body-Surface Interaction

Authors: Verarisa Ujung

Abstract:

The Anthropocene, which scientists define as a new geological epoch in which human intervention has the dominant influence on geological, atmospheric, and ecological processes, challenges contemporary discourse in architecture and interiors. This dominant influence blurs the distinction between the notions of nature, subnature, human, and non-human. Consequently, living in the Anthropocene demands sensitivity and responsiveness to heighten our sense of the rhythm of transformation and our recognition of the environment as a product of natural, social, and historical processes. The notion of subnature is particularly emphasised in this paper to investigate the poetic sense of living with subnature. It can serve as a critical tool for exploring the aesthetic and programmatic implications of subnature on interiority. The ephemeral immateriality attached to subnature promotes an atmospheric delineation of interiority, the very inner significance of body-surface interaction, which is central to interior architecture discourse. This in turn reflects human activities and examines transformative change, architectural motion, and the traces left between moments. In this way, engaging the notion of subnature enables us to better understand the critical subject of interiority and may provide an in-depth study of interior architecture. Incorporating an exploration of the form, materiality, and pattern of subnature, this research seeks to grasp the inner significance of micro-to-macro approaches, so that the future of the interior may come to depend more on the investigation and development of responsive environments. To reflect upon the form, materiality, and intensity of subnature that is specifically characterised by natural, social, and historical processes, this research examines a volcanic land, White Island/Whakaari, New Zealand, as the chosen site of investigation. 
Emitting various forms and intensities of subnatures (smoke, mud, sulphur gas), this volcanic land is also open to new inhabitation within the sulphur factory ruins that reflect past human occupation. In this way, temporal and natural manifestations of materiality, artefact, and performance can be traced out and may reveal the meaningful relations among space, inhabitation, and the well-being of inhabitants in the Anthropocene.

Keywords: anthropocene, body, intensification, intensity, interior architecture, subnature, surface

Procedia PDF Downloads 151
398 The Confluence between Autism Spectrum Disorder and the Schizoid Personality

Authors: Murray David Schane

Abstract:

Through years of clinical encounters with patients with autism spectrum disorders and those with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical, distinct treatment strategies for each have been devised. The paper compares and contrasts the two conditions; their apparent similarities are found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of desire for or enjoyment of close relationships; and preference for solitary activities. In this paper autism, fundamentally a communicative disorder, is shown to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. The schizoid personality, in turn, is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia. 
Through presentations of clinical examples, the treatment of autists of the Asperger type is shown to address the autist's extreme social aversion, which also precludes the experience of empathy. Autists are shown to form social attachments, but without the capacity to interact with mutual concern. Empathy is shown to be teachable: as social avoidance relents, autists can come to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids is shown to revolve around joining empathically with the schizoid's apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kind of clues to behavioral patterns that can be related to genetic, epigenetic, and neurobiological measures. But as these clinical examples attest, treatment strategies have significant impact.

Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions

Procedia PDF Downloads 91
397 SLAPP Suits: An Encroachment On Human Rights Of A Global Proportion And What Can Be Done About It

Authors: Laura Lee Prather

Abstract:

A functioning democracy is defined by various characteristics, including freedom of speech, equality, human rights, rule of law, and many more. Lawsuits brought to intimidate speakers, drain the resources of community members, and silence journalists and others who speak out in support of matters of public concern are an abuse of the legal system and an encroachment on human rights. The impact can have a broad chilling effect, deterring others from speaking out against abuse. This article aims to suggest ways to address this form of judicial harassment. In 1988, University of Denver professors George Pring and Penelope Canan coined the term “SLAPP” when they brought to light a troubling trend of people getting sued for speaking out about matters of public concern. Their research demonstrated that thousands of people engaging in public debate and citizen involvement in government have been and will be the targets of multi-million-dollar lawsuits for the purpose of silencing them and dissuading others from speaking out in the future. SLAPP actions chill information and harm the public at large. Professors Pring and Canan catalogued a tsunami of SLAPP suits filed by public officials, real estate developers, and businessmen against environmentalists, consumers, women’s rights advocates, and more. SLAPPs are now seen in every region of the world as a means to intimidate people into silence and are viewed as a global affront to human rights. Anti-SLAPP laws are the antidote to SLAPP suits; while commonplace in the United States, they are only recently being considered in the EU and the UK. 
This researcher studied more than thirty years of anti-SLAPP legislative policy in the U.S.; the call for evidence and the resultant EU Commission Anti-SLAPP Directive and Member State recommendations; and the call for evidence by the UK Ministry of Justice, the response, and the model anti-SLAPP law presented to the UK Parliament; and conducted dozens of interviews with NGOs throughout the EU, UK, and US to identify varying approaches to SLAPP lawsuits, public policy, and support for SLAPP victims. This paper identifies best practices taken from the US, EU, and UK that can be implemented globally to help combat SLAPPs by: (1) raising awareness about SLAPPs, how to identify them, and how to recognize habitual abusers of the court system; (2) engaging governments in the policy discussion on combatting SLAPPs and supporting SLAPP victims; (3) educating judges in recognizing SLAPPs and providing general training on encroachments on human rights; and (4) holding lawyers accountable for ravaging the rule of law.

Keywords: anti-SLAPP laws and policy, comparative media law and policy, EU anti-SLAPP directive and member recommendations, international human right of freedom of expression

Procedia PDF Downloads 53
396 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
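The paper notes that a fast algorithm was needed to generate large, valid samples of such rough processes for network training. A minimal sketch of how one discretized fractional Ornstein-Uhlenbeck path might be generated (the Cholesky-based fractional Gaussian noise generator and all parameter names here are our illustrative choices, not the authors' faster method):

```python
import numpy as np

def fgn(n, H, dt, rng):
    """Fractional Gaussian noise via Cholesky factorization of its covariance."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    # Unit-variance fGn, rescaled to step size dt via self-similarity.
    return (L @ rng.normal(size=n)) * dt ** H

def fou_path(n=500, H=0.1, lam=1.0, sigma=0.5, dt=0.01, seed=0):
    """Euler scheme for dX = -lam * X dt + sigma * dB^H (rough when H < 1/2)."""
    rng = np.random.default_rng(seed)
    noise = sigma * fgn(n, H, dt, rng)
    x = np.zeros(n + 1)
    for i in range(n):
        x[i + 1] = x[i] - lam * x[i] * dt + noise[i]
    return x

path = fou_path()
```

Many such paths, labeled with their generating (H, lam, sigma), would form the training set for a parameter-estimating network.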

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 86
395 Preliminary Study of Gold Nanostars/Enhanced Filter for Keratitis Microorganism Raman Fingerprint Analysis

Authors: Chi-Chang Lin, Jian-Rong Wu, Jiun-Yan Chiu

Abstract:

Myopia, a ubiquitous condition requiring eyesight correction with optical lenses, troubles many people in their daily lives. In recent years, younger people have taken a growing interest in contact lenses because of their convenience and aesthetics. Clinically, the risk of eye infections increases owing to incorrect contact lens use and unsupervised cleaning, raising the risk of infection of the cornea, termed ocular keratitis. To meet these identification needs, a new detection or analysis method offering rapid and more accurate identification of clinical microorganisms is badly needed. In our study, we take advantage of Raman spectroscopy, whose unique fingerprints for different functional groups make it a distinct and fast examination tool for microorganisms. Raman scattering signals are normally too weak for detection, especially in the biological field. Here, we applied special SERS enhancement substrates to generate stronger Raman signals. The SERS filter we designed in this article was prepared by depositing silver nanoparticles directly onto a cellulose filter surface, and suspension nanoparticles, gold nanostars (AuNSs), were also introduced to achieve better enhancement for lower-concentration analytes (i.e., various bacteria). The research also focuses on the shape effect of the synthetic AuNSs: needle-like surface morphology may create more hot spots, yielding higher SERS enhancement. We utilized the newly designed SERS technology to distinguish bacteria from ocular keratitis at the strain level, and specific Raman and SERS fingerprints were grouped under a pattern recognition process. We report a new method combining different SERS substrates that can be applied for clinical microorganism detection at the strain level with simple, rapid preparation and low cost. 
The presented SERS technology not only shows great potential for clinical bacteria detection but can also be used for environmental pollution and food safety analysis.

Keywords: bacteria, gold nanostars, Raman spectroscopy, surface-enhanced Raman scattering filter

Procedia PDF Downloads 141
394 Glyco-Biosensing as a Novel Tool for Prostate Cancer Early-Stage Diagnosis

Authors: Pavel Damborsky, Martina Zamorova, Jaroslav Katrlik

Abstract:

Prostate cancer is annually the most common newly diagnosed cancer among men. An extensive body of evidence suggests that the traditional serum prostate-specific antigen (PSA) assay still suffers from a lack of sufficient specificity and sensitivity, resulting in vast over-diagnosis and overtreatment. Thus, the early-stage detection of prostate cancer (PCa) undisputedly plays a critical role in successful treatment and improved quality of life. Over the last decade, particular altered glycans have been described that are associated with a range of chronic diseases, including cancer and inflammation. These glycan differences enable a distinction to be made between physiological and pathological states and suggest a valuable biosensing tool for diagnosis and follow-up purposes. Aberrant glycosylation is one of the major characteristics of disease progression. Consequently, the aim of this study was to develop a more reliable tool for early-stage PCa diagnosis employing lectins as glyco-recognition elements. Biosensor and biochip technology employing lectin-based glyco-profiling is one of the most promising strategies for providing fast and efficient analysis of glycoproteins. Proof-of-concept experiments based on a sandwich assay employing an anti-PSA antibody and an aptamer as capture molecules, followed by lectin glycoprofiling, were performed. We present a lectin-based biosensing assay for glycoprofiling of the serum biomarker PSA using different biosensor and biochip platforms, such as label-free surface plasmon resonance (SPR) and a microarray with fluorescent labeling. The results suggest significant differences in the interaction of particular lectins with PSA. Antibody-based assays are frequently associated with sensitivity, reproducibility, and cross-reactivity issues. Aptamers provide remarkable advantages over antibodies due to their nucleic acid origin, stability, and lack of glycosylation. 
All these data are a further step toward the construction of highly selective, sensitive, and reliable sensors for early-stage diagnosis. The experimental set-up also holds promise for the development of comparable assays for other glycosylated disease biomarkers.

Keywords: biomarker, glycosylation, lectin, prostate cancer

Procedia PDF Downloads 383
393 On the Development of Evidential Contrasts in the Greater Himalayan Region

Authors: Marius Zemp

Abstract:

Evidentials indicate how the speaker obtained the information conveyed in a statement. Detailed diachronic-functional accounts of evidential contrasts found in the Greater Himalayan Region (GHR) reveal that contrasting evidentials are not only defined against each other but also that most of them once had different aspecto-temporal (TA) values which must have aligned when their contrast was conventionalized. Based on these accounts, the present paper sheds light on hitherto unidentified mechanisms of grammatical change. The main insights of the present study were facilitated by ‘functional reconstruction’, which (i) revolves around morphemes which appear to be used in divergent ways within a language and/or across different related languages, (ii) persistently devises hypotheses as to how these functional divergences may have developed, and (iii) retains those hypotheses which most plausibly and economically account for the data. Based on the dense and detailed grammatical literature on the Tibetic language family, the author of this study is able to reconstruct the initial steps by which its evidentiality systems developed: By the time Proto-Tibetan started to be spread across much of Central Asia in the 7th century CE, verbal concatenations with and without a connective -s had become common. As typical for resultative constructions around the globe, Proto-Tibetan *V-s-’dug ‘was there, having undergone V’ (employing the simple past of ’dug ‘stay, be there’) allowed both for a perfect reading (‘the state resulting from V holds at the moment of speech’) and an inferential reading (‘(I infer from its result that) V has taken place’). 
In Western Tibetic, *V-s-’dug grammaticalized in its perfect meaning as it became contrasted with perfect *V-s-yod ‘is there, having undergone V’ (employing the existential copula yod); that is, *V-s-’dug came to mean that the speaker directly witnessed the profiled result of V, whereas *V-s-yod came to mean that the speaker does not depend on direct evidence of the result, as s/he simply knows that it holds. In Eastern Tibetic, on the other hand, V-s-’dug grammaticalized in its inferential past meaning as it became contrasted with past *V-thal ‘went past V-ing’ (employing the simple past of thal ‘go past’); that is, *V-s-’dug came to mean that the profiled past event was inferred from its result, while *V-thal came to mean that it was directly witnessed. Hence, depending on whether it became contrasted with a perfect or a past construction, resultative V-s-’dug grammaticalized either its direct evidential perfect or its inferential past function. This means that in both cases, evidential readings of constructions with distinct but overlapping TA-values became contrasted, and in order for their contrasting meanings to grammaticalize, the constructions had to agree on their tertium comparationis, which was their shared TA-value. By showing that other types of evidential contrasts in the GHR are also TA-aligned, while no single markers (or privative contrasts) are found to have grammaticalized evidential functions, the present study suggests that, at least in this region of the world, evidential meanings grammaticalize only in equipollent contrasts, which always end up TA-aligned.

Keywords: evidential contrasts, functional-diachronic accounts, grammatical change, Himalayan languages, tense/aspect-alignment

Procedia PDF Downloads 101
392 The Comparative Study of Attitudes toward Entrepreneurial Intention between ASEAN and Europe: An Analysis Using GEM Data

Authors: Suchart Tripopsakul

Abstract:

This paper uses data from the Global Entrepreneurship Monitor (GEM) to investigate differences in attitudes towards entrepreneurial intention (EI). EI is generally assumed to be the single most relevant predictor of entrepreneurial behavior. The aim of this paper is to examine the effect of a range of attitudes on an individual's intent to start a new venture. A cross-cultural comparison between Asia and Europe is used to further investigate possible differences between potential entrepreneurs from these distinct national contexts. The empirical analysis includes a GEM data set of 10 countries (n = 10,306) collected in 2013. Logistic regression is used to investigate the effect of individuals' attitudes on EI. Independent variables include individuals' perceived capabilities, the ability to recognize business opportunities, entrepreneurial networks, and risk perceptions, as well as a range of socio-cultural attitudes. Moreover, a cross-cultural comparison of the model is conducted across six ASEAN nations (Malaysia, Indonesia, the Philippines, Singapore, Vietnam, and Thailand) and four European nations (Spain, Sweden, Germany, and the United Kingdom). The findings support the relationship between individuals' attitudes and their entrepreneurial intention. Individual capability, opportunity recognition, networks, and a range of socio-cultural perceptions all significantly influence EI. Interestingly, the impact of media attention on entrepreneurship was found to influence EI in ASEAN, but not in Europe, while fear of failure was found to influence EI in Europe, but not in ASEAN. The paper thus develops and empirically tests a model of attitudes toward entrepreneurial intention in ASEAN and Europe. 
Moreover, the resistance of ASEAN entrepreneurs to otherwise high rates of fear of failure, together with the high impact of media attention, is proposed as an explanation for the relatively high rates of entrepreneurial activity in ASEAN reported by GEM. The paper utilizes a representative sample of 10,306 individuals in 10 countries. A range of attitudes was found to significantly influence entrepreneurial intention. Many of these perceptions, such as the impact of media attention on entrepreneurship, can be influenced by government policy. The paper also suggests strategies by which Asian economies in particular can benefit from the apparently high impact of media attention on entrepreneurship.
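To illustrate the kind of model estimated here, a toy logistic regression of EI on binary attitude indicators (the simulated data, coefficients, and variable names are ours; GEM's actual survey items and coding differ):

```python
import numpy as np

# Simulated respondents: four yes/no attitude items, loosely mirroring the
# predictors named above (capabilities, opportunities, network, fear of failure).
rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([rng.integers(0, 2, n) for _ in range(4)])
true_logit = -1.5 + 1.0 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2] - 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-true_logit))  # EI = 1 with logistic prob.

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient ascent on the logistic log-likelihood."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

w = fit_logistic(X, y)  # w[0] intercept, w[1:] attitude effects on log-odds
```

Cross-cultural differences would then be read off by fitting such a model separately per region and comparing which coefficients are significant.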

Keywords: entrepreneurial intention, attitude, GEM, ASEAN, Europe

Procedia PDF Downloads 278
391 Human Rights in the United States: Challenges and Lessons from the Period 1948-2018

Authors: Mary Carmen Peloche Barrera

Abstract:

Since its early years as an independent nation, the United States has been one of the main promoters of the recognition, legislation, and protection of human rights. In the matter of freedom, the founding father Thomas Jefferson envisioned the role of the U.S. as a defender of freedom and equality throughout the world. This founding ideal shaped America's domestic and foreign policy in the 19th and 20th centuries and became an aspiration of the country to expand its values and institutions. The history of the emergence of human rights cannot be studied without reference to leaders such as Woodrow Wilson, Franklin and Eleanor Roosevelt, and Martin Luther King. Throughout its history, this country has proclaimed that the protection of the freedoms of men, both inside and outside its borders, is practically the reason for its existence. Although the United States was one of the first countries to recognize the existence of inalienable rights of individuals, as well as the main promoter of the Universal Declaration of Human Rights of 1948, the country has gone through critical moments that have led to questioning its commitment to the issue. Racial segregation, international military interventions, national security strategy, and national legislation on immigration are some of the most controversial issues related to decisions and actions driven by the United States, which are at odds with its role as an advocate of human rights, both in the Americas and in the rest of the world. The aim of this paper is to study the oscillation of the efforts and commitments of the United States towards human rights. The paper will analyze the history and evolution of human rights in the United States in order to study the greatest challenges for the country in this matter. It will focus on both domestic policy (related to demographic issues) and foreign policy (its role in a post-war world). 
Currently, more countries are joining the multilateral efforts for the promotion and protection of human rights. At the same time, the United States is one of the least committed countries in this respect, having ratified only 5 of the 18 treaties emanating from the United Nations. The last ratification was carried out in 2002, and since then the country has been losing ground, at an increasingly vertiginous pace, in its credibility and, even worse, in its role as leader of 'the free world'. With or without the United States, the protection of human rights should remain the main goal of the international community.

Keywords: United States, human rights, foreign policy, domestic policy

Procedia PDF Downloads 98
390 Current Deflecting Wall: A Promising Structure for Minimising Siltation in Semi-Enclosed Docks

Authors: A. A. Purohit, A. Basu, K. A. Chavan, M. D. Kudale

Abstract:

Many estuarine harbours in the world face the problem of siltation in docks, channel entrances, etc. The harbours in India are no exception and require maintenance dredging to achieve navigable depths and keep them operable. Dredging is thus inevitable and is a costly affair. The heavy siltation in docks in well-mixed, tide-dominated estuaries is mainly due to the settlement of cohesive sediments in suspension. As such, there is a need for a permanent solution that minimises siltation in such docks by constructing structures outside the dock to alter the hydrodynamic flow field responsible for siltation. One such dock on the west coast of India, where siltation of about 2.5-3 m/annum prevails, was considered in order to understand the hydrodynamic flow field responsible for siltation. The dock is situated in a region where a macro-type semi-diurnal tide (range of about 5 m) prevails. In order to change the flow field responsible for siltation inside the dock, the suitability of a Current Deflecting Wall (CDW) outside the dock was studied; the wall is intended to minimise the sediment exchange rate and siltation in the dock. A well-calibrated physical tidal model was used to understand the flow field during various phases of the tide for the existing dock in Mumbai harbour. At the harbour entrance, where the tidal flux exchanges in and out of the dock, measurements of water level and current were made to estimate the sediment transport capacity. A distorted scaled model (1:400 (H) and 1:80 (V)) of the Mumbai area was used to study the tidal flow phenomenon, with tides generated by an automatic tide generator. Hydraulic model studies carried out under the existing condition (without the CDW) reveal that, during the initial hours of the flood tide, the flow hugs the dock's breakwater; the part of the flow that enters the dock forms a number of eddies of varying sizes inside the basin, while the remaining part bypasses the entrance of the dock. 
During ebb, the flow direction reverses, and part of the flow re-enters the dock from outside and creates eddies at its entrance. These eddies do not allow the water/sediment mass to come out and result in the settlement of sediments in the dock, both because of the eddies themselves and because of the longer retention of sediment. In later hours, the current strength outside the dock entrance reduces and allows the water mass of the dock to come out. In order to improve the flow field inside the dockyard, two CDWs of lengths 300 m and 40 m were proposed outside the dock breakwater, in line with the pier wall at the dock entrance. Model studies reveal that, during flood, the major flow gets deflected away from the entrance and no eddies are formed inside the dock, while during ebb the flow does not re-enter the dock, and the sediment flux starts emptying immediately during the initial hours of ebb. This reduces not only the entry of sediment into the dock, by about 40%, but also the deposition, by about 42%, owing to less retention. Thus, the CDW is a promising solution to significantly reduce siltation in docks.

Keywords: current deflecting wall, eddies, hydraulic model, macro tide, siltation

Procedia PDF Downloads 273
389 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to a deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case within a pre-defined scope. Meeting the requirements of specific robotic services with the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chatbot (conversational AI) delivers a given set of business cases, it is prompted to self-measure its performance and rethink every unknown dialog flow to improve the service by retraining with those new business cases. If the training process reaches a bottleneck and incurs difficulties, human personnel will be informed for further instructions; he or she may retrain the chatbot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service. 
With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge making polite conversation with visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.
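The cycle described above (serve dialogs, self-measure, collect unknown flows, retrain or escalate to a human) can be caricatured as follows. All class and method names are our illustrative stand-ins, not the authors' system; intent detection here is a trivial placeholder for their semantic analysis:

```python
class SelfLearningBot:
    """Toy self-learning loop: log unknown dialog flows, retrain or escalate."""

    def __init__(self, known_intents):
        self.known = set(known_intents)
        self.unknown_log = []

    def handle(self, utterance):
        intent = utterance.split()[0].lower()  # placeholder for semantic analysis
        if intent in self.known:
            return f"handled:{intent}"
        self.unknown_log.append(utterance)  # self-measure: record the miss
        return "fallback"

    def retrain(self, max_new=10):
        """Absorb logged unknowns into the known set; escalate if too many."""
        if len(self.unknown_log) > max_new:
            return "escalate_to_human"
        self.known |= {u.split()[0].lower() for u in self.unknown_log}
        self.unknown_log.clear()
        return "retrained"

bot = SelfLearningBot(["greet", "directions"])
bot.handle("greet visitor")           # in scope, handled
bot.handle("weather today please")    # out of scope, logged for retraining
status = bot.retrain()                # unknown flow absorbed into the known set
```

After the retraining step, the previously unknown flow is served directly, mirroring how the paper's system grows beyond its original scope.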

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 110
388 Molecular Diagnosis of a Virus Associated with Red Tip Disease and Its Detection by Non Destructive Sensor in Pineapple (Ananas comosus)

Authors: A. K. Faizah, G. Vadamalai, S. K. Balasundram, W. L. Lim

Abstract:

Pineapple (Ananas comosus) is a common crop in tropical and subtropical areas of the world. Malaysia once ranked among the top three pineapple producers in the world in the 1960s and early 1970s, after Hawaii and Brazil. Moreover, the government recognized the pineapple crop as one of the priority commodities to be developed for the domestic and international markets in the National Agriculture Policy. However, the pineapple industry in Malaysia still faces numerous challenges, one of which is the management of disease and pests. Red tip disease of pineapple was first recognized about 20 years ago in a commercial pineapple stand located in Simpang Renggam, Johor, Peninsular Malaysia. Since its discovery, the causal agent of this disease has not been confirmed. The epidemiology of red tip disease is still not fully understood. Nevertheless, the disease symptoms and the spread within the field seem to point toward viral infection. A bioassay test on nucleic acid extracted from red tip-affected pineapple was done on Nicotiana tabacum cv. Coker by rubbing on the extracted sap. Localised lesions were observed 3 weeks after inoculation. Negative staining of the freshly inoculated Nicotiana tabacum cv. Coker showed the presence of membrane-bound spherical particles with an average diameter of 94.25 nm under the transmission electron microscope. The shape and size of the particles were similar to a tospovirus. SDS-PAGE analysis of partially purified virions from inoculated N. tabacum produced a strong and a faint protein band with molecular masses of approximately 29 kDa and 55 kDa. Partially purified virions from symptomatic field pineapple leaves showed bands with molecular masses of approximately 29 kDa, 39 kDa, and 55 kDa. These bands may indicate the nucleocapsid protein identity of a tospovirus. 
Furthermore, a handheld sensor, the GreenSeeker, was used to detect red tip symptoms on pineapple non-destructively based on spectral reflectance, measured as the Normalized Difference Vegetation Index (NDVI). Red tip severity was estimated and correlated with NDVI. Linear regression models were developed, calibrated, and tested in order to estimate red tip disease severity from NDVI. Results showed a strong positive relationship between red tip disease severity and NDVI (r = 0.84).
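The calibration step described above can be sketched as ordinary least-squares linear regression relating NDVI readings to visually scored disease severity. The observations below are hypothetical, purely for illustration; the study's actual field measurements and model coefficients are not reproduced here.

```python
def fit_linear(x, y):
    """Return slope, intercept, and Pearson r for the model y ~ slope*x + intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx ** 0.5 * syy ** 0.5)
    return slope, intercept, r

# Hypothetical plot-level observations: (NDVI, severity in %)
ndvi = [0.82, 0.74, 0.69, 0.61, 0.55, 0.48]
severity = [5, 18, 27, 43, 55, 70]

slope, intercept, r = fit_linear(ndvi, severity)
predicted = slope * 0.65 + intercept  # estimated severity for an NDVI reading of 0.65
```

Note that in this made-up example severity rises as NDVI falls, so r comes out negative; the abstract reports the strength of the correlation (r = 0.84) without specifying its sign convention.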

Keywords: pineapple, diagnosis, virus, NDVI

Procedia PDF Downloads 766
387 Construction and Analysis of Tamazight (Berber) Text Corpus

Authors: Zayd Khayi

Abstract:

This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. In order to fill this gap, at least in part, a diachronic corpus of the Tamazight language was constructed and a program tool was developed. This work is devoted to building that tool to analyze different aspects of Tamazight and its dialects used in North Africa, specifically in Morocco. It focuses on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin script was a good choice because of the many sources available in it. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts that cover a long historical period. It is free, and it will be useful for further investigations. The texts were transformed into XML format for standardization. The corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, an original user interface and software prototype were developed by combining web design technologies and Python. The corpus provides users with the ability to distinguish easily between feminine and masculine nouns and verbs. The interface is available in three languages: TMZ, FR, and EN. The selected texts were not initially categorized; this classification was done manually, since within corpus linguistics there is currently no commonly accepted approach to the classification of texts. The texts are distinguished into ten categories. To describe and represent the texts in the corpus, we elaborated the XML structure according to the TEI recommendations. The search function allows users to retrieve the types of words they are looking for, such as feminine and masculine nouns and verbs. Nouns are divided into two groups.
Gender in the corpus has two forms. The neutral form of a word corresponds to the masculine, while the feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., Tarbat (girl), Tamtut (woman), Taxamt (tent), and Tislit (bride). However, there are some words whose feminine form contains only the prefix t- and the suffix -a, e.g., Tasa (liver), tawja (family), and tarwa (progenitors). Generally, Tamazight masculine words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i', e.g., Asklu (tree), udi (cheese), ighef (head). Verbs in the corpus for the first person singular and plural carry the suffixes 'agh', 'ex', 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool supports the following operations on the corpus: listing all tokens; listing unique words; computing lexical diversity; and handling different grammatical requests. To conclude, this corpus has focused on only a small group of parts of speech in the Tamazight language: verbs and nouns. Work is still ongoing on adjectives, pronouns, adverbs, and others.
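The listed corpus operations can be sketched in Python, the language the prototype reportedly uses. The tokenizer, the affix rule, and the sample words below are deliberate simplifications for illustration only; real Tamazight morphology is far richer than a single prefix/suffix check, and the tool's actual implementation is not reproduced here.

```python
def tokenize(text):
    """Naive whitespace tokenizer (illustrative only)."""
    return text.lower().split()

def lexical_diversity(tokens):
    """Type-token ratio: unique word forms divided by total tokens."""
    return len(set(tokens)) / len(tokens)

def looks_feminine(word):
    """Naive check for the double t-...-t affix (e.g. 'tarbat', 'tislit')."""
    w = word.lower()
    return len(w) > 2 and w.startswith("t") and w.endswith("t")

tokens = tokenize("Tarbat tislit asklu udi tarbat")
unique_words = sorted(set(tokens))
diversity = lexical_diversity(tokens)  # 4 unique forms / 5 tokens = 0.8
feminine = [w for w in tokens if looks_feminine(w)]
```

A production tool would also need the prefix-only feminine pattern (t- plus -a) and the masculine prefix and verb-suffix rules mentioned above, each as its own morphological check.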

Keywords: Tamazight (Berber) language, corpus linguistics, grammar rules, statistical methods

Procedia PDF Downloads 39
386 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) applied to low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, objects are scattered across the image with different sizes, categories, layouts, and numbers. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the two CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable since each object in a scene appears at its own specific scale. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
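The scale-wise normalization and merging step can be sketched as follows. This is a simplified illustration with random stand-in descriptors, and with plain average pooling in place of full Fisher Vector encoding to keep it short; the array shapes and function names are assumptions, not the authors' code.

```python
import numpy as np

def scale_wise_representation(per_scale_features):
    """per_scale_features: list of (n_i, d) arrays, one per image scale.

    Each scale is pooled to a single d-dimensional vector, L2-normalized
    (the 'scale-wise normalization' step), and the scales are then merged
    by average pooling so no scale dominates just because it contributed
    more local descriptors.
    """
    pooled = []
    for feats in per_scale_features:
        v = feats.mean(axis=0)               # aggregate descriptors at this scale
        v = v / (np.linalg.norm(v) + 1e-12)  # scale-wise L2 normalization
        pooled.append(v)
    return np.mean(pooled, axis=0)           # merge scales by average pooling

# Stand-in activations: three scales with different numbers of local descriptors
rng = np.random.default_rng(0)
scales = [rng.normal(size=(n, 64)) for n in (200, 80, 30)]
rep = scale_wise_representation(scales)      # single 64-d vector for the SVM
```

In the paper's pipeline the per-scale aggregation is a Fisher kernel over CNN activations rather than a mean, but the balancing role of the normalization is the same.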

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 305
385 Female Autism Spectrum Disorder and Understanding Rigid Repetitive Behaviors

Authors: Erin Micali, Katerina Tolstikova, Cheryl Maykel, Elizabeth Harwood

Abstract:

Female ASD is seldom studied separately from male ASD. Further, females with ASD are disproportionately underrepresented in the research, at a ratio of 3:1 (male to female). As such, much of the current understanding of female rigid repetitive behaviors (RRBs) stems from research on male RRBs. This can be detrimental to understanding female ASD because it largely discounts female camouflaging and the possibility that females present their autistic symptoms differently. Current literature suggests that females with ASD engage in fewer RRBs than males with ASD, and that when females do engage in RRBs, they are likely to show more subtle, less overt obsessions and repetitive behaviors than males. Method: The current study utilized a mixed-methods, cross-sectional research design to identify the type and frequency of RRBs that females with ASD engaged in. The researcher recruited only females for the present study, with the criteria that they be at least six years old and not have a co-occurring cognitive impairment. Results: The researcher collected previous testing data (Autism Diagnostic Interview-Revised (ADI-R), Child or Adolescent/Adult Sensory Profile-2, Autism/Empathy Quotient, Yale-Brown Obsessive Compulsive Checklist, Rigid Repetitive Behavior Checklist (evaluator-created list), and a demographic questionnaire) from 25 total participants. The participants' ages ranged from six to 52. The participants were 96% Caucasian and 4% Latin American. Qualitative analysis found that the participant pool engaged in six RRB themes: repetitive behaviors, socially restrictive behaviors, repetitive speech, difficulty with transition, obsessive behaviors, and restricted interests. Participants engaged in socially restrictive behaviors and restricted interests most frequently. Within the main themes, 40 subthemes were isolated, defined, and analyzed.
Further, a preliminary quantitative analysis was run to determine whether age impacted camouflaging behaviors and the overall presentation of RRBs; within this dataset, no such effect was found. Further qualitative analysis will be run to determine whether this dataset engaged in more overt or subtle RRBs, to confirm or rebut previous research. The researcher intends to run SPSS analyses to determine whether there is a statistical difference between each RRB theme and overall presentation. Secondly, each participant will be analyzed for presentation of RRBs, age, and previous diagnoses. Conclusion: The present study aimed to assist in diagnostic clarity. This was achieved by collecting data from a female-only participant pool across the lifespan. The current data helped clarify the types of RRBs engaged in. A limited sample size was a barrier in this study.

Keywords: autism spectrum disorder, camouflaging, rigid repetitive behaviors, gender disparity

Procedia PDF Downloads 111
384 The Role of Accounting and Auditing in Anti-Corruption Strategies: The Case of ECOWAS

Authors: Edna Gnomblerou

Abstract:

Given the current scale of the corruption epidemic in West African economies, governments are seeking immediate and effective measures to reduce the prevalence of the problem within the region. Generally, accountants and auditors are expected to help organizations detect illegal practices. However, their role in the fight against corruption is sometimes limited due to the collusive nature of corruption. The Danish anti-corruption model shows that the implementation of additional controls over public accounts and independent, efficient audits improves transparency and increases the probability of detection. This study is aimed at reviewing the existing anti-corruption policies of the Economic Community of West African States (ECOWAS) in order to observe the role attributed to accounting, auditing, and other managerial practices in its anti-corruption drive. It further discusses the usefulness of accounting and auditing in helping anti-corruption commissions control misconduct and improve the detection of irregularities within public administration. The purpose of this initiative is to identify and assess the relevance of accounting and auditing in curbing corruption. To meet this purpose, the study was designed to answer the questions of whether accounting and auditing processes were included in the reviewed anti-corruption strategies and, if so, whether they were effective in the detection process. A descriptive research method was adopted in examining the role of accounting and auditing in West African anti-corruption strategies. The analysis reveals that proper recognition of accounting standards and implementation of financial audits are viewed as strategic mechanisms in tackling corruption. Additionally, codes of conduct, whistle-blowing, and information disclosure to the public are among the most common managerial practices used throughout anti-corruption policies to effectively and efficiently address the problem.
These observations imply that sound anti-corruption strategies cannot ignore the value of including accounting and auditing processes. On one hand, this suggests that governments should employ all possible resources to improve accounting and auditing practices in the management of public sector organizations. On the other hand, governments must ensure that accounting and auditing practices are not limited to the private sector but are properly implemented in the public sector as well, where they constitute crucial mechanisms to control and reduce corrupt incentives.

Keywords: accounting, anti-corruption strategy, auditing, ECOWAS

Procedia PDF Downloads 231
383 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analyzing the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process, using a model trained on the 500K queries of the MS MARCO dataset, to extract the most relevant text passage and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. In evaluating such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query 'Where will the next Olympics be?'. The gold answer for this query as given in the GNQ dataset is 'Tokyo'. Since the dataset was collected in 2016, and the next Olympics after 2016 were the Tokyo 2020 Games, that answer was correct at the time. But if the same question is asked in 2022, then the answer is 'Paris, 2024'.
Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric that uses the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test set comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the system on a larger set of time-dependent QA pairs.
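The abstract does not spell out the metric, but one minimal sketch of timestamp-aware matching, with a hypothetical gold-answer timeline and a naive substring-matching rule (both assumptions, not the authors' design), might look like this:

```python
from datetime import date

# Hypothetical validity intervals for the gold answer to
# "Where will the next Olympics be?" (illustration only).
GOLD_TIMELINE = [
    (date(2016, 8, 22), date(2021, 8, 8), "tokyo"),
    (date(2021, 8, 9), date(2024, 8, 11), "paris"),
]

def gold_at(timestamp):
    """Return the gold answer valid at the given timestamp, if any."""
    for start, end, answer in GOLD_TIMELINE:
        if start <= timestamp <= end:
            return answer
    return None

def is_correct(top_n_answers, timestamp):
    """A prediction counts as correct if any top-n answer matches the
    gold answer valid at the query timestamp."""
    gold = gold_at(timestamp)
    return gold is not None and any(gold in a.lower() for a in top_n_answers)

ok_2018 = is_correct(["Tokyo, Japan", "Beijing"], date(2018, 1, 1))
ok_2022 = is_correct(["Tokyo, Japan", "Beijing"], date(2022, 6, 1))
```

Under this sketch the same top-n prediction is judged correct for a 2018 query and incorrect for a 2022 query, which is exactly the time-dependence the proposed evaluation is meant to capture.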

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 76
382 Analysis of Influencing Factors on Infield-Logistics: A Survey of Different Farm Types in Germany

Authors: Michael Mederle, Heinz Bernhardt

Abstract:

The management of machine fleets and autonomous vehicle control will considerably increase efficiency in future agricultural production. Entire process chains in particular, e.g., harvesting complexes with several interacting combine harvesters, grain carts, and removal trucks, provide a lot of optimization potential. Organization and pre-planning make these efficiency reserves accessible. One way to achieve this is to optimize infield path planning. Autonomous machinery in particular requires precise specifications of infield logistics in order to be navigated effectively and to optimize processes in the field, whether individually or in machine complexes. In the past, a lot of theoretical optimization has been done regarding infield logistics, mainly based on field geometry. However, there are reasons why farmers often do not apply the infield strategy suggested by mathematical route-planning tools. To make the computational optimization more useful for farmers, this study investigates these influencing factors through expert interviews. As a result, practice-oriented navigation not only to the field but also within the field will become possible. The survey study is intended to cover the entire range of German agriculture: rural mixed farms with simple technical equipment are considered, as well as large agricultural cooperatives that farm thousands of hectares using track guidance and various other electronic assistance systems. First results show that farm managers using guidance systems increasingly attune their infield logistics to direction-giving obstacles such as power lines. In consequence, they can avoid inefficient boom flips while applying plant protection with the sprayer. Livestock farmers rather focus on the application of organic manure, with its specific requirements concerning road conditions, landscape terrain, and field access points.
The cultivation of sugar beets places great demands on infield patterns because of its particularities, such as the row-crop system and high logistics demands. Furthermore, several machines working in the same field simultaneously influence each other, regardless of whether they are of the same type. Specific infield strategies are always based on interactions among several different influences and decision criteria. Single working steps such as tillage, seeding, plant protection, or harvest mostly cannot each be considered individually; the entire production process has to be taken into consideration to determine the right infield logistics. One long-term objective of this examination is to integrate the obtained influences on infield strategies as decision criteria into an infield navigation tool. In this way, path planning will become more practical for farmers, which is a basic requirement for automatic vehicle control and increased process efficiency.

Keywords: autonomous vehicle control, infield logistics, path planning, process optimization

Procedia PDF Downloads 207