Search results for: auditory error recognition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3597

897 Query in Grammatical Forms and Corpus Error Analysis

Authors: Katerina Florou

Abstract:

Two decades after the term "learner corpora" was coined for collections of texts created by foreign or second language learners across various language contexts, and some years after the suggestion to incorporate "focusing on form" within a Task-Based Learning framework, this study explores how learner corpora, whether error-annotated or not, can facilitate a focus on form in an educational setting. It argues that analyzing linguistic form enables students to delve into language and gain an understanding of different facets of the foreign language. The same objective applies when analyzing learner corpora marked with errors or in their raw state, but in this scenario the emphasis lies on identifying incorrect forms. Teachers should aim to address errors or gaps in the students' second language knowledge while they engage in a task. Building on this recommendation, we compared the written output of two student groups: the first group (G1) carried out the focusing-on-form phase by studying a specific aspect of the Italian language, namely the past participle, through examples from native speakers and grammar rules; the second group (G2) focused on form by scrutinizing their own errors and comparing them with analogous examples from a native speaker corpus. To test our hypothesis, we created four learner corpora. The first two were generated during the task phase, one for each group of students, while the remaining two were produced as a follow-up activity at the end of the lesson. The results of the first comparison indicated that students' exposure to their own errors can enhance their grasp of a grammatical element. The study is in its second stage and more results are to be announced.

Keywords: Corpus interlanguage analysis, task based learning, Italian language as FL, learner corpora

Procedia PDF Downloads 36
896 Sub-Chronic Exposure to Dexamethasone Impairs Cognitive Function and Insulin in Prefrontal Cortex of Male Wistar Rats

Authors: A. Alli-Oluwafuyi, A. Amin, S. M. Fii, S. O. Amusa, A. Imam, N. T. Asogwa, W. I. Abdulmajeed, F. Olaseinde, B. V. Owoyele

Abstract:

Chronic stress or prolonged glucocorticoid administration impairs higher cognitive functions in rodents and humans. However, the mechanisms are not fully clear. Insulin and its receptors are expressed in the brain and are involved in cognition. Insulin resistance accompanies Alzheimer's disease and the associated cognitive decline. The goal of this study was to evaluate the effects of sub-chronic administration of a glucocorticoid, dexamethasone (DEX), on behavior and biochemical changes in the prefrontal cortex (PFC). Male Wistar rats were administered DEX (2, 4 & 8 mg/kg, IP) or saline for seven consecutive days, and behavior was assessed in the following paradigms: "Y" maze, elevated plus maze, Morris water maze and novel object recognition (NOR) tests. Insulin, lactate dehydrogenase (LDH) and superoxide dismutase (SOD) activity were evaluated in homogenates of the prefrontal cortex. DEX-treated rats exhibited impaired prefrontal cortex function, manifesting as reduced locomotion, impaired novel object exploration and impaired short- and long-term spatial memory compared to normal controls (p < 0.05). These effects were not consistently dose-dependent. These behavioral alterations were accompanied by a decrease in insulin concentration observed in the PFC of 4 mg/kg DEX-treated rats compared to control (10 μIU/mg vs. 50 μIU/mg; p < 0.05) but not 2 mg/kg. Furthermore, we report a modification of the brain stress markers LDH and SOD (p > 0.05). These results indicate that prolonged activation of GCs disrupts prefrontal cortex function, which may be related to insulin impairment. These effects may not be attributable to a non-specific elevation of oxidative stress in the brain. Future studies will evaluate mechanisms of GR-induced insulin loss.

Keywords: dexamethasone, insulin, memory, prefrontal cortex

Procedia PDF Downloads 267
895 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration

Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine

Abstract:

The complex oblique shock phenomenon can be simply assumed to be a normal shock at the constant area section to simulate a sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models; most researchers consider an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two different ejector experimental test benches, a constant area-mixing ejector (CAM) and a constant pressure-mixing ejector (CPM), are considered, with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real value of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated by experimental data for the two types of ejectors. These reference data are then used as input to the 1-D model to calculate the lengths and the diameters of the ejectors. Afterwards, the design output geometry calculated by the 1-D model is compared directly with the corresponding experimental geometry. It was found that there is good agreement between the ejector dimensions obtained by the 1-D model, for both CAM and CPM, and the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant area length, and it is proven that the inlet normal shock assumption results in a more accurate length. Taking into account previous 1-D models, the results suggest placing the assumed normal shock at the inlet of the constant area duct when designing supersonic ejectors.
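The sharp pressure rise and velocity drop that the 1-D models capture with an assumed normal shock follow the standard gas-dynamics jump relations. A minimal sketch (the inlet Mach number and the use of γ = 1.4 here are illustrative assumptions, not values taken from the paper):

```python
import math

def normal_shock(M1: float, gamma: float = 1.4):
    """Static pressure ratio p2/p1 and downstream Mach number M2 across a
    normal shock (standard 1-D gas-dynamics relations)."""
    if M1 <= 1.0:
        raise ValueError("A normal shock requires supersonic inflow (M1 > 1)")
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 ** 2 - 1.0)
    M2 = math.sqrt(((gamma - 1.0) * M1 ** 2 + 2.0)
                   / (2.0 * gamma * M1 ** 2 - (gamma - 1.0)))
    return p_ratio, M2

# Illustrative: Mach 2 flow entering the constant-area duct
p_ratio, M2 = normal_shock(2.0)
```

The jump relations confirm the qualitative behavior the abstract describes: pressure increases sharply across the shock while the flow drops to subsonic speed.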

Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions

Procedia PDF Downloads 180
894 The Effectiveness of Using Picture Storybooks on Young English as a Foreign Language Learners for English Vocabulary Acquisition and Moral Education: A Case Study

Authors: Tiffany Yung Hsuan Ma

Abstract:

The Whole Language Approach, which gained prominence in the 1980s, and the increasing emphasis on multimodal resources in educational research have elevated the utilization of picture books in English as a foreign language (EFL) instruction. This approach underscores real-world language application, providing EFL learners with a range of sensory stimuli, including visual elements. Additionally, the substantial impact of picture books on fostering prosocial behaviors in children has garnered recognition. These narratives offer opportunities to impart essential values such as kindness, fairness, and respect. Examining how picture books enhance vocabulary acquisition can offer valuable insights for educators in devising engaging language activities conducive to a positive learning environment. This research entails a case study involving two kindergarten-aged EFL learners and employs qualitative methods, including worksheets, observations, and interviews with parents. It centers on three pivotal inquiries: (1) The extent of young learners' acquisition of essential vocabulary, (2) The influence of these books on their behavior at home, and (3) Effective teaching strategies for the seamless integration of picture storybooks into EFL instruction for young learners. The findings can provide guidance to parents, educators, curriculum developers, and policymakers regarding the advantages and optimal approaches to incorporating picture books into language instruction. Ultimately, this research has the potential to enhance English language learning outcomes and promote moral education within the Taiwanese EFL context.

Keywords: EFL, vocabulary acquisition, young learners, picture book, moral education

Procedia PDF Downloads 50
893 Into Composer’s Mind: Understanding the Process of Translating Emotions into Music

Authors: Sanam Preet Singh

Abstract:

Music, in comparison to any other art form, is more reactive and alive: it has the capacity to interact directly with the listener's mind and generate an emotional response. Most major research in the area has relied on the listener's perspective to draw an understanding of music and its effects; very few studies have focused on the source from which music originates, the music composers. This study aims to understand how music composers perceive emotions and translate them into music, in simpler terms, how music composers encode their compositions to express the intended emotions. One-to-one, in-depth, semi-structured interviews were conducted with 8 individuals, both male and female, who were professional to intermediate-level music composers, and thematic analysis was conducted to derive the themes. The analysis showed that there is no single process on which music composers rely; rather, there are combinations of multiple micro-processes which constitute the understanding and translation of emotions into music. In terms of the perception of emotions, the analysis revealed the role of processes such as rumination, mood influence and escapism. Unique themes about composers' top-down and bottom-up perceptions were also discovered. Further analysis revealed the role of imagination and emotional triggers in explaining how music composers make sense of emotions. The translation process revealed the role of articulation and instrumentalization in encoding emotions into a composition. Applications of the trial-and-error method, nature influences and flow in the translation process are also discussed. Finally, themes such as parallels between musical patterns and emotions, comfort zones and relatability emerged during the analysis.

Keywords: comfort zones, escapism, flow, rumination

Procedia PDF Downloads 71
892 Analysis and Control of Camera Type Weft Straightener

Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae

Abstract:

In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. It is important to shape the fabric before the heat treatment because it is difficult to revert once the fabric is formed. To produce a product of the right shape, the camera type weft straightener has recently been applied to capture and process fabric images quickly; it is more powerful in determining final textile quality than a photo-sensor. Positioned in front of a stenter machine, the weft straightener helps spread the fabric evenly and keep the angle between warp and weft constant at a right angle by handling the skew and bow rollers. To handle this tricky procedure, a structural analysis should be carried out in advance, based on which the control technology can be derived. The structural analysis figures out the specific contact/slippage characteristics between fabric and roller. We have already examined the applicability of the camera type weft straightener to plain weave fabric and found its feasibility and the specific working conditions of the machine and rollers. In this research, we explored a further application: whether the camera type weft straightener can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis was done in ANSYS software using the finite element method, and the control function was demonstrated by experiment. In conclusion, the structural analysis of the weft straightener identified a specific characteristic between roller and fabric, the control of the skew and bow rollers decreased the error in the angle between warp and weft, and it was proved that the camera type straightener can also be used for special fabrics.

Keywords: camera type weft straightener, structure analysis, control, skew and bow roller

Procedia PDF Downloads 281
891 The Impact of COVID-19 on the Mental Health of Residents of Saudi Arabia

Authors: Khaleel Alyahya, Faizah Alotaibi

Abstract:

The coronavirus disease 2019 (COVID-19) pandemic has caused an increase in general fear and anxiety around the globe. With public health measures including lockdown and travel restrictions, the COVID-19 period further resulted in a sudden increase in people's vulnerability to ill mental health. This vulnerability is greater among individuals who have a history of mental illness or are undergoing treatment and do not have easy access to medication and medical consultations. The study aims to measure the impact of COVID-19, and the degree of distress on the DASS scale, on the mental health of residents living in Saudi Arabia. The study is a quantitative, observational, cross-sectional study conducted in Saudi Arabia to measure the impact of COVID-19 on the mental health of both citizens and residents of Saudi Arabia during the pandemic. The study ran from February 2021 to June 2021, and a validated questionnaire was used. The targeted population was Saudi citizens and non-Saudi residents. A sample size of 800 participants was calculated with a single proportion formula at a 95% level of significance and 5% allowable error. The results revealed that participants who always exercised experienced the lowest levels of depression, anxiety, and stress. The highest prevalence of severe and extremely severe depression was among participants who sometimes exercised, at 53.2% for each. Similar results were obtained for anxiety and stress, where the extremely severe form was reported by those who sometimes exercised, at 54.8% and 72.2%, respectively. There was an inverse association between physical activity levels and levels of depression, anxiety, and stress during COVID-19. Similarly, the levels of depression, anxiety, and stress differed significantly according to exercise frequency during COVID-19.
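The single proportion formula mentioned for the sample size is n = z²·p·(1 − p)/e². A minimal sketch, assuming the conventional conservative choice p = 0.5 (the abstract does not state its p); the study's 800 participants exceed the resulting minimum, presumably to allow for non-response:

```python
import math

def sample_size_single_proportion(z: float = 1.96, p: float = 0.5,
                                  e: float = 0.05) -> int:
    """Minimum sample size for estimating a single proportion:
    n = z^2 * p * (1 - p) / e^2, rounded up."""
    return math.ceil(z ** 2 * p * (1.0 - p) / e ** 2)

# 95% confidence (z = 1.96), 5% allowable error, conservative p = 0.5
n_min = sample_size_single_proportion()
```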

Keywords: mental, COVID-19, pandemic, lockdown, depression, anxiety, stress

Procedia PDF Downloads 91
890 Investigating Naming and Connected Speech Impairments in Moroccan AD Patients

Authors: Mounia El Jaouhari, Mira Goral, Samir Diouny

Abstract:

Introduction: Previous research has indicated that language impairments are a recognized feature of many neurodegenerative disorders, including non-language-led dementia subtypes such as Alzheimer's disease (AD). In this preliminary study, the focal aim is to quantify the semantic content of naming and connected speech samples of Moroccan patients diagnosed with AD, using two tasks taken from the culturally adapted and validated Moroccan version of the Boston Diagnostic Aphasia Examination. Methods: Five individuals with AD and five neurologically healthy individuals matched for age, gender, and education will participate in the study. Participants with AD will be diagnosed on the basis of the Moroccan version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-4) screening test, the Moroccan version of the Mini Mental State Examination (MMSE) test scores, and neuroimaging analyses. The participants will engage in two tasks taken from the MDAE-SF: 1) picture description and 2) naming. Expected findings: Consistent with previous studies conducted on English-speaking AD patients, we expect to find significant word production and retrieval impairments in AD patients on all measures. Moreover, we expect to find category fluency impairments that further endorse semantic breakdown accounts. In sum, the findings of the current study will not only shed more light on the locus of the word retrieval impairments noted in AD but also reflect the nature of Arabic morphology. In addition, the error patterns are expected to be similar to those found in previous AD studies in other languages.

Keywords: alzheimer's disease, anomia, connected speech, semantic impairments, moroccan arabic

Procedia PDF Downloads 126
889 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings down the 95th latency percentile from 30 to 4 seconds.
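The idea of annotating sampled result values as accurate or not can be sketched as follows. This is an assumed, simplified illustration of scaling up a count from a uniform row sample and flagging it by sample support; the paper's actual heuristic, predicate, and threshold are not specified here:

```python
import numpy as np

rng = np.random.default_rng(7)

TRUE_MATCHES = 100_000   # records matching a hypothetical query predicate
SAMPLE_RATE = 0.01       # 1% uniform row sample
MIN_SAMPLE_COUNT = 100   # assumed threshold for calling an estimate "accurate"

# Number of matching records that land in the sample (Bernoulli thinning).
sampled = rng.binomial(TRUE_MATCHES, SAMPLE_RATE)

estimate = sampled / SAMPLE_RATE        # scale the sample count back up
accurate = sampled >= MIN_SAMPLE_COUNT  # annotate the result value
```

A value backed by many sampled rows has small relative error, so it can be shown to the user as an accurate intermediate result while the full scan is still running.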

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 245
888 The Internationalization of Capital Market Influencing Debt Sustainability's Impact on the Growth of the Nigerian Economy

Authors: Godwin Chigozie Okpara, Eugine Iheanacho

Abstract:

The paper set out to assess the sustainability of debt in the Nigerian economy. Precisely, it sought to determine the level of debt sustainability and its impact on the growth of the economy; whether internationalization of the capital market has positively influenced debt sustainability's impact on economic growth; and to ascertain the direction of causality between external debt sustainability and the growth of GDP. In the light of these objectives, ratio analysis was employed for the determination of debt sustainability. Our findings revealed that the periods 1986 – 1994 and 1999 – 2004 were periods of severely unsustainable borrowing. The unit root test showed that the variables of the growth model were integrated of order one, I(1), and the cointegration test provided evidence of long-run stability. Considering the dawn of internationalization of the capital market, the researchers employed the structural break approach, using the Chow breakpoint test on the vector error correction model (VECM). The result of the VECM showed that debt sustainability, measured by the debt-to-GDP ratio, exerts a negative and significant impact on the growth of the economy, while debt burden, measured by the debt-export ratio and the debt service-export ratio, is negative though insignificant for the growth of GDP. The Chow test result indicated that internationalization of the capital market has no significant effect on the debt overhang impact on the growth of the economy. The Granger causality test indicates a feedback effect from economic growth to debt sustainability growth indicators. On the basis of these findings, the researchers made some recommendations which, if followed, will go a long way toward ameliorating debt burdens and engendering economic growth.
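The Chow breakpoint test used above compares a pooled regression against separate regressions before and after the suspected break. A minimal sketch on synthetic data (the regressors, break point, and sample sizes here are illustrative assumptions, not the paper's series):

```python
import numpy as np

def chow_test(x1, y1, x2, y2):
    """Chow breakpoint F-statistic for a simple linear regression y = a + b*x,
    comparing a pooled fit against separate fits before/after the break:
    F = [(SSR_pooled - SSR_split) / k] / [SSR_split / (n1 + n2 - 2k)]."""
    def ssr(x, y):
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid
    k = 2  # parameters per regime (intercept + slope)
    ssr_pooled = ssr(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
    ssr_split = ssr(x1, y1) + ssr(x2, y2)
    dof = len(x1) + len(x2) - 2 * k
    return ((ssr_pooled - ssr_split) / k) / (ssr_split / dof)

rng = np.random.default_rng(0)
x1 = rng.normal(size=50); y1 = 1.0 + 2.0 * x1 + 0.1 * rng.normal(size=50)
# The intercept shifts at the (hypothetical) break, e.g. a regime change:
x2 = rng.normal(size=50); y2 = 5.0 + 2.0 * x2 + 0.1 * rng.normal(size=50)
F = chow_test(x1, y1, x2, y2)  # large F => reject parameter stability
```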

Keywords: debt sustainability, internationalization, capital market, cointegration, Chow test

Procedia PDF Downloads 414
887 Relation between Biochemical Parameters and Bone Density in Postmenopausal Women with Osteoporosis

Authors: Shokouh Momeni, Mohammad Reza Salamat, Ali Asghar Rastegari

Abstract:

Background: Osteoporosis is the most prevalent metabolic bone disease in postmenopausal women, associated with reduced bone mass and increased bone fracture. Measuring bone density in the lumbar spine and hip is a reliable measure of bone mass and can therefore specify the risk of fracture. Dual-energy X-ray absorptiometry (DXA) is an accurate, non-invasive system for measuring bone density, with a low margin of error and no complications. The present study aimed to investigate the relationship between biochemical parameters and bone density in postmenopausal women. Materials and methods: This cross-sectional study was conducted on 87 postmenopausal women referred to osteoporosis centers in Isfahan. Bone density was measured in the spine and hip area using a DXA system. Serum levels of calcium, phosphorus, alkaline phosphatase and magnesium were measured by autoanalyzer, and serum levels of vitamin D were measured by high-performance liquid chromatography (HPLC). Results: The mean values of calcium, phosphorus, alkaline phosphatase, vitamin D and magnesium did not show a significant difference between the two groups (P-value > 0.05). In the control group, the relationship between alkaline phosphatase and BMC and BA in the spine was significant, with correlation coefficients of -0.402 and 0.258, respectively (P-value < 0.05), and BMD and T-score in the femoral neck area showed a direct and significant relationship with phosphorus (Correlation = 0.368; P-value = 0.038). There was a significant relationship between the Z-score and calcium (Correlation = 0.358; P-value = 0.044). Conclusion: There was no significant relationship between the values of the calcium, phosphorus, alkaline phosphatase, vitamin D and magnesium parameters and bone density (spine and hip) in postmenopausal women.

Keywords: osteoporosis, menopause, bone mineral density, vitamin d, calcium, magnesium, alkaline phosphatase, phosphorus

Procedia PDF Downloads 153
886 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Turkey: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive case study at the country level using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Turkey, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. The long-run equilibrium in the VECM suggests no effects of the CO2 emissions and energy use on the GDP in Turkey. There exists a short-run bidirectional relationship between electricity and natural gas consumption, and there is a negative unidirectional causality running from the GDP to electricity use. Overall, the results partly support arguments that there are relationships between energy use and economic output; however, the effects may differ according to the source of energy, as in the case of Turkey for the period 1980-2010. There is no significant relationship between the CO2 emissions and the GDP, or between the CO2 emissions and energy use, in either the short term or the long term.
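The ADF test applied above checks for a unit root by regressing the differenced series on its lagged level (plus lagged differences in the augmented version). A minimal sketch of the unaugmented Dickey-Fuller regression on synthetic data, to illustrate the idea; production work would use a library implementation such as `statsmodels.tsa.stattools.adfuller` with proper critical values:

```python
import numpy as np

def dickey_fuller_t(y):
    """t-statistic on rho in the regression dy_t = c + rho * y_{t-1} + e_t.
    Strongly negative values indicate stationarity (no unit root)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(1)
t_stationary = dickey_fuller_t(rng.normal(size=200))            # white noise
t_unit_root = dickey_fuller_t(np.cumsum(rng.normal(size=200)))  # random walk
```

A level series that fails the test but whose first difference passes is integrated of order one, I(1), which is the precondition for the cointegration and VECM steps that follow.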

Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis

Procedia PDF Downloads 495
885 Raising Forest Voices: A Cross-Country Comparative Study of Indigenous Peoples’ Engagement with Grassroots Climate Change Mitigation Projects in the Initial Pilot Phase of Community-Based Reducing Emissions from Deforestation and forest Degradation

Authors: Karl D. Humm

Abstract:

The United Nations’ Community-based REDD+ (Reducing Emissions from Deforestation and forest Degradation) (CBR+) is a programme that directly finances grassroots climate change mitigation strategies that uplift Indigenous Peoples (IPs) and other marginalised groups. A pilot for it in six countries was developed in response to criticism of the REDD+ programme for excluding IPs from dialogues about climate change mitigation strategies affecting their lands and livelihoods. Despite the pilot’s conclusion in 2017, no complete report has yet been produced on the results of CBR+. To fill this gap, this study investigated the experiences with involving IPs in the CBR+ programmes and local projects across all six pilot countries. A literature review of official UN reports and academic articles identified challenges and successes with IP participation in REDD+ which became the basis for a framework guiding data collection. A mixed methods approach was used to collect and analyse qualitative and quantitative data from CBR+ documents and written interviews with CBR+ National Coordinators in each country for a cross-country comparative analysis. The study found that the most frequent challenges were lack of organisational capacity, illegal forest activities, and historically-based contentious relationships in IP and forest-dependent communities. Successful programmes included IPs and incorporated respect and recognition of IPs as major stakeholders in managing sustainable forests. Findings are summarized and shared with a set of recommendations for improvement of future projects.

Keywords: climate change, forests, indigenous peoples, REDD+

Procedia PDF Downloads 105
884 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda

Authors: Rutebuka Evariste, Zhang Lixiao

Abstract:

Mobile phone sales and stocks have shown exponential growth in past years globally, and the number of mobile phones produced each year surpassed one billion in 2007. This soaring growth of the related e-waste deserves sufficient attention regionally and globally, given that 40% of its total weight is metallic, of which 12 elements are identified as highly hazardous and 12 as less harmful. Different studies and methods have been used to estimate obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The study aim was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones, regardless of the country or region, and to overcome the previous errors. The logistic model method combined with the STELLA program was used to carry out this study. A simulation for Rwanda was then conducted and compared with other countries' results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, with 125 tons of waste, in 2014, with an e-waste production peak in 2017. It is expected to have 4.17 million obsolete phones with 351.97 tons of waste by 2020, along with an environmental impact intensity of 21 times that of 2005. Thus, it is concluded through model testing and validation that the present dynamic model is competent and able to deal with mobile phone e-waste production, as it has responded to the questions of previous studies from the Czech Republic, Iran, and China.
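The logistic model used above describes S-shaped growth toward a carrying capacity. A minimal sketch; the parameter values below (carrying capacity K, growth rate r, midpoint year t0) are illustrative assumptions, not the fitted Rwandan values:

```python
import math

def logistic_stock(t: float, K: float, r: float, t0: float) -> float:
    """Logistic (S-shaped) growth of a mobile phone stock:
    S(t) = K / (1 + exp(-r * (t - t0))), saturating at carrying capacity K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Hypothetical parameters for illustration only:
K, r, t0 = 4_170_000, 0.45, 2014
stock = {year: logistic_stock(year, K, r, t0) for year in range(2008, 2021)}
```

In a system-dynamics tool such as STELLA, the same relation is expressed as a stock with an inflow proportional to S·(1 − S/K); the obsolete-phone flow is then derived from the stock with a lifespan delay.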

Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics

Procedia PDF Downloads 328
883 Laser Registration and Supervisory Control of neuroArm Robotic Surgical System

Authors: Hamidreza Hoshyarmanesh, Hosein Madieh, Sanju Lama, Yaser Maddahi, Garnette R. Sutherland, Kourosh Zareinia

Abstract:

This paper illustrates the concept of an algorithm to register specified markers on the neuroArm surgical manipulators, an image-guided, MR-compatible, tele-operated robot for microsurgery and stereotaxy. Two range-finding approaches, namely time-of-flight and phase-shift, are evaluated for registration and supervisory control. The time-of-flight approach is implemented in a semi-field experiment to determine the precise position of a tiny retro-reflective moving object. The moving object simulates a surgical tool tip; the tool is a target that would be connected to the neuroArm end-effector during surgery inside the magnet bore of the MR imaging system. To apply the time-of-flight approach, a 905 nm pulsed laser diode and an avalanche photodiode are utilized as the transmitter and receiver, respectively. For the experiment, a high-frequency time-to-digital converter was designed using a field-programmable gate array. In the phase-shift approach, a continuous green laser beam with a wavelength of 530 nm was used as the transmitter. Results showed that a positioning error of 0.1 mm occurred when the scanner-target distance was set in the range of 2.5 to 3 meters. The effectiveness of this non-contact approach showed that the method could be employed as an alternative to a conventional mechanical registration arm. Furthermore, the approach is not limited by physical contact or by the extension of joint angles.
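The time-of-flight principle underlying the pulsed-laser setup is simply that the pulse travels to the target and back, so the one-way range is c·Δt/2, where Δt is the round-trip time measured by the time-to-digital converter. A minimal sketch (the 20 ns example value is illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Time-of-flight range: the pulse travels out and back,
    so the one-way distance is c * t / 2."""
    return C * round_trip_time_s / 2.0

d = tof_distance(20e-9)  # a 20 ns round trip corresponds to ~3 m
```

The halving of the round trip also shows why timing resolution dominates accuracy: sub-millimeter ranging requires resolving round-trip intervals on the order of picoseconds, hence the custom FPGA-based converter.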

Keywords: 3D laser scanner, intraoperative MR imaging, neuroArm, real time registration, robot-assisted surgery, supervisory control

Procedia PDF Downloads 271
882 Artificial Intelligence and Law

Authors: Mehrnoosh Abouzari, Shahrokh Shahraei

Abstract:

With the development of artificial intelligence in the present age, intelligent machines and systems have proven their actual and potential capabilities and are increasingly present in various fields of human life: industry, financial transactions, marketing, manufacturing, service affairs, politics, economics and various branches of the humanities. Therefore, despite the conservatism and prudence of the legal profession, traces of artificial intelligence can be seen in various areas of law. Estimating judicial robotics capability, intelligent judicial decision-making systems, intelligent adjustment of defender and attorney strategy, and the consolidation and regulation of different and scattered laws in each case to achieve judicial coherence, reduce divergence of opinion, and reduce prolonged hearings and discontent with the current legal system, through the design of rule-based, case-based and knowledge-based systems, are all efforts to apply AI in law. In this article, we identify the ways in which AI is applied in law and regulation, identify the dominant concerns in this area, and outline the relationship between these two fields in order to answer the question of how artificial intelligence can be used in different areas of law and what the implications of this application will be. The authors believe that the use of artificial intelligence in the legislative, judicial and executive branches can be very effective in government decisions and smart governance, helping to build smart communities across human and geographical boundaries and to realize humanity's long-held dream: a global village free of violence, partiality and human error. Therefore, in this article, we analyze the dimensions of how to use artificial intelligence in the three legislative, judicial and executive branches of government in order to realize its application.

Keywords: artificial intelligence, law, intelligent system, judge

Procedia PDF Downloads 98
881 Impact of Climate Change on Sea Level Rise along the Coastline of Mumbai City, India

Authors: Chakraborty Sudipta, A. R. Kambekar, Sarma Arnab

Abstract:

Sea-level rise is one of the most important impacts of anthropogenic climate change, resulting from global warming and the melting of ice at the Arctic and Antarctic. This paper reviews the investigations carried out by various researchers during the last decade, both on the Indian coast and elsewhere. It aims to ascertain how consistently the different suggested methods predict near-accurate future sea level rise along the coast of Mumbai. Case studies from the east coast, the southern tip, and the west and southwest coasts of India have been reviewed. The Coastal Vulnerability Index of several important international locations has been compared and found to match Intergovernmental Panel on Climate Change forecasts. The review covers the application of Geographic Information System mapping and remote sensing technology, with both Multispectral Scanner and Thematic Mapper data from Landsat classified through the Iterative Self-Organizing Data Analysis Technique to arrive at high, moderate, and low Coastal Vulnerability Index values for various important coastal cities. Rather than purely data-driven, hindcast-based forecasts of significant wave height, inclusion of the additional impact of sea level rise has been suggested. The efficacy and limitations of numerical methods versus Artificial Neural Networks have been assessed, and the importance of root mean square error in numerical results is noted. Among the computerized methods compared, forecasts obtained from MIKE 21 are considered more reliable than those from the Delft3D model.

Keywords: climate change, Coastal Vulnerability Index, global warming, sea level rise

Procedia PDF Downloads 117
880 Ecological impacts of Cage Farming: A Case Study of Lake Victoria, Kenya

Authors: Mercy Chepkirui, Reuben Omondi, Paul Orina, Albert Getabu, Lewis Sitoki, Jonathan Munguti

Abstract:

Globally, the decline in capture fisheries, combined with a growing population and increasing awareness of the nutritional benefits of white meat, has led to the development of aquaculture. Aquaculture is anticipated to meet the increasing demand for food from a human population that is likely to grow further by 2050; statistics indicate that more than 50% of the fish eaten globally in the future will come from aquaculture. Aquaculture began to be commercialized some decades ago, owing to technological advancement from traditional to modern culture systems, including cage farming. Cage farming technology has grown rapidly since its inception in Lake Victoria, Kenya. Currently, over 6,000 cages have been set up in Kenyan waters, which supports the Kenyan government's strategy to eliminate food insecurity and malnutrition, create employment, and promote a Blue Economy. However, as an open farming enterprise, cage culture is likely to release a large bulk of waste and hence alter the ecological integrity of the lake, through increased chlorophyll-a pigments, alteration of the plankton and macroinvertebrate communities, fish genetic pollution, and transmission of fish diseases and pathogens. Cage farming further increases nutrient loads, leading to the production of harmful algal blooms and thus negatively affecting aquatic and human life. Despite this ecological transformation, cage farming provides a platform for achieving the 2030 Sustainable Development Goals, especially food security and nutrition. Therefore, there is a need for Integrated Multitrophic Aquaculture, as part of the Blue Transformation, together with ecosystem monitoring.

Keywords: aquaculture, ecosystem, blue economy, food security

Procedia PDF Downloads 60
879 Numerical Simulation of Flow and Heat Transfer Characteristics with Various Working Conditions inside a Reactor of Wet Scrubber

Authors: Jonghyuk Yoon, Hyoungwoon Song, Youngbae Kim, Eunju Kim

Abstract:

Recently, with the rapid growth of the semiconductor industry, much interest has been focused on after-treatment systems that remove the polluted gas produced by semiconductor manufacturing processes, and the wet scrubber is one of the most widely used. In terms of the removal mechanism, the polluted gas is first removed by chemical reaction in the reactor part; the gas stream is then brought into contact with the scrubbing liquid by spraying. Effective design of the reactor inside the wet scrubber is highly important, since the removal performance of the reactor plays an important role in overall performance and stability. In the present study, a CFD (Computational Fluid Dynamics) analysis was performed to characterize the thermal and flow behavior inside a unit reactor of a wet scrubber. To verify the numerical result, the temperature distribution at various monitoring points was compared to the experimental result. The average error rate between them was 12-15%, and the numerical temperature distribution was in good agreement with the experimental data. Using the validated numerical method, the effect of the reactor geometry on the heat transfer rate was also examined, and the uniformity of the temperature distribution was improved by about 15%. Overall, the results of the present study provide useful information on the fluid behavior and thermal performance of various scrubber systems. This project is supported by the 'R&D Center for the reduction of Non-CO₂ Greenhouse gases (RE201706054)' funded by the Korea Ministry of Environment (MOE) as the Global Top Environment R&D Program.

Keywords: semiconductor, polluted gas, CFD (Computational Fluid Dynamics), wet scrubber, reactor

Procedia PDF Downloads 124
878 Service Provision in 'the Jungle': Describing Mental Health and Psychosocial Support Offered to Residents of the Calais Camp

Authors: Amy Darwin, Claire Blacklock

Abstract:

Background: Existing literature about delivering evidence-based mental health and psychosocial support (MHPSS) in emergency settings is limited. It is difficult to monitor and evaluate the approach to MHPSS in informal refugee camps such as 'The Jungle' in Calais, where there are multiple service providers, the majority of them volunteers. Aim: To identify experiences of MHPSS delivery by service providers in an informal camp environment in Calais, France, and to describe MHPSS barriers and opportunities in this type of setting. Method: Qualitative semi-structured interviews were conducted with 13 individuals from different organisations offering MHPSS in Calais and analysed using conventional content analysis. Results: Unsafe, uncertain, and unsanitary conditions in the camp made MHPSS difficult to implement, and these conditions contributed to the poor mental health of the residents. The majority of MHPSS was offered by volunteers who lacked resources and training, and there was no official overall camp leadership, which meant care was poorly coordinated and monitored. Strong relationships existed between volunteers and camp residents, but volunteers felt frustrated that they could not deliver the kind of MHPSS that they felt residents required. Conclusion: While long-term volunteers had built supportive relationships with camp residents, the lack of central coordination and leadership of MHPSS services and limited access to trained professionals made implementation of MHPSS problematic. Similarly, the camp lacked the infrastructure necessary to meet residents' basic needs. Formal recognition of the camp and clear central leadership were identified as necessary steps toward improving MHPSS delivery.

Keywords: Calais, mental health, refugees, The Jungle, MHPSS

Procedia PDF Downloads 225
877 Evaluation of Ensemble Classifiers for Intrusion Detection

Authors: M. Govindarajan

Abstract:

One of the major developments in machine learning in the past decade is the ensemble method, which builds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. The classifier ensembles are designed using Radial Basis Function (RBF) networks and Support Vector Machines (SVM) as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated on standard intrusion-detection datasets. The proposed approach consists of three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on these standard datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to that of other standard ensemble methods: the homogeneous baselines include error-correcting output codes (ECOC) and dagging, and the heterogeneous baselines include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on standard intrusion-detection datasets.
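The two ensemble styles described above can be sketched with scikit-learn. The snippet below is a minimal illustration, not the paper's implementation: it uses a synthetic dataset in place of the intrusion-detection data, and an RBF-kernel SVM stands in for the RBF-network base classifier, which scikit-learn does not provide.

```python
# Sketch of a homogeneous (bagging) and a heterogeneous (voting) ensemble,
# assuming scikit-learn and synthetic data in place of intrusion-detection sets.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Homogeneous ensemble: bagging many RBF-kernel SVMs on bootstrap samples.
bagged = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=0)
bagged.fit(X_tr, y_tr)

# Heterogeneous ensemble: majority voting over different base learners.
hybrid = VotingClassifier(
    [("rbf", SVC(kernel="rbf")), ("lin", SVC(kernel="linear"))], voting="hard"
)
hybrid.fit(X_tr, y_tr)
print(round(bagged.score(X_te, y_te), 2), round(hybrid.score(X_te, y_te), 2))
```

Accuracies on the synthetic data are only indicative; the paper's conclusions rest on its own intrusion-detection benchmarks.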

Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy

Procedia PDF Downloads 233
876 Estimation of Source Parameters and Moment Tensor Solution through Waveform Modeling of 2013 Kishtwar Earthquake

Authors: Shveta Puri, Shiv Jyoti Pandey, G. M. Bhat, Neha Raina

Abstract:

The Jammu and Kashmir region of the Northwest Himalaya has witnessed many devastating earthquakes in the recent past, yet it has remained unexplored by seismic investigations, apart from scanty records of the earthquakes that occurred in this region. In this study, we used local seismic data from 2013 recorded by the network of broadband seismographs in J&K. During this period, our seismic stations recorded about 207 earthquakes, including two moderate events: Mw 5.7 on 1 May 2013 and Mw 5.1 on 2 August 2013. We analyzed the events of Mw 3-4.6 and the main events only (to minimize error) for source parameters, b-value, and sense of movement through waveform modeling, in order to understand the seismotectonics and seismic hazard of the region. Most of the events are bounded between 32.9° N - 33.3° N latitude and 75.4° E - 76.1° E longitude; moment magnitude (Mw) ranges from 3 to 5.7, source radius (r) from 0.21 to 3.5 km, stress drop from 1.90 to 71.1 bars, and corner frequency from 0.39 to 6.06 Hz. The b-value for this region was found to be 0.83±0 from these events, which is lower than the normal value (b = 1), indicating that the area is under high stress. The travel-time and waveform inversion methods suggest focal depths of up to 10 km, probably above the detachment depth of the Himalayan region. The moment tensor solution of the main event of 2 August (Mw 5.1, 02:32:47 UTC) suggests that the source fault strikes at 295° with a dip of 33° and a rake of 85°. These events form an intense cluster of small to moderate events within a narrow zone between the Panjal Thrust and the Kishtwar Window. The moment tensor solutions of the main events and their aftershocks indicate that thrust-type movement is occurring in this region.
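The source radius and stress drop ranges quoted above are consistent with the standard Brune point-source relations. The sketch below illustrates those relations; the corner frequency, seismic moment, and shear-wave velocity used are hypothetical inputs for illustration, not the study's data.

```python
import math

def brune_source(fc_hz, m0_nm, beta_ms=3500.0):
    """Brune-model relations: source radius from corner frequency,
    stress drop from seismic moment and radius (SI units)."""
    r = 2.34 * beta_ms / (2.0 * math.pi * fc_hz)   # source radius, m
    stress_drop = 7.0 * m0_nm / (16.0 * r ** 3)    # Pa
    return r, stress_drop

def moment_magnitude(m0_nm):
    """Standard Mw-M0 relation for M0 in N.m."""
    return (2.0 / 3.0) * math.log10(m0_nm) - 6.07

# Illustrative values only: fc = 6 Hz, M0 = 1e14 N.m, beta = 3.5 km/s
r, dsig = brune_source(6.0, 1e14)
print(round(r / 1000, 2), "km;", round(dsig / 1e5, 1), "bar; Mw",
      round(moment_magnitude(1e14), 2))
```

With these inputs the radius comes out near 0.2 km, close to the lower end of the range reported in the abstract for the highest corner frequencies.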

Keywords: b-value, moment tensor, seismotectonics, source parameters

Procedia PDF Downloads 299
875 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) is the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues, and automatic, early, and fast AF detection remains a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation; however, the published results do not show satisfactory classification accuracy. This work aimed to resolve this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, MIT-BIH AF, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and four specific features were calculated for each window. Two pattern recognition methods, Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the features most discriminative between AF and normal sinus rhythm. Despite its very simple structure, the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detector holds several interesting properties and can be implemented with just a few arithmetic operations, which makes it a suitable choice for telecare applications.
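The PCA-plus-LVQ chain described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' model: the per-window features are synthetic Gaussian stand-ins, PCA is hand-rolled via SVD, and the LVQ is a minimal LVQ1 with one prototype per class.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for 4 features per 1-min RR-interval window:
# class 0 ~ "normal sinus rhythm", class 1 ~ "AF" (shifted, more dispersed).
X0 = rng.normal(0.0, 1.0, (200, 4))
X1 = rng.normal(2.0, 1.5, (200, 4))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 200)

# PCA via SVD on centered data: keep the 2 leading components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Minimal LVQ1: prototypes init at class means, nudged toward samples of
# their own class and away from samples of the other class.
protos = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
for lr in np.linspace(0.1, 0.01, 20):
    for z, c in zip(Z, y):
        w = np.argmin(((protos - z) ** 2).sum(axis=1))
        protos[w] += lr * (z - protos[w]) * (1 if w == c else -1)

pred = np.argmin(((Z[:, None, :] - protos[None]) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
print("training accuracy:", acc)
```

The published sensitivity and specificity come from the clinical databases listed above, not from anything this toy can reproduce.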

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 251
874 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms

Authors: Bliss Singhal

Abstract:

Machine learning (ML) is a branch of artificial intelligence (AI) in which computers analyze data and find patterns in it. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body; it causes approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying tumors as benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of accounting for human error and other inefficiencies. ML is a good candidate for improving the correct identification of metastatic cancer, potentially saving thousands of lives, and it can also improve the speed and efficiency of the process, requiring fewer resources and less time. So far, research on detecting cancer with AI has mostly used deep learning. This study takes a novel approach, assessing the potential of combining preprocessing algorithms with classification algorithms to detect metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and a genetic algorithm, to reduce the dimensionality of the dataset, and then three classification algorithms, logistic regression, a decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy, 71.14%, was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
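A pipeline of the kind described (dimensionality reduction followed by a k-nearest neighbors classifier) can be sketched with scikit-learn. In this sketch the genetic-algorithm step is omitted, and scikit-learn's generic tabular breast-cancer dataset stands in for the pathology-scan features, so the numbers will not match the paper's.

```python
# Minimal PCA -> KNN pipeline, assuming scikit-learn and its built-in
# breast-cancer dataset as a stand-in for the pathology-scan features.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),        # put features on a common scale
    ("pca", PCA(n_components=10)),      # dimensionality reduction step
    ("knn", KNeighborsClassifier(5)),   # k-nearest neighbors classifier
])
pipe.fit(X_tr, y_tr)
acc = pipe.score(X_te, y_te)
print(round(acc, 3))
```

A genetic-algorithm feature selector would slot in as an extra step between scaling and PCA; libraries differ in how they expose that, so it is left out here.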

Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression

Procedia PDF Downloads 65
873 Biophysical Characterization of the Inhibition of cGAS-DNA Sensing by KicGAS, Kaposi's Sarcoma-Associated Herpesvirus Inhibitor of cGAS

Authors: D. Bhowmik, Y. Tian, Q. Yin, F. Zhu

Abstract:

Cyclic GMP-AMP synthase (cGAS) recognizes cytoplasmic double-stranded DNA (dsDNA), indicative of bacterial and viral infection as well as leakage of self DNA through cellular dysfunction and stress, to elicit the host's immune responses. Viruses, in turn, have developed numerous strategies to antagonize the cGAS-STING pathway. Kaposi's sarcoma-associated herpesvirus (KSHV) is a human DNA tumor virus that is the causative agent of Kaposi's sarcoma and several other malignancies. To persist in the host, and consequently cause disease, KSHV must overcome host innate immune responses, including the cGAS-STING DNA-sensing pathway. We previously found that ORF52, or KicGAS (KSHV inhibitor of cGAS), an abundant, basic, gammaherpesvirus-conserved tegument protein, directly inhibits cGAS enzymatic activity. To better understand the mechanism, we performed biochemical and structural characterization of full-length KicGAS and various mutants with respect to DNA binding. We observed that KicGAS is capable of self-association and identified the critical residues involved in oligomerization. We also characterized the DNA binding of KicGAS and found that it oligomerizes cooperatively along the length of double-stranded DNA, with highly conserved basic residues in the C-terminal disordered region being crucial for DNA recognition. Deficiency in oligomerization also affects DNA binding. DNA binding by KicGAS thus sequesters DNA and prevents it from being detected by cGAS, inhibiting cGAS activation. KicGAS homologues also inhibit cGAS efficiently, suggesting that inhibition of cGAS is an evolutionarily conserved mechanism among gammaherpesviruses. These results highlight an important viral strategy for evading this innate immune sensor.

Keywords: Kaposi's sarcoma-associated herpesvirus, KSHV, cGAS, DNA binding, inhibition

Procedia PDF Downloads 113
872 Fatigue Life Prediction under Variable Loading Based a Non-Linear Energy Model

Authors: Aid Abdelkrim

Abstract:

A method of fatigue damage accumulation based on energy parameters of the fatigue process is proposed in this paper. The model is simple to use: it has no parameter to be determined and requires only knowledge of the W-N curve (W: strain energy density; N: number of cycles to failure) obtained from the experimental Wöhler curve. To examine the performance of the proposed nonlinear model in estimating the fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy was studied, and some of the results are reported in the present paper. The paper describes an algorithm and proposes a cumulative fatigue damage model, particularly for random loading. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model accounts for damage evolution at different load levels and includes the effect of the loading sequence by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the 'damaged stress interaction damage rule' proposed here allows better fatigue damage prediction than the widely used Palmgren-Miner rule, and the formula derived for random fatigue can be used to predict fatigue damage and fatigue lifetime very easily. The results obtained with the model are compared with the experimental results and with those calculated by the fatigue damage model most used in fatigue analysis (Miner's model). The comparison shows that the proposed model gives a good estimate of the experimental results, with smaller error than Miner's model.
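For context, the Palmgren-Miner baseline that the proposed model is compared against reduces to a one-line sum over load blocks; failure is predicted when the accumulated damage reaches 1. The load blocks below are hypothetical numbers for illustration, not the paper's test data.

```python
def miner_damage(blocks):
    """Palmgren-Miner linear rule: damage = sum(n_i / N_i) over load blocks,
    where n_i cycles are applied at a level whose fatigue life is N_i cycles."""
    return sum(n / N for n, N in blocks)

# Hypothetical two-level loading sequence:
# 20,000 cycles at a level with N = 100,000, then 5,000 cycles at N = 20,000.
blocks = [(20_000, 100_000), (5_000, 20_000)]
D = miner_damage(blocks)
print(D)  # 0.45 -> 45% of life consumed under the linear rule
```

The linear rule is sequence-independent by construction, which is exactly the limitation the paper's recurrence-based nonlinear model is designed to address.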

Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading

Procedia PDF Downloads 381
871 Determination and Distribution of Formation Thickness Using Seismic and Well Data in Baga/Lake Sub-basin, Chad Basin Nigeria

Authors: Gabriel Efomeh Omolaiye, Olatunji Seminu, Jimoh Ajadi, Yusuf Ayoola Jimoh

Abstract:

To date, the Nigerian part of the Chad Basin has been one of the least critically studied basins, with few published scholarly works compared to other basins such as the Niger Delta and Dahomey. This work integrates 3D seismic interpretation with well data analysis of eight wells fairly distributed over Block A of the Baga/Lake sub-basin in the Borno Basin, with the aim of determining the thicknesses of the Chad, Kerri-Kerri, Fika, and Gongila Formations in the sub-basin. The Da-1 well, used as the type well in this study, was subdivided into stratigraphic units based on the regional stratigraphic subdivision of the Chad Basin and then correlated with the other wells using the similarity of observed log responses. Combined density and sonic logs were used to generate synthetic seismograms for seismic-to-well ties. Five horizons, representing the tops of the formations, were mapped on the 3D seismic data covering the block, and an average velocity function with a maximum error/residual of 0.48% was adopted for the time-to-depth conversion of all the generated maps. There is a general thickening of sediments from west to east, and the estimated formation thicknesses in the Baga/Lake sub-basin are: Chad Formation (400-750 m), Kerri-Kerri Formation (300-1200 m), Fika Formation (300-2200 m), and Gongila Formation (100-1300 m). The thickness of the Bima Formation could not be established because the deepest well (Da-1) terminates within it. This is a modification of previous and widely referenced studies, spanning over four decades, that estimated formation thickness within the study area from outcrops observed at different locations and from a few well data.
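The time-to-depth conversion step can be illustrated with the usual average-velocity relation, depth = v_avg x TWT / 2, applied to the top and base picks of a horizon interval. The picks and velocity below are hypothetical values for illustration, not numbers from the study.

```python
def twt_to_depth(twt_s, v_avg_ms):
    """Convert two-way travel time (s) to depth (m) using an average velocity
    (m/s): depth = v_avg * TWT / 2, the factor 2 for the down-and-back path."""
    return v_avg_ms * twt_s / 2.0

# Hypothetical horizon picks: top at 1.2 s TWT, base at 1.6 s TWT,
# with an assumed 2500 m/s average velocity over the interval.
top = twt_to_depth(1.2, 2500.0)
base = twt_to_depth(1.6, 2500.0)
print(top, base, "thickness:", base - top)  # 1500.0 2000.0 thickness: 500.0
```

In practice the average velocity varies spatially, which is why the study fits a velocity function and reports its maximum residual (0.48%) rather than using a single constant.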

Keywords: Baga/Lake sub-basin, Chad basin, formation thickness, seismic, velocity

Procedia PDF Downloads 159
870 Hybrid Velocity Control Approach for Tethered Aerial Vehicle

Authors: Lovesh Goyal, Pushkar Dave, Prajyot Jadhav, Gonna Yaswanth, Sakshi Giri, Sahil Dharme, Rushika Joshi, Rishabh Verma, Shital Chiddarwar

Abstract:

With the rising need for human-robot interaction, researchers have proposed and tested multiple models with varying degrees of success. A few of these models, performed on aerial platforms, are commonly known as tethered aerial systems. These aerial vehicles can be powered continuously through a tether cable, which addresses the predicament of the short battery life of quadcopters. Such systems find applications that reduce human effort in industrial, medical, agricultural, and service settings. However, a significant challenge in employing them is that they must attain smooth and safe robot-human interaction while ensuring that the forces from the tether remain within a range comfortable for humans. To tackle this problem, a hybrid control method is implemented that can switch between two control techniques: a constant control input and a steady-state solution. The constant control approach is applied when the person is far from the target location and the error can be treated as effectively constant. The controller switches to the steady-state approach when the person comes within a specified range of the goal position. Both strategies take human velocity feedback into account. This hybrid technique improves the outcome by assisting the person to reach the desired location while reducing unwanted disturbance to the human throughout the process, thereby keeping the interaction between the robot and the subject smooth.
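The switching logic described above can be sketched as a one-dimensional toy: a constant command while the tracking error is large, and a proportional (steady-state-style) command once inside the switching radius. The radius and gains below are assumptions for illustration, not the authors' tuned values.

```python
def hybrid_velocity_cmd(error, switch_radius=0.5, u_const=0.8, k=1.6):
    """Hybrid switching sketch: constant control input far from the goal,
    proportional command near it. All parameters are hypothetical."""
    if abs(error) > switch_radius:
        # Far phase: constant-magnitude command toward the goal.
        return u_const if error > 0 else -u_const
    # Near phase: proportional command that decays smoothly to zero.
    return k * error

for e in (3.0, 0.4, -0.1):
    print(e, "->", hybrid_velocity_cmd(e))
```

Choosing k so that k * switch_radius = u_const (as above, 1.6 x 0.5 = 0.8) makes the command continuous at the switching boundary, avoiding a jump felt through the tether.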

Keywords: unmanned aerial vehicle, tethered system, physical human-robot interaction, hybrid control

Procedia PDF Downloads 83
869 Cooperative Cross Layer Topology for Concurrent Transmission Scheduling Scheme in Broadband Wireless Networks

Authors: Gunasekaran Raja, Ramkumar Jayaraman

Abstract:

In this paper, we consider a CCL-N (Cooperative Cross Layer Network) topology, based on a cross-layer environment (both centralized and distributed), to form network communities. Various performance metrics related to IEEE 802.16 networks are discussed in designing the CCL-N topology. In the CCL-N topology, nodes are classified as master nodes (Master Base Station [MBS]) and serving nodes (Relay Station [RS]), and node communities are organized according to networking terminology. Based on the CCL-N topology, various simulation analyses for both transparent and non-transparent relays are tabulated, and throughput efficiency is calculated. The weighted load balancing problem plays a challenging role in IEEE 802.16 networks. A CoTS (Concurrent Transmission Scheduling) scheme is formulated in terms of three transmission mechanisms: based on identical communities, on different communities, and on identical node communities. The CoTS scheme helps in identifying the weighted load balancing problem. The analytical results show that the modularity value is inversely proportional to the error value, and the modularity value plays a key role in solving the CoTS problem based on hop count. The transmission mechanism for identical node communities has no impact, since the modularity value is the same for all network groups. This paper thus discusses three community structures, characterized by their modularity values, which help in solving the weighted load balancing and CoTS problems.
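The modularity value that the analysis relies on is Newman's Q, which scores how much denser intra-community links are than expected by chance. A small self-contained computation on a toy graph (two triangles bridged by one edge, the triangles taken as the communities) illustrates it; the graph is an assumption for illustration, not the paper's network.

```python
def modularity(edges, community):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)
    for an undirected graph given as an edge list (no self-loops)."""
    nodes = {v for e in edges for v in e}
    deg = {v: 0 for v in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    two_m = 2 * len(edges)
    q = 0.0
    for i in nodes:
        for j in nodes:
            if community[i] != community[j]:
                continue  # delta term: only same-community pairs contribute
            a_ij = sum(1 for e in edges if set(e) == {i, j})
            q += a_ij - deg[i] * deg[j] / two_m
    return q / two_m

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
q = modularity(edges, comm)
print(round(q, 3))  # 0.357
```

Higher Q means tighter communities; under the paper's finding that modularity is inversely proportional to error, a partition like this one would be preferred over a lower-Q alternative.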

Keywords: cross layer network topology, concurrent scheduling, modularity value, network communities and weighted load balancing

Procedia PDF Downloads 245
868 Use of Smartwatches for the Emotional Self-Regulation of Individuals with Autism Spectrum Disorder (ASD)

Authors: Juan C. Torrado, Javier Gomez, Guadalupe Montero, German Montoro, M. Dolores Villalba

Abstract:

One of the most challenging aspects of the executive dysfunction of people with Autism Spectrum Disorders is behavior control, which is related to a deficit in their ability to regulate, recognize, and manage their own emotions. Some researchers have developed applications for tablets and smartphones to practice relaxation and emotion recognition strategies. However, these cannot be applied at the very moment of a temper outburst, anger episode, or anxiety, since they require the user to carry the device, start the application, and be helped by caretakers. In addition, some of these systems were developed either for obsolete technologies (old tablet devices, PDAs, outdated smartphone operating systems) or for specific devices (self-developed or proprietary ones) that differentiate their users from the rest of the individuals in their context. For this project we selected smartwatches. Focusing on emergent technologies ensures a long lifespan for the developed products, because they are intended to be available at the moment the underlying technology becomes popular, not later. We also focused our research on commercial smartwatches, since differentiation is thereby easily avoided and the users' abandonment rate is lowered. We have developed a smartwatch system, along with a smartphone authoring tool, to display self-regulation strategies. These micro-prompting strategies are composed of pictograms, animations, and timers, and they are designed by means of the authoring tool: when the two devices synchronize their data, the smartwatch stores the self-regulation strategies, which are triggered when the smartwatch sensors detect a notable rise in heart rate and movement. The system is currently being tested in an educational center for people with ASD in Madrid, Spain.
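The trigger condition described (a notable rise in heart rate together with elevated movement) can be sketched as a simple two-sensor threshold rule. The thresholds and parameter names below are hypothetical, chosen for illustration rather than taken from the deployed system.

```python
def should_trigger(hr_bpm, accel_mag, hr_base, hr_rise=20, accel_thresh=1.8):
    """Fire the self-regulation strategy only when BOTH conditions hold:
    heart rate has risen notably above the wearer's baseline AND movement
    (acceleration magnitude) is elevated. All thresholds are hypothetical."""
    return hr_bpm - hr_base >= hr_rise and accel_mag >= accel_thresh

# Elevated heart rate AND movement -> prompt the strategy on the watch.
print(should_trigger(105, 2.3, hr_base=78))
# Movement alone (e.g. exercise onset without HR rise yet) -> no prompt.
print(should_trigger(85, 2.3, hr_base=78))
```

Requiring both signals, rather than heart rate alone, is one plausible way to avoid false triggers from ordinary physical activity; a fielded system would also calibrate the baseline per wearer.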

Keywords: assistive technologies, emotion regulation, human-computer interaction, smartwatches

Procedia PDF Downloads 280