Search results for: parallel techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7564

3484 Social Semantic Web-Based Analytics Approach to Support Lifelong Learning

Authors: Khaled Halimi, Hassina Seridi-Bouchelaghem

Abstract:

The purpose of this paper is to describe how learning analytics approaches based on social semantic web techniques can be applied to enhance lifelong learning experiences from a connectivist perspective. For this reason, a prototype of a system called SoLearn (Social Learning Environment) that supports this approach was developed. We observed and studied literature related to lifelong learning systems, the social semantic web and ontologies, connectivism theory, and learning analytics approaches, and reviewed implemented systems based on these fields to extract and draw conclusions about the features necessary for enhancing the lifelong learning process. The semantic analytics of learning can be used for viewing, studying and analysing the massive data generated by learners, which helps them to understand, through recommendations, charts and figures, their learning and behaviour, and to detect where they have weaknesses or limitations. This paper emphasises that implementing a learning analytics approach based on social semantic web representations can enhance the learning process. On the one hand, the analysis process leverages the meaning expressed by the semantics presented in the ontology (relationships between concepts). On the other hand, the analysis process exploits the discovery of new knowledge by means of the inference mechanisms of the semantic web.

Keywords: connectivism, learning analytics, lifelong learning, social semantic web

Procedia PDF Downloads 195
3483 A Self-Coexistence Strategy for Spectrum Allocation Using Selfish and Unselfish Game Models in Cognitive Radio Networks

Authors: Noel Jeygar Robert, V. K. Vidya

Abstract:

Cognitive radio is a software-defined radio technology that allows cognitive users to operate on the vacant bands of spectrum allocated to licensed users. Cognitive radio plays a vital role in the efficient utilization of the wireless radio spectrum shared between cognitive users and licensed users without causing any interference to licensed users. Spectrum allocation followed by spectrum sharing is done in a fashion where a cognitive user has to wait until spectrum holes are identified and allocated when the licensed user moves out of his own allocated spectrum. In this paper, we propose a self-coexistence strategy using bargaining and Cournot game models for achieving spectrum allocation in cognitive radio networks. The game-theoretic model analyses the behaviour of cognitive users in both cooperative and non-cooperative scenarios and provides an equilibrium level of spectrum allocation. Game-theoretic models such as the bargaining game model and the Cournot game model produce a balanced distribution of spectrum resources and energy consumption. Simulation results show that both game models achieve better performance compared to other popular techniques.
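
The Cournot interaction described above can be sketched with a toy best-response iteration. This is an illustrative example only, not the authors' model: the linear inverse-demand parameters a and b and the cost c are hypothetical stand-ins for interference and spectrum-access costs.

```python
# Hypothetical Cournot-style spectrum sharing between two cognitive users.
# Each user chooses a quantity of spectrum q_i; the "price" (value net of
# congestion) falls linearly with total demand: p = a - b * (q1 + q2).
# The best response maximizes q_i * (a - b * (q_i + q_j)) - c * q_i.

def best_response(q_other, a=100.0, b=1.0, c=10.0):
    """Profit-maximizing quantity given the rival's quantity."""
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

def cournot_equilibrium(a=100.0, b=1.0, c=10.0, iters=100):
    # Alternating best responses converge to the Nash equilibrium.
    q1 = q2 = 0.0
    for _ in range(iters):
        q1 = best_response(q2, a, b, c)
        q2 = best_response(q1, a, b, c)
    return q1, q2

q1, q2 = cournot_equilibrium()
# Analytic Cournot-Nash equilibrium is (a - c) / (3b) = 30 for each player.
```

The iteration contracts geometrically, so a hundred rounds is far more than needed for convergence here.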

Keywords: cognitive radio, game theory, bargaining game, Cournot game

Procedia PDF Downloads 273
3482 Cellular Degradation Activity is Activated by Ambient Temperature Reduction in an Annual Fish (Nothobranchius rachovii)

Authors: Cheng-Yen Lu, Chin-Yuan Hsu

Abstract:

Ambient temperature reduction (ATR) can extend the lifespan of an annual fish (Nothobranchius rachovii), but the underlying mechanism is unknown. In this study, the expression, concentration, and activity of cellular degradation molecules were evaluated in the muscle of N. rachovii reared under high (30 °C), moderate (25 °C), and low (20 °C) ambient temperatures by biochemical techniques. The results showed that (i) the activity of the 20S proteasome, the expression of microtubule-associated protein 1 light chain 3-II (LC3-II), the expression of lysosome-associated membrane protein type 2a (Lamp 2a), and lysosome activity increased with ATR; (ii) the expression of the 70 kD heat shock cognate protein (Hsc 70) decreased with ATR; (iii) the expression of the 20S proteasome, the expression of lysosome-associated membrane protein type 1 (Lamp 1), the expression of the molecular target of rapamycin (mTOR), the expression of phosphorylated mTOR (p-mTOR), and the p-mTOR/mTOR ratio did not change with ATR. These findings indicated that ATR activated proteasome activity, macroautophagy, and chaperone-mediated autophagy. Taken together, these data reveal that ATR likely activates cellular degradation activity to extend the lifespan of N. rachovii.

Keywords: ambient temperature reduction, autophagy, degradation activity, lifespan, proteasome

Procedia PDF Downloads 446
3481 A Dynamic Ensemble Learning Approach for Online Anomaly Detection in Alibaba Datacenters

Authors: Wanyi Zhu, Xia Ming, Huafeng Wang, Junda Chen, Lu Liu, Jiangwei Jiang, Guohua Liu

Abstract:

Anomaly detection is a first and imperative step in responding to unexpected problems and in assuring high performance and security in large data center management. This paper presents an online anomaly detection system built on an innovative combination of ensemble machine learning and adaptive differentiation algorithms, applied to performance data collected from a continuous monitoring system for multi-tier web applications running in Alibaba data centers. We evaluate the effectiveness and efficiency of this algorithm on production traffic data and compare it with traditional anomaly detection approaches such as static thresholds and other deviation-based detection techniques. The experimental results show that our algorithm correctly identifies unexpected performance variances of any running application, with an acceptable false positive rate. The proposed approach has already been deployed in real-time production environments to enhance the efficiency and stability of daily data center operations.
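
As a rough illustration of the idea (not Alibaba's deployed algorithm), a dynamic ensemble can combine simple detectors and re-weight them online. The detector choices, thresholds, and weight-update rule below are all assumptions for the sketch.

```python
# Toy dynamic ensemble: a static threshold and a rolling z-score detector
# vote on each sample; weights drift toward detectors that agree with the
# ensemble decision.
from collections import deque

class EnsembleDetector:
    def __init__(self, static_limit=100.0, window=20, z_limit=3.0):
        self.static_limit = static_limit
        self.window = deque(maxlen=window)   # recent history for z-score
        self.z_limit = z_limit
        self.weights = [0.5, 0.5]            # [static, z-score]

    def _z_vote(self, x):
        if len(self.window) < 5:
            return 0.0
        mean = sum(self.window) / len(self.window)
        var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
        std = var ** 0.5 or 1e-9
        return 1.0 if abs(x - mean) / std > self.z_limit else 0.0

    def update(self, x):
        votes = [1.0 if x > self.static_limit else 0.0, self._z_vote(x)]
        score = sum(w * v for w, v in zip(self.weights, votes))
        decision = score >= 0.5
        # Reward detectors whose vote matched the ensemble decision.
        for i, v in enumerate(votes):
            if (v >= 0.5) == decision:
                self.weights[i] *= 1.05
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]
        self.window.append(x)
        return decision

det = EnsembleDetector()
flags = [det.update(50.0) for _ in range(30)]   # steady baseline: no alarms
spike = det.update(500.0)                        # injected anomaly: alarm
```

In a production system the constituent detectors would be learned models and the re-weighting would track labelled or self-supervised feedback rather than simple agreement.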

Keywords: Alibaba data centers, anomaly detection, big data computation, dynamic ensemble learning

Procedia PDF Downloads 181
3480 An Empirical Study to Predict Myocardial Infarction Using K-Means and Hierarchical Clustering

Authors: Md. Minhazul Islam, Shah Ashisul Abed Nipun, Majharul Islam, Md. Abdur Rakib Rahat, Jonayet Miah, Salsavil Kayyum, Anwar Shadaab, Faiz Al Faisal

Abstract:

The target of this research is to predict myocardial infarction using unsupervised machine learning algorithms. Predicting myocardial infarction, a form of heart disease, is a challenging task faced by doctors and hospitals, and prediction accuracy plays a vital role. With this concern in mind, the authors analysed a myocardial dataset to predict myocardial infarction using two popular machine learning algorithms, K-means and hierarchical clustering. This research includes the collection of data and its classification using machine learning algorithms. The authors collected 345 instances with 26 attributes from different hospitals in Bangladesh. These data were collected from patients suffering from myocardial infarction along with other symptoms. The model would be able to find and mine hidden facts from historical myocardial infarction cases. The aim of this study is to analyse the accuracy achievable in predicting myocardial infarction using machine learning techniques.
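
A minimal K-means sketch in plain Python illustrates the clustering step of such a study; the two synthetic, well-separated point groups below are stand-ins for patient profiles, since the 345-instance hospital dataset is not reproduced here.

```python
def kmeans(points, k, iters=50):
    # Deterministic init: first and last points as starting centroids.
    centroids = [points[0], points[-1]] if k == 2 else list(points[:k])
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two synthetic, well-separated groups (e.g. low-risk vs high-risk profiles).
low  = [(1.0 + 0.1 * i, 2.0) for i in range(10)]
high = [(9.0 + 0.1 * i, 8.0) for i in range(10)]
centroids, clusters = kmeans(low + high, k=2)
```

Real studies would standardize the 26 attributes first and validate cluster quality (e.g. with silhouette scores) rather than trusting a single run.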

Keywords: Machine Learning, K-means, Hierarchical Clustering, Myocardial Infarction, Heart Disease

Procedia PDF Downloads 188
3479 Vibration-Based Data-Driven Model for Road Health Monitoring

Authors: Guru Prakash, Revanth Dugalam

Abstract:

A road’s condition often deteriorates due to harsh loading, such as overloaded trucks, and severe environmental conditions, such as heavy rain, snow load, and cyclic loading. In the absence of proper maintenance planning, this results in potholes, wide cracks, bumps, and increased road roughness. In this paper, a data-driven model is developed to detect these damages using vibration and image signals. The key idea of the proposed methodology is that road anomalies manifest in these signals and can be detected by training a machine learning algorithm. The use of various machine learning techniques, such as the support vector machine and the random forest method, will be investigated. The proposed model will first be trained and tested with artificially simulated data, and the model architecture will be finalized by comparing the accuracies of the various models. Once a model is fixed, a field study will be performed and data will be collected. The field data will be used to validate the proposed model and to predict the road’s future health condition. The proposed model will help to automate road condition monitoring, repair cost estimation, and maintenance planning.
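
The kind of windowed vibration features such a model might feed to an SVM or random forest can be sketched as follows; the RMS and crest-factor features and the simulated pothole signal are illustrative assumptions, not the paper's actual pipeline.

```python
# Windowed vibration features: RMS energy and crest factor per window.
# A pothole impact shows up as a window with sharply elevated RMS.
import math

def window_features(signal, width):
    feats = []
    for start in range(0, len(signal) - width + 1, width):
        w = signal[start:start + width]
        rms = math.sqrt(sum(x * x for x in w) / width)          # energy
        crest = max(abs(x) for x in w) / (rms or 1e-12)         # peakiness
        feats.append((rms, crest))
    return feats

# Smooth road: low-amplitude oscillation; pothole: one sharp spike.
road = [0.1 * math.sin(0.3 * i) for i in range(300)]
road[150] += 5.0                                 # simulated pothole impact
feats = window_features(road, 50)
rms_values = [f[0] for f in feats]
bump_window = rms_values.index(max(rms_values))  # window holding the spike
```

A classifier would then be trained on such feature vectors labelled "smooth" vs "damaged"; here the spike lands in window 3 (samples 150–199).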

Keywords: SVM, data-driven, road health monitoring, pot-hole

Procedia PDF Downloads 69
3478 Optimization in Friction Stir Processing Method with Emphasis on Optimized Process Parameters Laboratory Research

Authors: Atabak Rahimzadeh Ilkhch

Abstract:

Friction stir processing (FSP) is a promising thermo-mechanical processing technique that aims to change the microstructural and mechanical properties of materials in order to obtain high performance while reducing production time and cost. Many studies have focused on the microstructure of friction stir welded aluminium alloys. The main focus of this research is on the grain size obtained in the weld zone. The second part focuses on the temperature distribution over the entire weld zone and its effects on the microstructure. There is also a need for further effort in investigating the optimal values of effective parameters, such as rotational speed, on the microstructure, and in using an optimal tool design method. The final results of this study will present the variation of the structural and mechanical properties of the material arising from friction stir processing, and the effect of FSP and tensile testing on surface quality. To this end, this research addresses the FSP of AA-7020 aluminium and the variation of the ratio of rotational and translational speeds.

Keywords: friction stir processing, AA-7020, thermo-mechanical, microstructure, temperature

Procedia PDF Downloads 263
3477 Carbon Accounting for Sustainable Design and Manufacturing in the Signage Industry

Authors: Prudvi Paresi, Fatemeh Javidan

Abstract:

In recent years, greenhouse gas emissions, in particular carbon emissions, have received special attention from environmentalists and designers because they contribute significantly to the temperature rise. The building industry is one of the top seven industries contributing to embodied carbon emissions. Signage systems are an integral part of the building industry and bring completeness to a building's spaces by providing the required information and guidance. A significant amount of building material, such as steel, aluminium, acrylic, LEDs, etc., is utilized in these systems, but very limited information is available on their sustainability and carbon footprint. There is therefore an urgent need to assess the emissions associated with the signage industry and to control them by adopting different mitigation techniques without sacrificing project efficiency. The present paper investigates the embodied carbon of two case studies in the Australian signage industry within the cradle-to-gate (A1-A3) and gate-to-site (A4-A5) stages. A material source-based database is used to achieve greater accuracy. The study identified aluminium as the major contributor to embodied carbon in the signage industry compared to the other constituents. Finally, strategies are suggested for mitigating embodied carbon in this industry.

Keywords: carbon accounting, small-scale construction, signage industry, construction materials

Procedia PDF Downloads 97
3476 A Literature Review on Development of a Forecast Supported Approach for the Continuous Pre-Planning of Required Transport Capacity for the Design of Sustainable Transport Chains

Authors: Georg Brunnthaller, Sandra Stein, Wilfried Sihn

Abstract:

Logistics service providers are facing increasing volatility in future transport demand. Short-term planning horizons and planning uncertainties lead to reduced capacity utilisation and increasing empty mileage. To overcome these challenges, a model is proposed to continuously pre-plan future transport capacity in order to redesign and adjust the intermodal fleet accordingly. It is expected that the model will enable logistics service providers to organise more economically and ecologically sustainable transport chains in a more flexible way. To further describe such planning aspects, this paper gives a structured literature review on transport planning problems. The focus is on the strategic and tactical planning levels, comprising the relevant fleet-sizing, network-design and choice-of-carriers problems. Models and their solution techniques are presented, and the literature review is concluded with an outlook on our future research objectives.

Keywords: choice of transport mode, fleet-sizing, freight transport planning, multimodal, review, service network design

Procedia PDF Downloads 350
3475 Childhood Trauma and Identity in Adulthood

Authors: Aakriti Lohiya

Abstract:

This study examines childhood trauma, which is widely recognised to have a significant and enduring effect on a person's cognitive and psychological health. The purpose of this study was to look at the intricate interactions between negative self-identity, cognitive distortions, and early trauma. A sample of 200 socially active women was gathered for the study. Standardised measures were utilised to evaluate the participants' experiences of childhood trauma, and validated psychological tools were employed to assess negative self-identity and cognitive distortions. The links and predictive correlations between childhood trauma, negative self-identity, and cognitive distortions were investigated using statistical techniques such as correlation analysis and multiple regression modelling. The results demonstrated no correlation between the degree of early trauma and the emergence of a negative self-identity and cognitive distortions. The study examines whether cognitive distortions and childhood events have any relationship with negative self-identity using various scales. Participants completed the Childhood Trauma Questionnaire, which assessed retrospective accounts of childhood trauma; the Cognitive Distortions Scale, which measured internal attributions and perceptions of controllability; and an attachment style questionnaire, which assessed the attachment attributes of their daily life. The implications for therapy were also considered.

Keywords: cognitive distortion, therapy, childhood trauma, attachment

Procedia PDF Downloads 65
3474 Adaptability in Older People: A Mixed Methods Approach

Authors: V. Moser-Siegmeth, M. C. Gambal, M. Jelovcak, B. Prytek, I. Swietalsky, D. Würzl, C. Fida, V. Mühlegger

Abstract:

Adaptability is the capacity to adjust without great difficulty to changing circumstances. Within our project, we aimed to detect whether older people living in a long-term care hospital lose the ability to adapt. Theoretical concepts are contradictory in their statements, and there is also a lack of evidence in the literature on how the adaptability of older people changes over time. The following research questions were generated: Are older residents of a long-term care facility able to adapt to changes in their daily routine? How long does it take for older people to adapt? The study was designed as a convergent parallel mixed-methods intervention study, carried out over a four-month period in seven wards of a long-term care hospital. As the planned intervention, a change of meal times was established. The inhabitants were surveyed with qualitative interviews, quantitative questionnaires and diaries before, during and after the intervention. In addition, a survey of the nursing staff was carried out in order to detect changes in the people they care for and how long it took them to adapt. Quantitative data were analysed with SPSS, qualitative data with a summarizing content analysis. The average age of the involved residents was 82 years, and the average length of stay 45 months. Adapting to new situations does not cause problems for older residents: 47% of the residents state that their everyday life has not changed with the change of meal times, 24% indicate ‘neither nor’, and only 18% respond that their daily life has changed considerably due to the changeover. The diaries of the residents, which were kept over the entire investigation period, showed no changes with regard to increased or reduced activity. With regard to sleep quality, assessed with the Pittsburgh Sleep Quality Index, there is little change in sleep behaviour between the two survey periods (pre-phase to follow-up phase).
The subjective sleep quality of the residents is not affected. The nursing staff point out that, with good information in advance, changes are not a problem. The ability to adapt to changes does not deteriorate with age or by moving into a long-term care facility; it only takes a few days to get used to new situations. This is confirmed by the nursing staff, although there are determinants, such as health status, that might make adjusting to new situations more difficult. Among the limitations, the small sample size of the quantitative data collection must be emphasized, as well as the question of how far the quantitative and qualitative samples represent the total population, since only residents without cognitive impairment from selected units participated, while the majority of the residents have cognitive impairments. It is also important to discuss whether, and how well, the diary method is suitable for examining older people's daily structure.

Keywords: adaptability, intervention study, mixed methods, nursing home residents

Procedia PDF Downloads 132
3473 Regional Rates of Sand Supply to the New South Wales Coast: Southeastern Australia

Authors: Marta Ribo, Ian D. Goodwin, Thomas Mortlock, Phil O’Brien

Abstract:

Coastal behaviour is best investigated using a sediment budget approach, based on the identification of sediment sources and sinks. The grain size distribution over the New South Wales (NSW) continental shelf has been widely characterized since the 1970s. Coarser sediment has generally accumulated on the outer shelf and/or in nearshore zones, the latter related to the presence of nearshore reefs and bedrock. The central part of the NSW shelf is characterized by fine sediments distributed parallel to the coastline. This study presents new grain size distribution maps along the NSW continental shelf, built using all available NSW and Commonwealth Government holdings. All available seabed bathymetric data from prior projects, single- and multibeam sonar, and aerial LiDAR surveys were integrated into a single bathymetric surface for the NSW continental shelf. Grain size information was extracted from sediment sample data collected in more than 30 studies. The information extracted from the sediment collections varied between reports; thus, given the inconsistency of the grain size data, a common grain size classification was defined here using the phi scale. The new sediment distribution maps, together with the new detailed seabed bathymetric data, enabled us to revise the delineation of sediment compartments to more accurately reflect the true nature of sediment movement on the inner shelf and nearshore. Accordingly, nine primary mega coastal compartments were delineated along the NSW coast and shelf. The sediment compartments are bounded by prominent nearshore headlands and reefs, and by major river and estuarine inlets that act as sediment sources and/or sinks. The new sediment grain size distribution was used as an input to morphological modelling to quantify sediment transport patterns (and indicative rates of transport), which were used to investigate sand supply rates and processes from the lower shoreface to the NSW coast.
The rate of sand supply to the NSW coast from deep water is a major uncertainty in projecting the future coastal response to sea-level rise. Offshore transport of sand is generally expected as beaches respond to rising sea levels, but an onshore supply from the lower shoreface has the potential to offset some of the impacts of sea-level rise, such as coastline recession. Sediment exchange between the lower shoreface and the sub-aerial beach has been modelled across the south, central, mid-north and far-north coasts of NSW. Our modelling approach assumes that high-energy storm events are the primary agents of sand transport in deep water, while non-storm conditions are responsible for redistributing sand within the beach and surf zone.

Keywords: New South Wales coast, off-shore transport, sand supply, sediment distribution maps

Procedia PDF Downloads 217
3472 Comparison of the Boundary Element Method and the Method of Fundamental Solutions for Analysis of Potential and Elasticity

Authors: S. Zenhari, M. R. Hematiyan, A. Khosravifard, M. R. Feizi

Abstract:

The boundary element method (BEM) and the method of fundamental solutions (MFS) are well-known fundamental solution-based methods for solving a variety of problems. Both methods are boundary-type techniques and can provide accurate results. In comparison to the finite element method (FEM), which is a domain-type method, the BEM and the MFS need less manual effort to solve a problem. The aim of this study is to compare the accuracy and reliability of the BEM and the MFS. This comparison is made for 2D potential and elasticity problems with different boundary and loading conditions. In the comparisons, both convex and concave domains are considered. Both linear and quadratic elements are employed for boundary element analysis of the examples. The discretization of the problem domain in the BEM, i.e., converting the boundary of the problem into boundary elements, is relatively simple; however, in the MFS, obtaining appropriate locations of collocation and source points needs more attention to obtain reliable solutions. The results obtained from the presented examples show that both methods lead to accurate solutions for convex domains, whereas the BEM is more suitable than the MFS for concave domains.

Keywords: boundary element method, method of fundamental solutions, elasticity, potential problem, convex domain, concave domain

Procedia PDF Downloads 76
3471 Algal/Bacterial Membrane Bioreactor for Bioremediation of Chemical Industrial Wastewater Containing 1,4 Dioxane

Authors: Ahmed Tawfik

Abstract:

The oxidation of 1,4-dioxane produces metabolite by-products, including glycolaldehyde and acids, that have genotoxic and cytotoxic impacts on microbial degradation. Incorporating algae with bacteria in the treatment system would therefore eliminate the accumulation of metabolites, which are utilized as a carbon source for the build-up of biomass. The aim of the present study is thus to assess the potential of an algae/bacteria-based membrane bioreactor (AB-MBR) for the biodegradation of 1,4-dioxane-rich wastewater at high imposed loading rates. Three identical reactors, i.e., AB-MBR1, AB-MBR2, and AB-MBR3, were operated in parallel at 1,4-dioxane loading rates of 641.7, 320.9, and 160.4 mg/L.d and HRTs of 6.0, 12, and 24 h, respectively. The AB-MBR1 achieved a 1,4-dioxane removal rate of 263.7 mg/L.d, where the residual value in the treated effluent amounted to 94.4±22.9 mg/L. Reducing the 1,4-dioxane loading rate (LR) to 320.9 mg/L.d in the AB-MBR2 maximized the removal rate at 265.9 mg/L.d, with a removal efficiency of 82.8±3.2%. The minimum 1,4-dioxane value of 17.3±1.8 mg/L in the treated effluent of AB-MBR3 was obtained at an HRT of 24.0 h and a loading rate of 160.4 mg/L.d. The mechanism of 1,4-dioxane degradation in the AB-MBR was a combination of volatilization (8.03±0.6%), UV oxidation (14.1±0.9%), microbial biodegradation (49.1±3.9%), and absorption/uptake and assimilation by algae (28.8±2%). Further, the Thioclava, Afipia, and Mycobacterium genera oxidized the dioxane and produced the enzymes required for hydrolysis and cleavage of the dioxane ring into 2-hydroxy-1,4-dioxane. Moreover, the fungi, i.e., Basidiomycota and Cryptomycota, played a big role in the degradation of 1,4-dioxane into 2-hydroxy-1,4-dioxane. Xanthobacter and Mesorhizobium were involved in the metabolism process by secreting alcohol dehydrogenase (ADH), aldehyde dehydrogenase (ALDH), and glycolate oxidase.
Bacteria and fungi produced dehydrogenase (DH) for the transformation of 2-hydroxy-1,4-dioxane into 2-hydroxy-ethoxyacetaldehyde. The latter is converted into ethylene glycol by aldehyde dehydrogenase (ALDH), and ethylene glycol is oxidized into acids by alcohol dehydrogenase (ADH). The Diatomea, Chlorophyta, and Streptophyta utilize the metabolites for biomass assimilation and produce the oxygen required for the further oxidation of the dioxane and its metabolite by-products by bacteria and fungi. The major portion of the metabolites (ethylene glycol, glycolic acid, and oxalic acid) was removed through uptake and absorption by algae (43±4.3%), followed by adsorption (18.4±0.9%); the contributions of volatilization and UV oxidation to the degradation of the metabolites were 8.7±0.7% and 12.3±0.8%, respectively. The genera Defluviimonas, Thioclava, Luteolibacter, and Mycobacterium grew under the high 1,4-dioxane LR of 641.7 mg/L.d. The Chlorophyta (4.1-43.6%), Streptophyta (2.5-21.7%), and Diatomea (0.8-1.4%) phyla were dominant in the degradation of 1,4-dioxane. The results of this study strongly demonstrate that the bioremediation and bioaugmentation process can safely remove 1,4-dioxane from industrial wastewater while minimizing environmental concerns and reducing economic costs.

Keywords: wastewater, membrane bioreactor, bacterial community, algal community

Procedia PDF Downloads 32
3470 Study of Bolt Inclination in a Composite Single Bolted Joint

Authors: Faci Youcef, Ahmed Mebtouche, Djillali Allou, Maalem Badredine

Abstract:

The inclination of the bolt in a fastened joint of composite material during a tensile test can be influenced by several parameters, including the material properties, the type of composite material being used, the diameter, length, and overall dimensions of the bolt, bolt preload, surface preparation, the design and configuration of the joint, and the testing conditions. These parameters should be carefully considered and controlled to ensure accurate and reliable results during tensile testing of composite materials with fastened joints. Our work focuses on the effect of the stacking sequence and the geometry of the specimens. An experimental test is carried out to obtain the inclination of the bolt during a tensile test of a composite material using acoustic emission and digital image correlation. Several types of damage were observed during loading. Digital image correlation techniques permit obtaining the bolt inclination angle during the tensile test. We concluded that the inclination of the bolt during a tensile test of a composite material can be related to the damage that occurs in the material: it can cause stress concentrations and localized deformation, leading to damage such as delamination, fiber breakage, matrix cracking, and other forms of failure.

Keywords: damage, inclination, analyzed, carbon

Procedia PDF Downloads 42
3469 Numerical Modeling of Wave Run-Up in Shallow Water Flows Using Moving Wet/Dry Interfaces

Authors: Alia Alghosoun, Michael Herty, Mohammed Seaid

Abstract:

We present a new class of numerical techniques to solve shallow water flows over dry areas, including run-up. Many recent investigations of wave run-up in coastal areas are based on the well-known shallow water equations, and numerical simulations have also been performed to understand the effects of several factors on tsunami wave impact and run-up in the presence of coastal areas. In all these simulations, the shallow water equations are solved in the entire domain, including dry areas, and special treatments are used for the numerical solution of singularities in these dry regions. In the present study, we propose a new method to deal with these difficulties by reformulating the shallow water equations into a new system to be solved only in the wetted domain. The system is obtained by a change of coordinates, leading to a set of equations in a moving domain for which the wet/dry interface is reconstructed using the wave speed. To solve the new system, we present a finite volume method of Lax-Friedrichs type along with a modified method of characteristics. The method is well-balanced and accurately resolves dam-break problems over dry areas.
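
The Lax-Friedrichs building block the method rests on can be illustrated on a fully wet 1D dam-break problem. This is only a sketch of the flux scheme, under assumed grid and time-step parameters; the paper's actual method solves the transformed moving-domain system with a modified method of characteristics, which is not reproduced here.

```python
# 1D shallow water equations, conserved variables h (depth) and hu
# (discharge), advanced with the classical Lax-Friedrichs scheme.
g = 9.81  # gravitational acceleration

def flux(h, hu):
    u = hu / h
    return hu, hu * u + 0.5 * g * h * h

def step(h, hu, dx, dt):
    n = len(h)
    new_h, new_hu = h[:], hu[:]
    for i in range(1, n - 1):
        fl_h, fl_hu = flux(h[i - 1], hu[i - 1])
        fr_h, fr_hu = flux(h[i + 1], hu[i + 1])
        # Lax-Friedrichs: average of neighbours minus central flux difference.
        new_h[i] = 0.5 * (h[i - 1] + h[i + 1]) - dt / (2 * dx) * (fr_h - fl_h)
        new_hu[i] = 0.5 * (hu[i - 1] + hu[i + 1]) - dt / (2 * dx) * (fr_hu - fl_hu)
    return new_h, new_hu

# Dam-break initial state on a wet domain; dt respects the CFL condition.
n, dx, dt = 200, 0.05, 0.005
h = [2.0 if i < n // 2 else 1.0 for i in range(n)]
hu = [0.0] * n
for _ in range(100):
    h, hu = step(h, hu, dx, dt)
```

Because the scheme is in conservation form and the waves have not yet reached the fixed boundaries, total water mass is preserved to round-off over the run.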

Keywords: dam-break problems, finite volume method, run-up waves, shallow water flows, wet/dry interfaces

Procedia PDF Downloads 132
3468 Improving Machine Learning Translation of Hausa Using Named Entity Recognition

Authors: Aishatu Ibrahim Birma, Aminu Tukur, Abdulkarim Abbass Gora

Abstract:

Machine translation plays a vital role in the field of Natural Language Processing (NLP), breaking down language barriers and enabling communication across diverse communities. In the context of Hausa, a language widely spoken in West Africa, mainly in Nigeria, effective translation systems are essential for enabling seamless communication and promoting cultural exchange. However, due to the unique linguistic characteristics of Hausa, accurate translation remains a challenging task. This research proposes an approach to improving machine translation of Hausa by integrating Named Entity Recognition (NER) techniques. Named entities, such as person names, locations, organizations, and dates, are critical components of a language's structure and meaning. Incorporating NER into the translation process can enhance the quality and accuracy of translations by preserving the integrity of named entities, maintaining consistency in translating entities (e.g., proper names), and addressing cultural references specific to Hausa. The NER component will be incorporated into Neural Machine Translation (NMT) for Hausa-to-English translation.
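
One common way to make a translation pipeline entity-aware, sketched here with a toy dictionary "translator" rather than a real NMT model, is to mask recognized entities before decoding and restore them afterwards. The mini-lexicon and example sentence below are illustrative assumptions, not the paper's system.

```python
# Mask named entities with placeholder tokens, translate, then restore,
# so the translator cannot corrupt or drop the entities.
def mask_entities(sentence, entities):
    masked, slots = sentence, {}
    for i, ent in enumerate(entities):
        tag = f"__ENT{i}__"
        masked = masked.replace(ent, tag)
        slots[tag] = ent
    return masked, slots

def translate(masked, lexicon):
    # Stand-in for an NMT decoder: word-by-word dictionary lookup;
    # unknown tokens (including placeholders) pass through unchanged.
    return " ".join(lexicon.get(w, w) for w in masked.split())

# Tiny hypothetical Hausa-to-English lexicon.
lexicon = {"yana": "is", "zaune": "living", "a": "in"}
sentence = "Musa yana zaune a Kano"
masked, slots = mask_entities(sentence, ["Musa", "Kano"])
out = translate(masked, lexicon)
for tag, ent in slots.items():
    out = out.replace(tag, ent)
# out == "Musa is living in Kano"
```

In the proposed approach, the entity list would come from a trained NER model and the decoder would be a neural model; the masking contract stays the same.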

Keywords: machine translation, natural language processing (NLP), named entity recognition (NER), neural machine translation (NMT)

Procedia PDF Downloads 23
3467 Barriers to Entry: The Pitfall of Charter School Accountability

Authors: Ian Kingsbury

Abstract:

The rapid expansion of charter schools (public schools that receive government funding but do not face the same regulations as traditional public schools) over the preceding two decades has raised concerns over the potential for graft and fraud. These concerns are largely justified: incidents of financial crime and mismanagement are not unheard of, and the charter sector has become a darling of hedge fund managers. In response, several states have strengthened their charter school regulatory regimes. Imposing regulations and attempting to increase accountability seem like sensible measures, and perhaps they are necessary. However, increased regulation may come at the cost of imposing barriers to entry. Specifically, increased regulation often entails evidence of a high likelihood of fiscal solvency, which theoretically entails access to capital in the short term and may systematically preclude Black or Hispanic applicants from opening charter schools. Moreover, increased regulation necessarily entails more red tape. The institutional wherewithal and the number of hours required to complete an application to open a charter school might favor those who have partnered with an education service provider (ESP), specifically a charter management organization (CMO) or education management organization (EMO). These potential barriers to entry pose a significant policy concern. Just as policymakers hope to increase the share of minority teachers and principals, they should sensibly care whether the individuals who open charter schools look like the students in those schools. Moreover, they might be concerned if successful applications in states with stringent regulations are overwhelmingly affiliated with education service providers. One of the original missions of charter schools was to serve as laboratories of innovation.
Approving only those applications affiliated with education service providers (and in effect establishing a parallel network of schools rather than a diverse marketplace of schools) undermines that mission. Data and methods: The analysis examines more than 2,000 charter school applications from 15 states. It compares the outcomes of applications from states with a strong regulatory environment (those with high scores from NACSA, the National Association of Charter School Authorizers) to applications from states with a weak regulatory environment (those with low NACSA scores). If the hypothesis is correct, applicants not affiliated with an education service provider (ESP) are more likely to be rejected in high-regulation states than those affiliated with an ESP, and minority candidates not affiliated with an ESP are particularly likely to be rejected. Initial returns indicate that the hypothesis holds. More applications in low-NACSA-scoring Arizona come from individuals not associated with an ESP, and those individuals are as likely to be accepted as those affiliated with an ESP. On the other hand, applicants in high-NACSA-scoring Indiana and Ohio are more than 20 percentage points more likely to be accepted if they are affiliated with an ESP, and the effect is particularly pronounced for minority candidates. These findings should spur policymakers to consider the drawbacks of charter school accountability and to explore accountability regimes that do not impose barriers to entry.

Keywords: accountability, barriers to entry, charter schools, choice

Procedia PDF Downloads 137
3466 Performance Analysis of Permanent Magnet Synchronous Motor Using Direct Torque Control Based ANFIS Controller for Electric Vehicle

Authors: Marulasiddappa H. B., Pushparajesh Viswanathan

Abstract:

The use of internal combustion engines (ICEs) is declining because of pollution and diminishing fuel availability, and electric vehicles (EVs) are increasingly taking the place of ICE vehicles. The performance of EVs can be improved by the proper selection of electric motors. EVs initially used induction motors for traction purposes, but owing to the complexity of controlling induction motors, the permanent magnet synchronous motor (PMSM) is replacing them because of its advantages. Direct torque control (DTC) is a well-known technique for PMSM drives in EVs to control torque and speed; however, the presence of torque ripple is its main drawback. Many control strategies have been proposed to reduce torque ripple in PMSMs. In this paper, an adaptive neuro-fuzzy inference system (ANFIS) controller is proposed to reduce torque ripple and settling time. Performance parameters such as torque, speed and settling time are compared between a conventional proportional-integral (PI) controller and the ANFIS controller.
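As a rough illustration of how settling time enters such a comparison, the sketch below simulates a PI speed loop on a first-order motor approximation and measures the 2% settling time. This is not the authors' PMSM/DTC model; the inertia, friction, and gain values are invented for illustration only.

```python
# Illustrative sketch (assumed plant and gains, not the paper's model):
# a PI speed controller on the first-order dynamics J*dw/dt = T - B*w,
# with the 2% settling time extracted from the speed trace.

def simulate_pi(kp, ki, w_ref=100.0, J=0.01, B=0.1, dt=1e-4, t_end=1.0):
    """Euler-integrate the speed loop; torque saturation is ignored."""
    w, integ, trace = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        err = w_ref - w
        integ += err * dt
        torque = kp * err + ki * integ      # PI control law
        w += (torque - B * w) / J * dt      # motor dynamics
        trace.append(w)
    return trace

def settling_time(trace, w_ref=100.0, band=0.02, dt=1e-4):
    """Time after which the speed stays within +/-2% of the reference."""
    tol = band * w_ref
    for i in range(len(trace) - 1, -1, -1):
        if abs(trace[i] - w_ref) > tol:
            return (i + 1) * dt
    return 0.0

trace = simulate_pi(kp=0.5, ki=5.0)
print(f"settling time: {settling_time(trace):.4f} s")
```

Repeating this measurement for different controllers (here, only different PI gains could be tried; an ANFIS controller would replace the PI control law) is the kind of settling-time comparison the abstract describes.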

Keywords: direct torque control, electric vehicle, torque ripple, PMSM

Procedia PDF Downloads 148
3465 Shuffled Structure for 4.225 GHz Antireflective Plates: A Proposal Proven by Numerical Simulation

Authors: Shin-Ku Lee, Ming-Tsu Ho

Abstract:

A newly proposed antireflective selector with a shuffled structure is reported in this paper. The proposed design is made of two different quarter-wavelength (QW) slabs and is numerically supported by one-dimensional simulation results obtained with the method of characteristics (MOC). The two QW slabs, characterized by dielectric constants εᵣA and εᵣB, are uniformly divided into N and N+1 pieces, respectively, which are then shuffled to form an antireflective plate with a B(AB)^N structure, such that there is always one εᵣA piece between two εᵣB pieces. The other arrangement is an A(BA)^N structure in which every εᵣB piece is sandwiched between two εᵣA pieces. Both proposed structures are numerically shown to function as QW plates. To allow maximum transmission through the proposed structures, the two dielectric constants are chosen to satisfy (εᵣA)² = εᵣB > 1. The advantage of the proposed structures over traditional anti-reflection coating techniques is that only two materials with two thicknesses are required, which can be shuffled to form new QW structures. The design wavelength used to validate the proposed idea is 71 mm, corresponding to a frequency of about 4.225 GHz. The computational results are shown in both the time and frequency domains, revealing that the proposed structures produce minimum reflection around the frequency of interest.
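The quarter-wavelength geometry can be sketched numerically. The snippet below (an illustration, not the authors' MOC code) computes the free-space design wavelength at 4.225 GHz, the QW thickness of each slab, and the shuffled B(AB)^N layer order; εᵣA = 2 is an assumed value chosen to satisfy (εᵣA)² = εᵣB > 1.

```python
# Illustrative sketch: quarter-wave slab thicknesses at the 4.225 GHz
# design frequency and the shuffled B(AB)^N layer sequence.
c = 299_792_458.0          # speed of light, m/s
f0 = 4.225e9               # design frequency, Hz
lam0 = c / f0              # free-space wavelength, ~71 mm

eps_A = 2.0                # assumed value for illustration
eps_B = eps_A ** 2         # required relation (eps_A)^2 = eps_B

def qw_thickness(eps_r):
    """A quarter of the wavelength inside a dielectric of permittivity eps_r."""
    return lam0 / (4.0 * eps_r ** 0.5)

def shuffled(n):
    """B(AB)^N sequence: every A piece sits between two B pieces.
    The dual A(BA)^N sequence is obtained by swapping the labels."""
    return ["B"] + ["A", "B"] * n

print(f"lambda0 = {lam0 * 1e3:.1f} mm")
print(f"d_A = {qw_thickness(eps_A) * 1e3:.2f} mm, "
      f"d_B = {qw_thickness(eps_B) * 1e3:.2f} mm")
print(shuffled(2))
```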

Keywords: method of characteristics, quarter wavelength, anti-reflective plate, propagation of electromagnetic fields

Procedia PDF Downloads 134
3464 Iron Oxide Nanoparticles: Synthesis, Properties, and Environmental Application

Authors: Shalini Rajput, Dinesh Mohan

Abstract:

Water is the most essential resource for the existence of life on Earth. Water quality is gradually decreasing due to increasing urbanization, industrialization and other developmental activities. Contaminated water can pose a threat to the environment and public health; it is therefore necessary to remove hazardous contaminants from wastewater prior to its discharge to the environment. Recently, magnetic iron oxide nanoparticles have emerged as significant materials due to their distinct properties. This article focuses on the synthesis methods, with possible mechanisms, and the structure and applications of magnetic iron oxide nanoparticles. Various characterization techniques, including X-ray diffraction, transmission electron microscopy, scanning electron microscopy with energy-dispersive X-ray spectroscopy, Fourier transform infrared spectroscopy and vibrating sample magnetometry, are useful for describing the physico-chemical properties of the nanoparticles. Nanosized iron oxide particles are utilized for the remediation of contaminants from aqueous media through adsorption, and their magnetic properties allow them to be easily separated from the treated water. Considering the importance and emerging trend of nanotechnology, iron oxide nanoparticles as nano-adsorbents can be of great importance in the field of wastewater treatment.

Keywords: nanoparticles, adsorption, iron oxide, nanotechnology

Procedia PDF Downloads 543
3463 Facilitating Primary Care Practitioners to Improve Outcomes for People With Oropharyngeal Dysphagia Living in the Community: An Ongoing Realist Review

Authors: Caroline Smith, Professor Debi Bhattacharya, Sion Scott

Abstract:

Introduction: Oropharyngeal dysphagia (OD) affects around 15% of older people; however, it is often unrecognised and underdiagnosed until patients are hospitalised. There is a need for primary care healthcare practitioners (HCPs) to assume a proactive role in identifying and managing OD to prevent adverse outcomes such as aspiration pneumonia. Understanding the determinants of primary care HCPs undertaking this new behaviour provides the intervention targets to address. This realist review, underpinned by the Theoretical Domains Framework (TDF), aims to synthesise the relevant literature and develop programme theories to understand which interventions work, how they work and under what circumstances, to facilitate HCPs to prevent harm from OD. Combining realist methodology with behavioural science will permit conceptualisation of intervention components as theoretical behavioural constructs, thus informing the design of a future behaviour change intervention. Furthermore, through the TDF's linkage to a taxonomy of behaviour change techniques, we will identify corresponding behaviour change techniques to include in this intervention. Methods and analysis: We are following the five steps for undertaking a realist review: 1) clarify the scope, 2) search the literature, 3) appraise and extract data, 4) synthesise the evidence, 5) evaluate. We have searched the Medline, Google Scholar, PubMed, EMBASE, CINAHL, AMED, Scopus and PsycINFO databases. We are obtaining additional evidence through grey literature, snowball sampling, lateral searching and consultation of the stakeholder group. Literature is being screened, evaluated and synthesised in Excel and NVivo. We will appraise evidence in relation to its relevance and rigour. Data will be extracted and synthesised according to their relation to initial programme theories (IPTs).
IPTs were constructed after the preliminary literature search, informed by the TDF and with input from a stakeholder group of patient and public involvement advisors, general practitioners, speech and language therapists, geriatricians and pharmacists. We will follow the Realist and Meta-narrative Evidence Syntheses: Evolving Standards (RAMESES) quality and publication standards to report study results. Results: In this ongoing review, our search has identified 1417 manuscripts, with approximately 20% progressing to full-text screening. We inductively generated 10 IPTs that hypothesise that practitioners require: the knowledge to spot the signs and symptoms of OD; the skills to provide initial advice and support; and access to resources in their working environment to support them in conducting these new behaviours. We mapped the 10 IPTs to 8 TDF domains and then generated a further 12 IPTs deductively, using domain definitions to cover the remaining 6 TDF domains. Deductively generated IPTs broadened our thinking to consider domains such as 'Emotion', 'Optimism' and 'Social Influence', e.g. if practitioners perceive that patients, carers and relatives expect initial advice and support, then they will be more likely to provide it, because they will feel obligated to do so. After prioritisation with stakeholders using a modified nominal group technique approach, a maximum of 10 IPTs will progress to be tested against the literature.

Keywords: behaviour change, deglutition disorders, primary healthcare, realist review

Procedia PDF Downloads 75
3462 Evaluating Factors Influencing Information Quality in Large Firms

Authors: B. E. Narkhede, S. K. Mahajan, B. T. Patil, R. D. Raut

Abstract:

Information quality is a major performance measure for the Enterprise Resource Planning (ERP) system of any firm. This study identifies various critical success factors of information quality. The effect of critical success factors such as project management, reengineering efforts and interdepartmental communications on information quality is analyzed using a multiple regression model. Quantitative data were collected from respondents at various firms through a structured questionnaire assessing information quality, project management, reengineering efforts and interdepartmental communications. The validity and reliability of the data were ensured using techniques such as factor analysis and computation of Cronbach's alpha. This study gives the relative importance of each critical success factor. The findings suggest that, among the various factors influencing information quality, careful reengineering efforts are the most influential. This paper gives clear insight to managers and practitioners regarding the relative importance of the critical success factors influencing information quality so that they can formulate a strategy at the beginning of an ERP system implementation.
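A multiple regression of this kind can be sketched as follows. The data below are synthetic and generated only to illustrate the shape of the analysis (three predictor scores regressed on an information-quality score); the predictor names, the weights, and the "true" coefficients are all invented, with reengineering effort given the largest weight to mirror the study's finding.

```python
# Hypothetical sketch of the study's multiple regression on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
pm = rng.normal(3.5, 0.8, n)         # project management score
re_ = rng.normal(3.0, 0.9, n)        # reengineering effort score
comm = rng.normal(3.2, 0.7, n)       # interdepartmental communication score

# Synthetic "true" model: reengineering effort carries the largest weight.
iq = 0.5 + 0.20 * pm + 0.55 * re_ + 0.15 * comm + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), pm, re_, comm])
beta, *_ = np.linalg.lstsq(X, iq, rcond=None)   # ordinary least squares
names = ["intercept", "proj_mgmt", "reengineering", "communication"]
for name, b in zip(names, beta):
    print(f"{name:>14}: {b:+.3f}")
```

The fitted coefficients recover the relative importance of the factors, which is how a regression of survey scores supports a claim like "reengineering efforts are the most influential factor".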

Keywords: Enterprise Resource Planning (ERP), information systems (IS), multiple regression, information quality

Procedia PDF Downloads 312
3461 The Role of Piceatannol in Counteracting Glyceraldehyde-3-Phosphate Dehydrogenase Aggregation and Nuclear Translocation

Authors: Joanna Gerszon, Aleksandra Rodacka

Abstract:

In the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease, protein and peptide aggregation processes play a vital role, contributing to the formation of intracellular and extracellular protein deposits. One of the major components of these deposits is oxidatively modified glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Therefore, the purpose of this research was to answer the question of whether piceatannol, a stilbene derivative, counteracts and/or slows down oxidative stress-induced GAPDH aggregation. The study also aimed to determine whether this naturally occurring compound prevents the unfavorable nuclear translocation of GAPDH in hippocampal cells. Isothermal titration calorimetry (ITC) analysis indicated that one molecule of GAPDH can bind up to 8 molecules of piceatannol (7.3 ± 0.9). As a consequence of piceatannol binding to the enzyme, a loss of activity was observed. In parallel with GAPDH inactivation, changes in zeta potential and a loss of free thiol groups were noted. Nevertheless, ligand binding does not influence the secondary structure of GAPDH. Precise molecular docking analysis of the interactions inside the active center suggests that these effects are due to the ability of piceatannol to form a covalent bond with the nucleophilic cysteine residue (Cys149) that is directly involved in the catalytic reaction. Molecular docking also showed that 11 molecules of the ligand can be bound to the dehydrogenase simultaneously. Taking these data into consideration, the influence of piceatannol on the level of GAPDH aggregation induced by excessive oxidative stress was examined. The applied methods (thioflavin-T binding-dependent fluorescence as well as microscopy methods: transmission electron microscopy and Congo Red staining) revealed that piceatannol significantly diminishes the level of GAPDH aggregation.
Finally, studies involving a cellular model (Western blot analyses of nuclear and cytosolic fractions and confocal microscopy) indicated that piceatannol-GAPDH binding prevents GAPDH from the nuclear translocation induced by excessive oxidative stress in hippocampal cells and, in consequence, counteracts cell apoptosis. These studies demonstrate that, by binding to GAPDH, piceatannol blocks the cysteine residue and counteracts its oxidative modifications, which induce oligomerization and GAPDH aggregation, and that it prevents hippocampal cells from apoptosis by retaining GAPDH in the cytoplasm. All these findings provide new insight into the role of the piceatannol-GAPDH interaction and present a potential therapeutic strategy for some neurological disorders related to GAPDH aggregation. This work was supported by the National Science Centre, Poland (grant number 2017/25/N/NZ1/02849).

Keywords: glyceraldehyde-3-phosphate dehydrogenase, neurodegenerative disease, neuroprotection, piceatannol, protein aggregation

Procedia PDF Downloads 149
3460 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools

Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami

Abstract:

The joint development of urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modeling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector coupling strategies for solar power establishment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess solar power and inject it into the urban network during peak periods. The simulations and analyses were carried out in the EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximizing the utilization of solar PV in an urban distribution feeder.
Additionally, 3D models are made in Revit; they are a key component of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
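A back-of-the-envelope sketch of the feeder sizing described above (38 connection points, each with 500 kW of PV and 1 kWh of battery storage), together with a toy charge-during-surplus, discharge-at-peak rule. The hourly load and PV figures are invented for illustration and are not from the Tehran case study.

```python
# Illustrative sketch: aggregate PV/storage capacity of the feeder and a
# trivial battery dispatch rule over a few invented hourly snapshots.
N_POINTS = 38
PV_KW = 500.0
BES_KWH = 1.0

total_pv_kw = N_POINTS * PV_KW
total_bes_kwh = N_POINTS * BES_KWH
print(f"total PV: {total_pv_kw / 1000:.1f} MW, "
      f"total storage: {total_bes_kwh:.0f} kWh")

soc = 0.0  # aggregate battery state of charge, kWh
# (load_kw, pv_kw) for a few illustrative hours (1 kW over 1 h = 1 kWh)
profile = [(9000, 12000), (15000, 14000), (18000, 4000), (12000, 0)]
for load, pv in profile:
    surplus = pv - load
    if surplus > 0:                            # charge on solar surplus
        charge = min(surplus, total_bes_kwh - soc)
        soc += charge
        grid = -(surplus - charge)             # negative = export to grid
    else:                                      # discharge at peak
        discharge = min(-surplus, soc)
        soc -= discharge
        grid = -surplus - discharge            # positive = import from grid
    print(f"load={load} pv={pv} soc={soc:.0f} grid={grid:.0f}")
```

The storage is deliberately tiny relative to the PV (38 kWh against 19 MW), which the toy run makes visible: the battery saturates almost immediately, so most surplus is exported directly.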

Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design

Procedia PDF Downloads 61
3459 Imputing Missing Data in Electronic Health Records: A Comparison of Linear and Non-Linear Imputation Models

Authors: Alireza Vafaei Sadr, Vida Abedi, Jiang Li, Ramin Zand

Abstract:

Missing data is a common challenge in medical research and can lead to biased or incomplete results. When data bias leaks into models, it further exacerbates health disparities: biased algorithms can lead to misclassification and reduced resource allocation and monitoring as part of prevention strategies for certain minorities and vulnerable segments of patient populations, which in turn further reduces the data footprint from the same populations, creating a vicious cycle. This study compares the performance of six imputation techniques, grouped into linear and non-linear models, on two different real-world electronic health record (EHR) datasets representing 17,864 patient records. The mean absolute percentage error (MAPE) and root mean squared error (RMSE) are used as performance metrics, and the results show that the linear models outperformed the non-linear models on both metrics. These results suggest that linear models can sometimes be an optimal choice for imputing laboratory variables in terms of imputation efficiency and the uncertainty of predicted values.
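As a toy illustration of scoring imputers with MAPE and RMSE (on synthetic data, not the study's EHR datasets), the sketch below masks 20% of a lab-like variable and imputes it two ways: a linear least-squares fit, and a simple non-linear k-nearest-neighbours average. Both imputers and the data-generating model are illustrative assumptions.

```python
# Toy comparison of a linear vs a non-linear imputer using MAPE and RMSE.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 10, n)
y = 2.0 * x + 5.0 + rng.normal(0, 1.0, n)    # roughly linear relation

mask = rng.random(n) < 0.2                    # 20% of y treated as missing
x_obs, y_obs = x[~mask], y[~mask]

# Linear imputer: least-squares fit on the observed pairs.
A = np.column_stack([x_obs, np.ones(x_obs.size)])
slope, intercept = np.linalg.lstsq(A, y_obs, rcond=None)[0]
y_lin = slope * x[mask] + intercept

# Non-linear imputer: mean y of the k nearest observed x values.
def knn_impute(x_missing, k=5):
    out = []
    for xm in x_missing:
        idx = np.argsort(np.abs(x_obs - xm))[:k]
        out.append(y_obs[idx].mean())
    return np.array(out)

y_knn = knn_impute(x[mask])

def mape(t, p): return np.mean(np.abs((t - p) / t)) * 100
def rmse(t, p): return np.sqrt(np.mean((t - p) ** 2))

truth = y[mask]
print(f"linear  MAPE={mape(truth, y_lin):.2f}%  RMSE={rmse(truth, y_lin):.2f}")
print(f"kNN     MAPE={mape(truth, y_knn):.2f}%  RMSE={rmse(truth, y_knn):.2f}")
```

Because the synthetic relation is genuinely linear, the linear imputer typically scores lower error here, loosely echoing the study's finding that linear models can win on some laboratory variables.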

Keywords: EHR, machine learning, imputation, laboratory variables, algorithmic bias

Procedia PDF Downloads 68
3458 Hyperspectral Band Selection for Oil Spill Detection Using Deep Neural Network

Authors: Asmau Mukhtar Ahmed, Olga Duran

Abstract:

Hydrocarbon (HC) spills constitute a significant problem that causes great concern for the environment. With the latest technology (hyperspectral imaging) and state-of-the-art techniques (image processing tools), hydrocarbon spills can be detected at an early stage to mitigate their effects. In this study, a controlled laboratory experiment was used in which clay soil was mixed and homogenized with different hydrocarbon types (diesel, bio-diesel, and petrol). The different mixtures were scanned with a HYSPEX hyperspectral camera under constant illumination to generate the hyperspectral datasets used for this experiment. So far, the short-wave infrared region (SWIR) has been exploited for detecting HC spills with excellent accuracy. However, the near-infrared region (NIR) is relatively unexplored with regard to HC contamination and how it affects the spectra of soils. In this study, a deep neural network (DNN) was applied to the controlled datasets to detect and quantify the amount of HC spilled in soils using the near-infrared region. The initial results are encouraging, as they indicate that the DNN was able to identify features of HC in the near-infrared region with a good level of accuracy.
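A minimal sketch of the regression task described above, on synthetic "spectra" rather than the HYSPEX data: a one-hidden-layer network trained with plain gradient descent to predict the hydrocarbon fraction from a handful of NIR band reflectances. The spectral model (reflectance dimming linearly with contamination), the band count, and all hyperparameters are invented for illustration; a real DNN for this task would be deeper and trained on measured spectra.

```python
# Toy one-hidden-layer network regressing contamination fraction from
# synthetic band reflectances (illustration only, not the study's DNN).
import numpy as np

rng = np.random.default_rng(2)
n, bands = 400, 6
frac = rng.uniform(0, 1, n)                   # HC contamination fraction
base = rng.uniform(0.4, 0.9, bands)           # clean-soil reflectances
X = base * (1 - 0.5 * frac[:, None]) + rng.normal(0, 0.02, (n, bands))
y = frac[:, None]

H = 8                                          # hidden units
W1 = rng.normal(0, 0.5, (bands, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1));     b2 = np.zeros(1)
lr = 0.1

for _ in range(3000):                          # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagation for the mean-squared-error loss.
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / n;  gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

train_rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"training RMSE: {train_rmse:.3f}")
```

Since the synthetic contamination signal is strong, the training error falls well below that of a constant predictor, which is the basic sanity check before moving to real spectra and held-out evaluation.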

Keywords: hydrocarbon, Deep Neural Network, short wave infrared region, near-infrared region, hyperspectral image

Procedia PDF Downloads 98
3457 Cyber Bullying Victimization of Elementary School Students and Their Reflections on the Victimization

Authors: Merve Sadetas Sezer, Ismail Sahin, Ahmet Oguz Akturk

Abstract:

With the use of developing technology, mostly for communication and entertainment, students spend considerable time on the internet. In addition to the advantages the internet provides, it brings problems such as addiction and social isolation, as well as virtual violence. Cyberbullying is the common name for the hostile acts to which students are exposed on the internet. The purpose of this study, designed as qualitative research, is to identify the varieties of cyberbullying and their effects on elementary school students. The participants of this research were 6th, 7th and 8th grade students of a primary school, of whom 24 agreed to participate in the study. The students were asked to complete an interview consisting of semi-structured open-ended questions. According to the results obtained in the research, the most important experiences reported by the participants were the breaking of passwords on social networking sites, insults ranging from slang to profanity, and friendship offers from unfamiliar people. According to the participants, the most used techniques to protect themselves from cyberbullying were complaining to the site administrator, closing accounts on social networking sites and countercharging. Suggestions are also presented according to the findings.

Keywords: bullying, cyber-bullying, elementary, peer-relationship, virtual victimization

Procedia PDF Downloads 334
3456 The Role of Named Entity Recognition for Information Extraction

Authors: Girma Yohannis Bade, Olga Kolesnikova, Grigori Sidorov

Abstract:

Named entity recognition (NER) is a building block of information extraction (IE). Although the information extraction process has been automated using a variety of techniques to find and extract relevant information from unstructured documents, the discovery of targeted knowledge still poses a number of research difficulties because of the variability and lack of structure in Web data. NER, a subtask of IE, emerged to ease this difficulty. It deals with finding proper names (named entities), such as the names of persons, countries, locations, organizations, dates, and events in a document, and categorizing them under predetermined labels, which is an initial step in IE tasks. This survey paper presents the roles and importance of NER for IE from the perspective of different algorithms and application domains. It summarizes how researchers have implemented NER in particular application areas such as finance, medicine, defense, business, food science, and archeology. It also outlines the three types of sequence labeling algorithms for NER: feature-based, neural network-based, and rule-based. Finally, the state of the art and the evaluation metrics of NER are presented.
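A deliberately tiny example of the rule-based family mentioned above: a gazetteer lookup plus one surface rule assign BIO labels to tokens. The gazetteer entries and rules are invented for illustration; real rule-based systems use far richer resources, and the feature-based and neural families replace these rules with learned sequence models.

```python
# Minimal rule-based NER sketch producing BIO labels (illustration only).
GAZETTEER = {
    ("New", "York"): "LOC",
    ("World", "Health", "Organization"): "ORG",
}
MONTHS = {"January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"}

def tag(tokens):
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        hit = None
        for phrase, etype in GAZETTEER.items():   # gazetteer lookup
            if tuple(tokens[i:i + len(phrase)]) == phrase:
                hit = (len(phrase), etype)
                break
        if hit:
            n, etype = hit
            labels[i] = f"B-{etype}"              # Begin entity
            for j in range(i + 1, i + n):
                labels[j] = f"I-{etype}"          # Inside entity
            i += n
        elif tokens[i] in MONTHS:                 # surface rule for dates
            labels[i] = "B-DATE"
            i += 1
        else:
            i += 1                                # Outside any entity
    return labels

print(tag("The World Health Organization met in New York in March".split()))
```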

Keywords: the role of NER, named entity recognition, information extraction, sequence labeling algorithms, named entity application area

Procedia PDF Downloads 65
3455 A Multi-Objective Methodology for Selecting Lean Initiatives in Modular Construction Companies

Authors: Saba Shams Bidhendi, Steven Goh, Andrew Wandel

Abstract:

The implementation of lean manufacturing initiatives has produced significant impacts in improving operational performance and reducing manufacturing wastes in the production process. However, selecting an appropriate set of lean strategies is critical to avoid misapplication of lean manufacturing techniques and a consequential increase in non-value-adding activities. To the authors' best knowledge, there is currently no methodology for selecting lean strategies that considers their impacts on manufacturing wastes and performance metrics simultaneously. In this research, a multi-objective methodology is proposed that suggests an appropriate set of lean initiatives based on their impacts on performance metrics and manufacturing wastes, within the manufacturer's resource limitations. The proposed methodology suggests for implementation the set of lean initiatives with the highest impact on the identified critical performance metrics and manufacturing wastes. Manufacturers can therefore be assured that implementing the suggested lean tools improves their production performance and reduces manufacturing wastes at the same time. A case study was conducted to demonstrate the effectiveness of, and to validate, the proposed model and methodology.
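The selection idea can be sketched as a small combinatorial optimization: each lean initiative gets an estimated impact on performance metrics, an estimated waste reduction, and a resource cost, and the best affordable subset maximizes a weighted sum of the two objectives. The initiative names, scores, weights, and budget below are invented for illustration and are not the paper's data or its exact method (which weighs the objectives within a dedicated methodology).

```python
# Hypothetical sketch: exhaustive search for the best affordable set of
# lean initiatives under a weighted two-objective score.
from itertools import combinations

# (name, performance impact, waste reduction, resource cost) -- all invented
INITIATIVES = [
    ("5S",           3.0, 2.0, 2),
    ("Kanban",       4.0, 3.5, 4),
    ("Value stream", 5.0, 4.0, 5),
    ("SMED",         2.5, 3.0, 3),
]
BUDGET = 9
W_PERF, W_WASTE = 0.6, 0.4      # objective weights

def score(subset):
    """Weighted sum of performance impact and waste reduction."""
    return sum(W_PERF * p + W_WASTE * w for _, p, w, _ in subset)

best = max(
    (c for r in range(len(INITIATIVES) + 1)
       for c in combinations(INITIATIVES, r)
       if sum(i[3] for i in c) <= BUDGET),    # resource limitation
    key=score,
)
print([name for name, *_ in best], f"score={score(best):.1f}")
```

Notably, the cheapest trio can beat the two highest-scoring individual initiatives: the budget constraint, not per-initiative ranking, drives the selection, which is why a joint multi-objective view matters.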

Keywords: lean manufacturing, lean strategies, manufacturing wastes, manufacturing performance, optimisation, decision making

Procedia PDF Downloads 176