Search results for: learning management systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22248

13368 Audio-Lingual Method and the English-Speaking Proficiency of Grade 11 Students

Authors: Marthadale Acibo Semacio

Abstract:

Speaking skill is a crucial part of English language teaching and learning, which underlines the great importance of this skill in English language classes. Through speaking, ideas and thoughts are shared with other people, and a smooth interaction between people takes place. The study examined the levels of speaking proficiency of the control and experimental groups on pronunciation, grammatical accuracy, and fluency. As a quasi-experimental study, it also determined the presence or absence of significant changes in the students' speaking proficiency levels in terms of correct pronunciation, grammatical accuracy, and fluency when the two groups were taught English using the traditional and audio-lingual methods, respectively. Descriptive and inferential statistics were employed according to the stated specific problems. The study employed a video presentation, about which the students were given prior information. In the video, the teacher acts as the model, giving instructions on what is to be done, after which the students perform the activity. The students were paired purposively based on their learning capabilities. Observing proper research ethics, their performance was audio-recorded to help the researcher assess the learners using a modified speaking rubric. The study revealed that those under the traditional method were more fluent than those under the audio-lingual method. With respect to the way in which each method deals with the feelings of the student, the audio-lingual method fails to provide a principle relating to this area and follows the assumption that the students' intrinsic motivation to learn the target language will spring from their interest in the structure of the language. However, the speaking proficiency levels of the students were remarkably reinforced in reading different words through the aid of aural media with their teachers. The study concluded that the audio-lingual method is not a stand-alone teaching method but an aid that helps the teacher improve the students' speaking proficiency in the English language. Hence, the audio-lingual approach is encouraged in teaching the English language, on top of the chalk-and-talk or traditional method, to improve the speaking proficiency of students.

Keywords: audio-lingual, speaking, grammar, pronunciation, accuracy, fluency, proficiency

Procedia PDF Downloads 51
13367 Effect of Surfactant Level of Microemulsions and Nanoemulsions on Cell Viability

Authors: Sonal Gupta, Rakhi Bansal, Javed Ali, Reema Gabrani, Shweta Dang

Abstract:

Nanoemulsions (NEs) and microemulsions (MEs) have been attractive tools for the encapsulation of both hydrophilic and lipophilic actives. Both these systems are composed of an oil phase, surfactant, co-surfactant, and aqueous phase. Depending upon the application and intended use, both oil-in-water and water-in-oil emulsions can be designed. NEs are fabricated using high-energy methods employing a lower percentage of surfactant compared to MEs, which are self-assembled drug delivery systems. Owing to the nanometric size of the droplets, these systems have been widely used to enhance the solubility and bioavailability of natural as well as synthetic molecules. The aim of the present study is to assess the effect of the percentage of surfactants on the viability of Vero cells (African Green Monkey kidney epithelial cells) via the MTT assay. Green tea catechin (Polyphenon 60)-loaded ME, prepared by low-energy vortexing, and NE, prepared by high-energy ultrasonication, were formulated using the same excipients (Labrasol as oil, Cremophor EL as surfactant, and glycerol as co-surfactant); however, the percentage of oil and surfactant needed to prepare the ME was higher than for the NE. These formulations, along with their excipients (oilME = 13.3%, SmixME = 26.67%; oilNE = 10%, SmixNE = 13.52%), were added to Vero cells for 24 h. The tetrazolium dye 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) is reduced by live cells, and this reaction is used as the end point to evaluate the cytotoxicity of a test formulation. Results of the MTT assay indicated that oil at the different percentages exhibited almost equal cell viability (oilME ≅ oilNE), while the surfactant mixtures showed a significant difference in cell viability values (SmixME < SmixNE). Polyphenon 60-loaded ME and its placebo ME showed higher toxicity as compared to Polyphenon 60-loaded NE and its placebo NE, which can be attributed to the higher concentration of surfactants present in MEs. Another probable reason for the high percentage cell viability of Polyphenon 60-loaded NE might be the effective release of Polyphenon 60 from the NE formulation, which helps in the sustenance of Vero cells.
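
As a rough illustration of how MTT absorbance readings translate into the percent cell viability values compared above, the short Python sketch below computes viability relative to an untreated control; the absorbance numbers and well layout are hypothetical and are not data from this study.

```python
import numpy as np

# Hypothetical 570 nm absorbance readings (triplicate wells); not the study's data.
blank      = np.array([0.06, 0.05, 0.06])   # medium + MTT, no cells
control    = np.array([0.92, 0.95, 0.90])   # untreated Vero cells
treated_me = np.array([0.41, 0.38, 0.44])   # cells + microemulsion (higher Smix)
treated_ne = np.array([0.70, 0.73, 0.68])   # cells + nanoemulsion (lower Smix)

def percent_viability(treated, control, blank):
    """Standard MTT calculation: viability relative to the untreated control."""
    return 100.0 * (treated.mean() - blank.mean()) / (control.mean() - blank.mean())

print(f"ME-treated viability: {percent_viability(treated_me, control, blank):.1f} %")
print(f"NE-treated viability: {percent_viability(treated_ne, control, blank):.1f} %")
```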

Keywords: cell viability, microemulsion, MTT, nanoemulsion, surfactants, ultrasonication

Procedia PDF Downloads 411
13366 Study on Errors in Estimating the 3D Gaze Point for Different Pupil Sizes Using Eye Vergences

Authors: M. Pomianek, M. Piszczek, M. Maciejewski

Abstract:

Binocular eye tracking technology is increasingly being used in industry, entertainment, and marketing analysis. In the case of virtual reality, eye tracking systems are already the basis for user interaction with the environment. In such systems, high accuracy in determining the user's eye fixation point is very important due to the specificity of the virtual reality head-mounted display (HMD). Often, however, there are unknown errors in the eye tracking technology used, as well as errors resulting from the positioning of the devices in relation to the user's eyes. But can the virtual environment itself influence estimation errors? The paper presents mathematical analyses and empirical studies of the determination of the fixation point and of errors resulting from the change in pupil size in response to the intensity of the displayed scene. The article covers both static laboratory tests and tests on a real user. Based on the research results, optimization solutions were proposed to reduce gaze estimation errors. The studies show that errors in estimating the fixation point can be minimized both by improving the pupil positioning algorithm in the video image and by using more precise methods to calibrate the eye tracking system in three-dimensional space.
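
For readers unfamiliar with vergence-based gaze estimation, the sketch below shows one common way to recover a 3D fixation point: each eye defines a gaze ray, and the estimate is taken as the midpoint of the shortest segment between the two (generally skew) rays. The eye positions and gaze directions are illustrative values only, not measurements from these experiments.

```python
import numpy as np

def gaze_point_from_vergence(p_left, d_left, p_right, d_right):
    """Midpoint of the shortest segment between two gaze rays p + t*d."""
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b            # ~0 when the rays are (nearly) parallel
    if abs(denom) < 1e-9:
        raise ValueError("Gaze rays are parallel; vergence point undefined")
    t_left = (b * e - c * d) / denom
    t_right = (a * e - b * d) / denom
    closest_left = p_left + t_left * d_left
    closest_right = p_right + t_right * d_right
    return 0.5 * (closest_left + closest_right)

# Illustrative geometry (metres): interpupillary distance 64 mm, target ~1 m ahead.
p_l, p_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.1, 0.05, 1.0])
print(gaze_point_from_vergence(p_l, target - p_l, p_r, target - p_r))
```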

Keywords: eye tracking, fixation point, pupil size, virtual reality

Procedia PDF Downloads 117
13365 Land Suitability Analysis for Rice Production in a Typical Watershed of Southwestern Nigeria: A Sustainability Pathway

Authors: Oluwagbenga O. Isaac Orimoogunje, Omolola Helen Oshosanya

Abstract:

The study examined land management in a typical watershed in southwestern Nigeria with a view to ascertaining its impact on land suitability analysis for rice cultivation and production. The study applied the analytical hierarchy process (AHP), weighted overlay analysis (WOA), multi-criteria decision-making techniques, and suitability map calculations within a Geographic Information System environment. Five main criteria were used, and these include climate, topography, soil fertility, macronutrients, and micronutrients. A consistency ratio (CR) of 0.067 was obtained for rice cultivation. The results showed that 95% of the land area is suitable for rice cultivation, with pH units ranging between 4.6 and 6.0, organic matter of 1.4–2.5 g kg⁻¹ and base saturation of more than 80%. The study concluded that the Ofiki watershed is a potential site for large-scale rice cultivation in a sustainable capacity.
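
The consistency ratio of 0.067 quoted above comes from Saaty's consistency check on the AHP pairwise comparison matrix. The sketch below shows how such a CR is computed for a hypothetical 5x5 comparison matrix over the five criteria; the matrix entries are illustrative and not the study's actual expert judgments.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for the five criteria (Saaty 1-9 scale).
A = np.array([
    [1,   3,   2,   4,   5],
    [1/3, 1,   1/2, 2,   3],
    [1/2, 2,   1,   3,   4],
    [1/4, 1/2, 1/3, 1,   2],
    [1/5, 1/3, 1/4, 1/2, 1],
], dtype=float)

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty random index

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
lam_max = eigvals.real.max()                       # principal eigenvalue
weights = np.abs(eigvecs[:, eigvals.real.argmax()].real)
weights /= weights.sum()                           # criterion weights

CI = (lam_max - n) / (n - 1)                       # consistency index
CR = CI / RI[n]                                    # consistency ratio (< 0.10 is acceptable)
print("weights:", np.round(weights, 3), "CR =", round(CR, 3))
```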

Keywords: land management, land characteristics, land suitability, rice production, watershed

Procedia PDF Downloads 61
13364 Research on Measuring Operational Risk in Commercial Banks Based on Internal Control

Authors: Baobao Li

Abstract:

Operational risk covers all operations of commercial banks and has a close relationship with a bank's internal control. In commercial banks' management practice, however, internal control is usually separated from operational risk measurement. With the increase in operational risk events in recent years, operational risk has received more and more attention from regulators and bank management. The paper first discussed the relationship between internal control and operational risk management and used the CVaR-POT model to measure operational risk, and then put forward a modified measurement method (using operational risk assessment results to modify the measurement results of the CVaR-POT model). The paper also analyzed the necessity and rationality of this method. The method takes into consideration the influence of internal control, improves the accuracy and effectiveness of operational risk measurement, and saves economic capital for commercial banks, avoiding the drawbacks of one-sided use of some mainstream models.
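
For readers unfamiliar with the CVaR-POT approach mentioned above, the sketch below fits a generalized Pareto distribution to loss exceedances over a threshold and derives VaR and CVaR from the standard peaks-over-threshold formulas. The simulated loss data, threshold choice, and confidence level are illustrative assumptions, not the paper's data or calibration.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=10, sigma=1.2, size=5000)   # simulated operational losses

u = np.quantile(losses, 0.90)            # threshold (90th percentile, an assumption)
exceedances = losses[losses > u] - u
n, n_u = len(losses), len(exceedances)

# Fit a GPD to the exceedances; xi is the shape parameter, beta the scale.
xi, _, beta = genpareto.fit(exceedances, floc=0)

q = 0.999                                # confidence level for the risk measures
var_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
cvar_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)   # valid for xi < 1

print(f"xi={xi:.3f}, beta={beta:.1f}, VaR_99.9={var_q:,.0f}, CVaR_99.9={cvar_q:,.0f}")
```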

Keywords: commercial banks, internal control, operational risk, risk measurement

Procedia PDF Downloads 379
13363 Techno Economic Analysis of CAES Systems Integrated into Gas-Steam Combined Plants

Authors: Coriolano Salvini

Abstract:

The increasing utilization of renewable energy sources for electric power production calls for the introduction of energy storage systems to match the electric demand over time. Although many countries are pursuing a “decarbonized” electrical system as a final goal, in the next decades traditional fossil-fuel-fed power plants will still play a relevant role in fulfilling the electric demand. Presently, such plants provide grid ancillary services (frequency control, grid balance, reserve, etc.) by adapting the output power to the grid requirements. An interesting option is represented by the possibility of using traditional plants to improve the grid storage capabilities. The present paper is addressed to small-medium size systems suited for distributed energy storage. The proposed Energy Storage System (ESS) is based on a Compressed Air Energy Storage (CAES) integrated into a Gas-Steam Combined Cycle (GSCC) or a gas-turbine-based CHP plant. The system can be incorporated in an ex novo built plant or added to an already existing one. To avoid any geological restriction related to the availability of natural compressed air reservoirs, artificial storage is addressed. During the charging phase, electric power is absorbed from the grid by an electric-driven intercooled/aftercooled compressor. In the course of the discharge phase, the compressed stored air is sent to a heat transfer device fed by hot gas taken upstream of the Heat Recovery Steam Generator (HRSG) and subsequently expanded for power production. To maximize the output power, a staged reheated expansion process is adopted. The specific power production per kilogram per second of exhaust gas used to heat the stored air is two to three times larger than that achieved if the gas were used to produce steam in the HRSG. As a result, a relevant power augmentation is attained with respect to normal GSCC plant operation without additional use of fuel. Therefore, the excess output power can be considered “fuel free”, and the storage system can be compared to “pure” ESSs such as electrochemical, pumped hydro, or adiabatic CAES. Representative cases featuring different power absorption, production capability, and storage capacity have been taken into consideration. For each case, a technical optimization aimed at maximizing the storage efficiency has been carried out. On the basis of the resulting storage pressure and volume, number of compression and expansion stages, air heater arrangement, and process quantities found for each case, a cost estimation of the storage systems has been performed. Storage efficiencies from 0.6 to 0.7 have been assessed. Capital costs in the range of 400-800 €/kW and 500-1000 €/kWh have been estimated. Such figures are similar to or lower than those of alternative storage technologies.
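
A back-of-the-envelope sketch of the two headline figures quoted above (storage efficiency and specific capital cost) is given below; the energy and cost numbers are arbitrary placeholders used only to show how the ratios are formed, not results from the paper.

```python
# Illustrative figures only; not the paper's case data.
e_charge_mwh = 10.0        # electric energy absorbed from the grid during charging
e_discharge_mwh = 6.5      # "fuel free" extra electric energy produced during discharge

capex_eur = 4_000_000      # assumed capital cost of the storage section
p_discharge_kw = 8_000     # rated discharge power
capacity_kwh = 6_500       # usable storage capacity

storage_efficiency = e_discharge_mwh / e_charge_mwh   # ~0.6-0.7 in the paper
cost_per_kw = capex_eur / p_discharge_kw               # EUR/kW
cost_per_kwh = capex_eur / capacity_kwh                # EUR/kWh

print(f"efficiency={storage_efficiency:.2f}, "
      f"capex={cost_per_kw:.0f} EUR/kW, {cost_per_kwh:.0f} EUR/kWh")
```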

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), gas steam combined cycle (GSCC), techno-economic analysis

Procedia PDF Downloads 198
13362 State and Benefit: Delivering the First State of the Bays Report for Victoria

Authors: Scott Rawlings

Abstract:

Victoria’s first State of the Bays report is an historic baseline study of the health of Port Phillip Bay and Western Port. The report includes 50 assessments of 36 indicators across a broad array of topics, from the nitrogen cycle and water quality to key marine species and habitats. This paper discusses the processes for determining and assessing the indicators and comments on future priorities identified to maintain and improve the health of these waterways. Victoria’s population is now at six million and growing at a rate of over 100,000 people per year (the highest increase in Australia), and the population of greater Melbourne is over four million. Port Phillip Bay and Western Port are vital marine assets at the centre of this growth and will require adaptive strategies if they are to remain in good condition and continue to deliver environmental, economic and social benefits. In 2014, it was in recognition of these pressures that the incoming Victorian Government committed to reporting on the state of the bays every five years. The inaugural State of the Bays report was issued by the independent Victorian Commissioner for Environmental Sustainability. The report brought together what is known about both bays, based on existing research. It was a baseline on which future reports will build and, over time, include more of Victoria’s marine environment. Port Phillip Bay and Western Port generally demonstrate healthy systems. Specific threats linked to population growth are a significant pressure. Impacts are more significant where human activity is more intense and where nutrients are transported to the bays around the mouths of creeks and drainage systems. The transport of high loads of nutrients and pollutants to the bays from peak rainfall events is likely to increase with climate change, as will sea level rise. Marine pests are also a threat. More than 100 introduced marine species have become established in Port Phillip Bay and can compete with native species, alter habitat, reduce important fish stocks and potentially disrupt nitrogen cycling processes. This study confirmed that our data collection regime is better within the Marine Protected Areas of Port Phillip Bay than in other parts. The State of the Bays report is a positive and practical example of what can be achieved through collaboration and cooperation between environmental reporters, government agencies, academic institutions, data custodians, and NGOs. The State of the Bays 2016 provides an important foundation by identifying knowledge gaps and research priorities for future studies and reports on the bays. It builds a strong evidence base to effectively manage the bays and support an adaptive management framework. The report proposes a set of indicators for future reporting that will support a step-change in our approach to monitoring and managing the bays: a shift from reporting only on what we do know to reporting on what we need to know.

Keywords: coastal science, marine science, Port Phillip Bay, state of the environment, Western Port

Procedia PDF Downloads 196
13361 Improving the Constructability of Highway Design Plans

Authors: R. Edward Minchin Jr.

Abstract:

The U.S. Federal Highway Administration (FHWA) Every Day Counts Program (EDC) has resulted in state DOTs putting ever more emphasis on speeding up the delivery of highway and bridge construction projects for use by the driving public. This has resulted in an increase in the use of alternative construction delivery systems such as design-build (D-B), construction manager at-risk (CMR) or construction manager/general contractor (CM/GC), and in the addition of alternative technical concepts (ATCs) to traditional design-bid-build (DBB) contracts. ATCs have exhibited great potential for delivering substantial benefits like cost savings, increased constructability, and quicker project delivery. Previous research has found that knowledge of project constructability was lacking in state Department of Transportation (DOT) planning, programming, and environmental staffs. Many agencies have therefore relied on a set of ‘acceptable’ design solutions over the years of working with their local resource agencies. The result is that the permitting process for several government agencies has become increasingly restrictive, so that the DOTs and their industry partners lose the ability to innovate after a permit is approved. The intent of this paper is to report on the research team’s progress in this ongoing effort to furnish the United States government with a uniform set of guidelines for the application of constructability reviews during all phases of project development and delivery. The research uses surveys and interviews to determine which states have implemented formal programs to ensure that the constructor is furnished with a set of contract documents that affords said constructor the best possible opportunity to successfully construct the project with the highest quality standards, within the contract duration, and without exceeding the construction budget. Once these states are identified, workshops are held all over the nation, resulting in the team learning the best current practices and gaining the ability to recommend new practices that will improve the process. The plan is for the FHWA to encourage or require state DOTs to use these practices on all federally funded highway and bridge construction projects. The project deliverable is a Guidebook for FHWA to use in disseminating the recommended practices to the states.

Keywords: alternative construction delivery, alternative technical concepts, constructability, construction design plans

Procedia PDF Downloads 193
13360 Optimization of Oxygen Plant Parameters Simulating with MATLAB

Authors: B. J. Sonani, J. K. Ratnadhariya, Srinivas Palanki

Abstract:

Cryogenic engineering is a fast-growing branch of modern technology. There are various applications of cryogenic engineering, such as liquefaction in the gas industries, metal industries, medical science, space technology, and transportation. Low-temperature technology has enabled superconducting materials, which help reduce friction and wear in various components of such systems. Liquid oxygen, hydrogen, and helium play a vital role in space applications. The liquefaction process produces very low-temperature liquids for various research and modern applications. The air liquefaction system for oxygen plants in the gas industries is based on the Claude cycle. The effect of process parameters on the overall system is difficult to analyse by manual calculation, and this provides the motivation to use process simulators for understanding the steady-state and dynamic behaviour of such systems. The parametric study of this system via MATLAB simulations provides useful guidelines for the preliminary design of an air liquefaction system based on the Claude cycle. Every organization strives to reduce cost and use the optimum performance of the plant to stay competitive in the market.

Keywords: cryogenic, liquefaction, low-temperature, oxygen, Claude cycle, optimization, MATLAB

Procedia PDF Downloads 307
13359 Applying the Crystal Model to Different Nuclear Systems

Authors: A. Amar

Abstract:

The angular distributions of the nuclear systems under consideration have been analyzed in the framework of the optical model (OM), where the real part was taken in the crystal model form. A crystal model (CM) has been applied to deuterons elastically scattered by ⁶,⁷Li and ⁹Be. A crystal model (CM) + distorted-wave Born approximation (DWBA) + dynamic polarization potential (DPP) combination has also been applied to deuterons elastically scattered by ⁶,⁷Li and ⁹Be. In addition, a crystal model has been applied to ⁶Li elastically scattered by ¹⁶O and ²⁸Sn, as well as to the ⁷Li+⁷Li system and the ¹²C(alpha,⁸Be)⁸Be reaction. The continuum-discretized coupled-channels (CDCC) method has been applied to the ⁷Li+⁷Li system, and agreement between the crystal model and the CDCC method has been observed. In general, the models succeeded in reproducing the differential cross sections over the full angular range and for all the energies under consideration.

Keywords: optical model (OM), crystal model (CM), distorted-wave Born approximation (DWBA), dynamic polarization potential (DPP), the continuum-discretized coupled-channels (CDCC) method, and deuteron elastically scattered by ⁶,⁷Li and ⁹Be

Procedia PDF Downloads 58
13358 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy and restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the “costs” cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. Then, the problem is solved with three multi-objective decision-making (MODM) methods. The three methods are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and CPU time. Sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
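
As a minimal illustration of the constraint-based multi-objective approach compared above, the toy linear program below optimizes one objective while the second objective is converted into a constraint that is tightened step by step (the basic epsilon-constraint idea). The two products, coefficients, and bounds are hypothetical and far simpler than the paper's EPQ model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy bi-objective model: x = production quantities of two products.
# f1 = total profit (maximize), f2 = total holding/shortage burden (minimize).
profit = np.array([5.0, 4.0])
holding = np.array([1.0, 2.0])

# Shared resources: warehouse space and budget (illustrative numbers).
A_res = np.array([[2.0, 3.0],
                  [5.0, 4.0]])
b_res = np.array([100.0, 180.0])
bounds = [(0, None), (0, None)]

# Epsilon-constraint: maximize profit while the second objective is capped at eps.
for eps in [40, 30, 20, 10]:
    A_ub = np.vstack([A_res, holding])
    b_ub = np.append(b_res, eps)
    res = linprog(-profit, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    if res.success:
        print(f"eps={eps:3d}  profit={-res.fun:7.1f}  holding={holding @ res.x:6.1f}")
```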

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 112
13357 Optimizing Groundwater Pumping for a Complex Groundwater/Surface Water System

Authors: Emery A. Coppola Jr., Suna Cinar, Ferenc Szidarovszky

Abstract:

Over-pumping of groundwater resources is a serious problem world-wide. In addition to depleting this valuable resource, hydraulically connected sensitive ecological resources like wetlands and surface water bodies are often impacted and even destroyed by over-pumping. Effectively managing groundwater in a way that satisfies human demand while preserving natural resources is a daunting challenge that will only worsen with growing human populations and climate change. As presented in this paper, a numerical flow model developed for a hypothetical but realistic groundwater/surface water system was combined with formal optimization. Response coefficients were used in an optimization management model to maximize groundwater pumping in a complex, multi-layered aquifer system while protecting against groundwater overdraft, streamflow depletion, and wetland impacts. Pumping optimization was performed for different constraint sets that reflect different resource protection preferences, yielding significantly different optimal pumping solutions. A sensitivity analysis on the optimal solutions was performed on select response coefficients to identify differences between wet and dry periods. Stochastic optimization was also performed, where the uncertainty associated with changing irrigation demand due to changing weather conditions is accounted for. One of the strengths of this optimization approach is that it can efficiently and accurately identify superior management strategies that minimize risk and adverse environmental impacts associated with groundwater pumping under different hydrologic conditions.
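
To make the response-coefficient idea concrete, the sketch below sets up a small linear program that maximizes total pumping subject to drawdown, streamflow-depletion, and wetland limits expressed through a response matrix; the coefficients, limits, and well capacities are invented for illustration and are not taken from the model described in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Response coefficients: impact per unit pumping at 3 wells on
# 2 drawdown control points, 1 stream reach and 1 wetland (rows).
R = np.array([[0.8, 0.3, 0.1],    # drawdown at control point 1
              [0.2, 0.7, 0.4],    # drawdown at control point 2
              [0.1, 0.2, 0.6],    # streamflow depletion
              [0.3, 0.1, 0.2]])   # wetland stage decline
limits = np.array([5.0, 5.0, 2.5, 1.5])       # maximum allowed impacts
capacity = [(0, 10.0)] * 3                    # per-well pumping capacity

# Maximize total pumping (linprog minimizes, so negate the objective).
res = linprog(c=-np.ones(3), A_ub=R, b_ub=limits, bounds=capacity)
print("optimal pumping rates:", np.round(res.x, 2),
      "  total:", round(-res.fun, 2))
```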

Keywords: numerical groundwater flow modeling, water management optimization, groundwater overdraft, streamflow depletion

Procedia PDF Downloads 217
13356 An Evaluation of a First Year Introductory Statistics Course at a University in Jamaica

Authors: Ayesha M. Facey

Abstract:

The evaluation sought to determine the factors associated with the high failure rate among students taking a first-year introductory statistics course. Utilizing Tyler’s Objective Based Model, the main objectives were: to assess the effectiveness of the lecturer’s teaching strategies; to determine the proportion of students who attend lectures and tutorials frequently and to determine the impact of infrequent attendance on performance; to determine how the assigned activities assisted students’ understanding of the course content; to ascertain the possible issues being faced by students in understanding the course material and obtain possible solutions to the challenges; and to determine whether the learning outcomes had been achieved based on an assessment of the second in-course examination. A quantitative survey research strategy was employed, and the study population was students enrolled in semester one of the academic year 2015/2016. A convenience sampling approach was employed, resulting in a sample of 98 students. Primary data were collected using self-administered questionnaires over a one-week period. Secondary data were obtained from the results of the second in-course examination. Data were entered and analyzed in SPSS version 22, and both univariate and bivariate analyses were conducted on the information obtained from the questionnaires. Univariate analyses provided a description of the sample through means, standard deviations and percentages, while bivariate analyses were done using Spearman’s Rho correlation coefficient and Chi-square analyses. For the secondary data, an item analysis was performed to obtain the reliability of the examination questions, the difficulty index and the discrimination index. The examination results also provided information on the weak areas of the students and highlighted the learning outcomes that were not achieved. Findings revealed that students were more likely to participate in lectures than tutorials and that attendance was high for both lectures and tutorials. There was a significant relationship between participation in lectures and performance on the examination. However, a high proportion of students had been absent from three or more tutorials as well as lectures. A higher proportion of students indicated that they only sometimes completed the assignments obtained from the lectures, while they rarely completed tutorial worksheets. Students who were more likely to complete their assignments were significantly more likely to perform well on their examination. Additionally, students faced a number of challenges in understanding the course content, and the topics of probability, binomial distribution and normal distribution were the most challenging. The item analysis also highlighted these topics as problem areas. Problems with mathematics, application and analysis were the major challenges faced by students, and most students indicated that some of the challenges could be alleviated if additional examples were worked in lectures and they were given more time to solve questions. Analysis of the examination results showed that a number of learning outcomes were not achieved for a number of topics. Based on the findings, recommendations were made suggesting adjustments to grade allocations, delivery of lectures, and methods of assessment.
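
For clarity on the item-analysis step mentioned above, the snippet below computes a difficulty index (proportion correct) and an upper-lower (27%) discrimination index for each examination item from a 0/1 score matrix; the score matrix here is randomly generated for illustration, not the course's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)
scores = (rng.random((98, 10)) > 0.45).astype(int)   # 98 students x 10 items (0/1)

total = scores.sum(axis=1)
order = np.argsort(total)
k = int(round(0.27 * scores.shape[0]))               # size of upper/lower 27% groups
lower, upper = scores[order[:k]], scores[order[-k:]]

difficulty = scores.mean(axis=0)                      # proportion answering correctly
discrimination = upper.mean(axis=0) - lower.mean(axis=0)

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {i:2d}: difficulty={p:.2f}  discrimination={d:.2f}")
```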

Keywords: evaluation, item analysis, Tyler’s objective based model, university statistics

Procedia PDF Downloads 175
13355 Managing Psychogenic Non-Epileptic Seizure Disorder: The Benefits of Collaboration between Psychiatry and Neurology

Authors: Donald Kushon, Jyoti Pillai

Abstract:

Psychogenic Non-epileptic Seizure Disorder (PNES) is a challenging clinical problem for the neurologist. This study explores the benefits of on-site collaboration between psychiatry and neurology in the management of PNES. A 3-month period at a university hospital seizure clinic is described, detailing specific management approaches taken as a result of this collaboration. The study describes four areas of interest: (1) the presentation of the diagnosis of PNES to the patient after the video EEG results confirm the diagnosis; (2) the identification of co-morbid psychiatric illness; (3) treatment with specific psychotherapeutic interventions (including Cognitive Behavioral Therapy) and psychopharmacologic interventions (primarily SSRIs); and (4) preliminary treatment outcomes.

Keywords: cognitive behavioral therapy (CBT), psychogenic non-epileptic seizure disorder (PNES), selective serotonin reuptake inhibitors (SSRIs), video electroencephalogram (VEEG)

Procedia PDF Downloads 299
13354 Drinking Water Quality Assessment Using Fuzzy Inference System Method: A Case Study of Rome, Italy

Authors: Yas Barzegar, Atrin Barzegar

Abstract:

Drinking water quality assessment is a major issue today; technology and practices are continuously improving, and Artificial Intelligence (AI) methods have proven their efficiency in this domain. The current research develops a hierarchical fuzzy model for predicting drinking water quality in Rome (Italy). The Mamdani fuzzy inference system (FIS) is applied with different defuzzification methods. The proposed model includes three intermediate fuzzy models and one final fuzzy model. Each fuzzy model consists of three input parameters and 27 fuzzy rules. The model is developed for water quality assessment with a dataset considering nine parameters (alkalinity, hardness, pH, Ca, Mg, fluoride, sulphate, nitrates, and iron). Fuzzy-logic-based methods have been demonstrated to be appropriate for addressing uncertainty and subjectivity in drinking water quality assessment; they are an effective means of managing complicated, uncertain water systems and predicting drinking water quality. The FIS method can provide an effective solution for complex systems, and it can be modified easily to improve performance.
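
To illustrate the Mamdani inference and defuzzification steps described above, the sketch below implements a deliberately tiny fuzzy system in plain NumPy (triangular memberships, min implication, max aggregation, centroid defuzzification). The membership ranges and the two rules are invented for demonstration and are far simpler than the paper's 27-rule, nine-parameter model.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function; a == b or b == c gives a shoulder."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    up = (x >= a) & (x <= b)
    down = (x > b) & (x <= c)
    y[up] = 1.0 if a == b else (x[up] - a) / (b - a)
    y[down] = 1.0 if b == c else (c - x[down]) / (c - b)
    return y

quality = np.linspace(0, 100, 1001)            # output universe: water quality score
q_poor = trimf(quality, 0, 0, 50)
q_good = trimf(quality, 50, 100, 100)

def assess(ph, nitrate):
    # Input fuzzification (illustrative ranges, not the paper's membership functions).
    mu_ph_ok = float(trimf([ph], 6.0, 7.5, 9.0)[0])          # "acceptable pH"
    mu_no3_low = float(trimf([nitrate], 0.0, 0.0, 50.0)[0])  # "low nitrate"
    # Rule 1: IF pH acceptable AND nitrate low THEN quality good   (min implication)
    act_good = np.minimum(min(mu_ph_ok, mu_no3_low), q_good)
    # Rule 2: IF pH not acceptable OR nitrate not low THEN quality poor
    act_poor = np.minimum(max(1 - mu_ph_ok, 1 - mu_no3_low), q_poor)
    aggregated = np.maximum(act_good, act_poor)              # max aggregation
    return (quality * aggregated).sum() / aggregated.sum()   # centroid defuzzification

print("good water:", round(assess(ph=7.3, nitrate=15), 1))
print("poor water:", round(assess(ph=5.0, nitrate=80), 1))
```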

Keywords: water quality, fuzzy logic, smart cities, water attribute, fuzzy inference system, membership function

Procedia PDF Downloads 58
13353 Addressing Microbial Contamination in East Hararghe, Oromia, Ethiopia: Improving Water Sanitation Infrastructure and Promoting Safe Water Practices for Enhanced Food Safety

Authors: Tuji Jemal Ahmed, Hussen Beker Yusuf

Abstract:

Food safety is a major concern worldwide, with microbial contamination being one of the leading causes of foodborne illnesses. In Ethiopia, drinking water and untreated groundwater are a primary source of microbial contamination, leading to significant health risks. East Hararghe, Oromia, is one of the regions in Ethiopia that has been affected by this problem. This paper provides an overview of the impact of untreated groundwater on human health in Haramaya Rural District, East Hararghe and highlights the urgent need for sustained efforts to address the water sanitation supply problem. The use of untreated groundwater for drinking and household purposes in Haramaya Rural District, East Hararghe is prevalent, leading to high rates of waterborne illnesses such as diarrhea, typhoid fever, and cholera. The impact of these illnesses on human health is significant, resulting in significant morbidity and mortality, especially among vulnerable populations such as children and the elderly. In addition to the direct health impacts, waterborne illnesses also have indirect impacts on human health, such as reduced productivity and increased healthcare costs. Groundwater sources are susceptible to microbial contamination due to the infiltration of surface water, human and animal waste, and agricultural runoff. In Haramaya Rural District, East Hararghe, poor water management practices, inadequate sanitation facilities, and limited access to clean water sources contribute to the prevalence of untreated groundwater as a primary source of drinking water. These underlying causes of microbial contamination highlight the need for improved water sanitation infrastructure, including better access to safe drinking water sources and the implementation of effective treatment methods. The paper emphasizes the need for regular water quality monitoring, especially for untreated groundwater sources, to ensure safe drinking water for the population. The implementation of effective preventive measures, such as the use of effective disinfectants, proper waste disposal methods, and regular water quality monitoring, is crucial to reducing the risk of contamination and improving public health outcomes in the region. Community education and awareness-raising campaigns can also play a critical role in promoting safe water practices and reducing the risk of contamination. These campaigns can include educating the population on the importance of boiling water before drinking, the use of water filters, and proper sanitation practices. In conclusion, the use of untreated groundwater as a primary source of drinking water in East Hararghe, Oromia, Ethiopia, has significant impacts on human health, leading to widespread waterborne illnesses and posing a significant threat to public health. Sustained efforts are urgently needed to address the root causes of contamination, such as poor sanitation and hygiene practices, improper waste management, and the water sanitation supply problem, including the implementation of effective preventive measures and community-based education programs, ultimately improving public health outcomes in the region. A comprehensive approach that involves community-based water management systems, point-of-use water treatment methods, and awareness-raising campaigns can contribute to reducing the incidence of microbial contamination in the region.

Keywords: food safety, health risks, microbial contamination, untreated groundwater

Procedia PDF Downloads 82
13352 Proteomic Analysis of Excretory Secretory Antigen (ESA) from Entamoeba histolytica HM1: IMSS

Authors: N. Othman, J. Ujang, M. N. Ismail, R. Noordin, B. H. Lim

Abstract:

Amoebiasis is caused by Entamoeba histolytica and is still endemic in many parts of the tropical region worldwide. Currently, there is no available vaccine against amoebiasis. Hence, there is an urgent need to develop a vaccine. The excretory secretory antigen (ESA) of E. histolytica is a suitable biomarker for a vaccine candidate since it can modulate the host immune response. Hence, the objective of this study is to identify the proteome of the ESA towards finding a suitable biomarker for a vaccine candidate. Non-gel-based and gel-based proteomics analyses were performed to identify proteins. Two kinds of mass spectrometry with different ionization systems were utilized, i.e., LC-MS/MS (ESI) and MALDI-TOF/TOF. Then, functional protein classification analysis was performed using the PANTHER software. Combining LC-MS/MS for the non-gel-based approach and MALDI-TOF/TOF for the gel-based approach identified a total of 273 proteins from the ESA. Both systems identified 29 proteins in common, while 239 and 5 more proteins were identified by LC-MS/MS and MALDI-TOF/TOF, respectively. Functional classification analysis showed that the majority of proteins were involved in the metabolic process (24%), primary metabolic process (19%) and protein metabolic process (10%). Thus, this study has revealed the proteome of the E. histolytica ESA, and the identified proteins merit further investigation as vaccine candidates.

Keywords: E. histolytica, ESA, proteomics, biomarker

Procedia PDF Downloads 321
13351 A Novel Way to Create Qudit Quantum Error Correction Codes

Authors: Arun Moorthy

Abstract:

Quantum computing promises to provide algorithmic speedups for a number of tasks; however, similar to classical computing, effective error-correcting codes are needed. Current quantum computers require costly equipment to control each particle, so having fewer particles to control is ideal. Although traditional quantum computers are built using qubits (2-level systems), qudits (systems with more than 2 levels) are appealing since they can provide an equivalent computational space using fewer particles, meaning fewer particles need to be controlled. Currently, qudit quantum error-correction codes are available for different-level qudit systems; however, these codes sometimes have overly specific constraints. When building a qudit system, it is important for researchers to have access to many codes to satisfy their requirements. This project addresses two methods to increase the number of quantum error-correcting codes available to researchers. The first method is generating new codes for a given set of parameters. The second method is generating new error-correction codes by using existing codes as a starting point to generate codes for another level (e.g., a 5-level system code on a 2-level system). So, this project builds a website that researchers can use to generate new error-correction codes or codes based on existing codes.

Keywords: qudit, error correction, quantum, qubit

Procedia PDF Downloads 140
13350 Effective Planning of Public Transportation Systems: A Decision Support Application

Authors: Ferdi Sönmez, Nihal Yorulmaz

Abstract:

Decision making on the proper planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, adequately fair overall travel times should be provided. In this fashion, other benefits such as lower traffic congestion, improved road safety, and lower noise and atmospheric pollution may be earned. The congestion that comes with the increasing demand for public transportation is becoming a part of our lives and making residents’ lives difficult. Hence, regulations should be made to reduce this congestion. To provide a constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. In this study, it is aimed to design and implement a Decision Support System (DSS) application to determine the optimal bus stop places for public transport in Istanbul, which is one of the biggest and oldest cities in the world. The required information was gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using the most realistic values available. The cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines and 110 stops are used. The user cost of each station and the operator cost incurred on the lines are calculated. Components such as cost, security and noise pollution are considered as significant factors affecting the solution of the set covering problem, which is used for identifying and locating the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article by the corresponding author. Model results are presented with the intent of providing decision support to specialists on locating stops effectively.
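
Since the abstract frames stop location as a set covering problem, the sketch below shows a greedy weighted set-cover heuristic that picks candidate stops until every demand point is covered; the candidate stops, coverage sets, and costs are invented toy data, and the paper's actual model additionally weighs user and operator costs through bi-level optimization.

```python
# Greedy weighted set cover: choose candidate stops until all demand points are covered.
demand_points = set(range(1, 11))

# Candidate stop -> (construction/operating cost, demand points within walking distance)
candidates = {
    "A": (3.0, {1, 2, 3}),
    "B": (2.0, {3, 4}),
    "C": (4.0, {4, 5, 6, 7}),
    "D": (3.5, {6, 7, 8}),
    "E": (2.5, {8, 9, 10}),
    "F": (5.0, {1, 5, 9}),
}

uncovered, chosen = set(demand_points), []
while uncovered:
    # Pick the stop with the lowest cost per newly covered demand point.
    best = min(
        (s for s in candidates if candidates[s][1] & uncovered),
        key=lambda s: candidates[s][0] / len(candidates[s][1] & uncovered),
    )
    chosen.append(best)
    uncovered -= candidates[best][1]

total_cost = sum(candidates[s][0] for s in chosen)
print("chosen stops:", chosen, " total cost:", total_cost)
```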

Keywords: operator cost, bi-level optimization model, user cost, urban transportation

Procedia PDF Downloads 223
13349 Power Control in Solar Battery Charging Station Using Fuzzy Decision Support System

Authors: Krishnan Manickavasagam, Manikandan Shanmugam

Abstract:

Clean and abundant renewable energy sources (RES) such as solar energy are seen as the best solution to replace conventional energy sources. Unpredictable power generation is a major issue in the penetration of solar energy, as the power generated is governed by the irradiance received. Controlling the power delivered from solar PV (SPV) panels to the battery and load is a challenging task. In this paper, the power flow from SPV to the load and energy storage device (ESD) is controlled by a fuzzy decision support system (FDSS) based on the availability of solar irradiation. The results show that the FDSS implemented with the energy management system (EMS) is capable of managing power within the area and, if excess power is available, sharing it with the neighboring area.

Keywords: renewable energy sources, fuzzy decision support system, solar photovoltaic, energy storage device, energy management system

Procedia PDF Downloads 86
13348 Portable System for the Acquisition and Processing of Electrocardiographic Signals to Obtain Different Metrics of Heart Rate Variability

Authors: Daniel F. Bohorquez, Luis M. Agudelo, Henry H. León

Abstract:

Heart rate variability (HRV) is defined as the temporary variation between heartbeats or RR intervals (distance between R waves in an electrocardiographic signal). This distance is currently a recognized biomarker. With the analysis of the distance, it is possible to assess the sympathetic and parasympathetic nervous systems. These systems are responsible for the regulation of the cardiac muscle. The analysis allows health specialists and researchers to diagnose various pathologies based on this variation. For the acquisition and analysis of HRV taken from a cardiac electrical signal, electronic equipment and analysis software that work independently are currently used. This complicates and delays the process of interpretation and diagnosis. With this delay, the health condition of patients can be put at greater risk. This can lead to an untimely treatment. This document presents a single portable device capable of acquiring electrocardiographic signals and calculating a total of 19 HRV metrics. This reduces the time required, resulting in a timelier intervention. The device has an electrocardiographic signal acquisition card attached to a microcontroller capable of transmitting the cardiac signal wirelessly to a mobile device. In addition, a mobile application was designed to analyze the cardiac waveform. The device calculates the RR and different metrics. The application allows a user to visualize in real-time the cardiac signal and the 19 metrics. The information is exported to a cloud database for remote analysis. The study was performed under controlled conditions in the simulated hospital of the Universidad de la Sabana, Colombia. A total of 60 signals were acquired and analyzed. The device was compared against two reference systems. The results show a strong level of correlation (r > 0.95, p < 0.05) between the 19 metrics compared. Therefore, the use of the portable system evaluated in clinical scenarios controlled by medical specialists and researchers is recommended for the evaluation of the condition of the cardiac system.
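
As an illustration of the kind of HRV metrics the device computes from RR intervals, the snippet below derives a few standard time-domain measures (mean heart rate, SDNN, RMSSD, pNN50) from a short synthetic RR series; the RR values are made up, and the device's full set of 19 metrics also includes measures not shown here.

```python
import numpy as np

# Synthetic RR intervals in milliseconds (illustrative, not recorded data).
rr = np.array([812, 845, 790, 860, 835, 802, 878, 820, 795, 850], dtype=float)

diff = np.diff(rr)
metrics = {
    "mean_hr_bpm": 60000.0 / rr.mean(),               # average heart rate
    "sdnn_ms": rr.std(ddof=1),                        # overall HRV
    "rmssd_ms": np.sqrt(np.mean(diff ** 2)),          # short-term HRV
    "pnn50_pct": 100.0 * np.mean(np.abs(diff) > 50),  # % successive diffs > 50 ms
}
for name, value in metrics.items():
    print(f"{name}: {value:.1f}")
```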

Keywords: biological signal analysis, heart rate variability (HRV), HRV metrics, mobile app, portable device

Procedia PDF Downloads 170
13347 A Contrastive Analysis on Hausa and Yoruba Adjectival Phrases

Authors: Abubakar Maikudi

Abstract:

Contrastive analysis is the method of analyzing the structure of any two languages with a view to determining the possible differential aspects of their systems, irrespective of their genetic affinity or level of development. Contrastive analysis of two languages becomes useful when it adequately describes the sound structure and grammatical structure of the two languages, with comparative statements giving emphasis to the compatible items in the two systems. This research work uses contrastive analysis theory to analyze adjectives and adjectival phrases in the Hausa and Yorùbá languages. The Hausa language belongs to the Chadic family of the Afro-Asiatic phylum, while the Yorùbá language belongs to the Benue-Congo family of the Niger-Congo phylum. The findings of the research clearly demonstrated that there are significant similarities in the adjectival phrase constructions of the two languages, i.e., nominal (Head) and post-nominal (Post-Head) use of the adjective, predicative function of the adjective, use of the reduplicative adjective, use of the comparative and superlative adjective, etc. However, there are dissimilarities in the adjectival phrases of the two languages in gender/number agreement and in the pre-nominal (Pre-Head) use of adjectives.

Keywords: genetic affinity, contrastive analysis, phylum, pre-head, post-head

Procedia PDF Downloads 206
13346 Selection of Soil Quality Indicators of Rice Cropping Systems Using Minimum Data Set Influenced by Imbalanced Fertilization

Authors: Theresa K., Shanmugasundaram R., Kennedy J. S.

Abstract:

Nutrient supplements are indispensable for raising crops and reaping the desired productivity. The nutrient imbalance between replenishment and crop uptake is addressed through the input of inorganic fertilizers. Excessive dumping of inorganic nutrients in soil causes stagnation and decline in yield. An imbalanced N-P-K ratio in the soil exacerbates and agitates soil ecosystems. The study evaluated the fertilization practices of conventional (CF), organic, and Integrated Nutrient Management (INM) systems on soil quality using key indicators and soil quality indices. Twelve rice farming fields under a monocropping sequence in the Thondamuthur block of Coimbatore district were fixed, of which ten fields followed conventional cultivation practices and one field each was cultivated under organic farming and INM; their physical, chemical and biological properties were studied for four cropping seasons to determine the soil quality index (SQI). SQI was computed for the conventional, organic and INM fields. Comparing conventional farming (CF) with organic and INM, CF recorded a lower soil quality index, while the organic and INM fields registered higher SQI values of 0.99 and 0.88, respectively. CF₄, which received a super-optimal dose of N (250%), showed a lower SQI value (0.573) as well as a lower yield (3.20 t ha⁻¹), whereas CF₆, which received 125% N, recorded the highest SQI (0.715) and yield (6.20 t ha⁻¹). Likewise, most of the CFs received N beyond the 125% level, except CF₃ and CF₉, which recorded lower yields. CFs that received super-optimal P, in the order CF₆ & CF₇ > CF₁ & CF₁₀, recorded lower yields, except for CF₆. Super-optimal K application also resulted in lower yields in CF₄, CF₇ and CF₉.
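
For readers unfamiliar with how a soil quality index is assembled from scored indicators, the sketch below scores a few indicators with simple "more is better" and "optimum range" functions and combines them with weights into an SQI between 0 and 1; the indicators, scoring curves, and weights are illustrative assumptions, not the ones used in this study.

```python
import numpy as np

def more_is_better(x, x_min, x_max):
    """Linear score rising from 0 at x_min to 1 at x_max."""
    return float(np.clip((x - x_min) / (x_max - x_min), 0.0, 1.0))

def optimum_range(x, low, high, width):
    """Score 1 inside [low, high], tapering linearly to 0 over 'width' outside it."""
    if low <= x <= high:
        return 1.0
    dist = (low - x) if x < low else (x - high)
    return float(np.clip(1.0 - dist / width, 0.0, 1.0))

# Illustrative field measurements and weights (not the study's minimum data set).
field = {"organic_matter": 1.8, "ph": 5.4, "microbial_biomass": 210, "base_saturation": 85}
scores = {
    "organic_matter": more_is_better(field["organic_matter"], 0.5, 3.0),
    "ph": optimum_range(field["ph"], 6.0, 7.5, width=2.0),
    "microbial_biomass": more_is_better(field["microbial_biomass"], 50, 400),
    "base_saturation": more_is_better(field["base_saturation"], 20, 90),
}
weights = {"organic_matter": 0.35, "ph": 0.25, "microbial_biomass": 0.25, "base_saturation": 0.15}

sqi = sum(weights[k] * scores[k] for k in scores)
print({k: round(v, 2) for k, v in scores.items()}, " SQI =", round(sqi, 3))
```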

Keywords: rice cropping system, soil quality indicators, imbalanced fertilization, yield

Procedia PDF Downloads 135
13345 Mathematical Modelling and AI-Based Degradation Analysis of the Second-Life Lithium-Ion Battery Packs for Stationary Applications

Authors: Farhad Salek, Shahaboddin Resalati

Abstract:

The production of electric vehicles (EVs) featuring lithium-ion battery technology has substantially escalated over the past decade, demonstrating a steady and persistent upward trajectory. The imminent retirement of electric vehicle (EV) batteries after approximately eight years underscores the critical need for their redirection towards recycling, a task complicated by the current inadequacy of recycling infrastructures globally. A potential solution for such concerns involves extending the operational lifespan of electric vehicle (EV) batteries through their utilization in stationary energy storage systems during secondary applications. Such adoptions, however, require addressing the safety concerns associated with batteries’ knee points and thermal runaways. This paper develops an accurate mathematical model representative of the second-life battery packs from a cell-to-pack scale using an equivalent circuit model (ECM) methodology. Neural network algorithms are employed to forecast the degradation parameters based on the EV batteries' aging history to develop a degradation model. The degradation model is integrated with the ECM to reflect the impacts of the cycle aging mechanism on battery parameters during operation. The developed model is tested under real-life load profiles to evaluate the life span of the batteries in various operating conditions. The methodology and the algorithms introduced in this paper can be considered the basis for Battery Management System (BMS) design and techno-economic analysis of such technologies.
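
To make the equivalent-circuit-model step concrete, the sketch below simulates the terminal voltage of a single cell with a first-order Thevenin ECM (OCV source, series resistance, one RC pair) under constant-current discharge; the OCV curve and parameter values are generic placeholders, not the identified parameters of the second-life packs.

```python
import numpy as np

# Generic first-order Thevenin ECM parameters (placeholders, not fitted values).
capacity_ah, r0, r1, c1 = 50.0, 1.5e-3, 0.8e-3, 20_000.0

def ocv(soc):
    """Rough open-circuit-voltage curve for a lithium-ion cell (illustrative)."""
    return 3.0 + 1.2 * soc - 0.3 * np.exp(-12 * soc)

def simulate(current_a, t_end_s, dt=1.0, soc0=0.9):
    soc, v_rc, log = soc0, 0.0, []
    for t in np.arange(0.0, t_end_s, dt):
        soc -= current_a * dt / (capacity_ah * 3600.0)          # coulomb counting
        v_rc += dt * (-v_rc / (r1 * c1) + current_a / c1)       # RC-branch dynamics
        v_term = ocv(soc) - current_a * r0 - v_rc               # terminal voltage
        log.append((t, soc, v_term))
    return np.array(log)

trace = simulate(current_a=50.0, t_end_s=1800)                  # 1C discharge, 30 min
print(f"final SOC={trace[-1, 1]:.3f}, terminal voltage={trace[-1, 2]:.3f} V")
```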

Keywords: second life battery, electric vehicles, degradation, neural network

Procedia PDF Downloads 37
13344 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps are now able to provide high quality graphics by utilizing the underlying graphic processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser running on a CPU issues GL commands, which render the images to be displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience will be determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience because it is already deteriorated by the retarded execution of GPU commands. Consequently, it would be desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach incurs two technical challenges: identification of the bottleneck resource and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which resource is the bottleneck, if one exists. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of user experience and is purely executed by the CPU. The proposed scheme uses the weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frame-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently, the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, and it worsened FPS by only 1.07% on average.
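
The decision logic described above can be summarized in a short sketch: maintain a weighted moving average of the Window Manager's CPU utilization and adjust the CPU or GPU frequency level depending on which side appears to be the bottleneck. The 90% and 60% thresholds come from the abstract; the class interface, step sizes, and smoothing weight are illustrative assumptions rather than the authors' implementation.

```python
class BottleneckAwareGovernor:
    """Sketch of the decision logic in the abstract (interface and step sizes assumed)."""

    def __init__(self, cpu_levels=8, gpu_levels=6, alpha=0.5):
        self.cpu_level = cpu_levels - 1      # start both units at their highest level
        self.gpu_level = gpu_levels - 1
        self.alpha = alpha                   # smoothing weight (assumed value)
        self.wm_util_avg = 0.75              # neutral starting point between thresholds

    def update(self, wm_util_sample):
        # Weighted average prevents excessive sensitivity and fluctuation.
        self.wm_util_avg = self.alpha * wm_util_sample + (1 - self.alpha) * self.wm_util_avg
        if self.wm_util_avg > 0.90:
            # Per the abstract: CPU capability decides the user experience here,
            # and the scheme decreases the CPU performance level by one step.
            self.cpu_level = max(self.cpu_level - 1, 0)
        elif self.wm_util_avg < 0.60:
            # Per the abstract: the bottleneck usually lies in the GPU,
            # so the GPU frequency is lowered gradually.
            self.gpu_level = max(self.gpu_level - 1, 0)
        return self.cpu_level, self.gpu_level

gov = BottleneckAwareGovernor()
for sample in [0.95, 0.97, 0.96, 0.55, 0.52, 0.50]:
    print("CPU/GPU levels:", gov.update(sample))
```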

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 178
13343 Identification of Watershed Landscape Character Types in Middle Yangtze River within Wuhan Metropolitan Area

Authors: Huijie Wang, Bin Zhang

Abstract:

In China, the middle reaches of the Yangtze River are well developed, boasting a wealth of different types of watershed landscape. In this regard, landscape character assessment (LCA) can serve as a basis for the protection, management and planning of trans-regional watershed landscape types. For this study, we chose the middle reaches of the Yangtze River in the Wuhan metropolitan area as our study site, wherein the water system exhibits a rich variety of landscape types. We analyzed trans-regional data to cluster and identify types of landscape characteristics at two levels. 55 basins were analyzed as cases, with topography, land cover and river system features as variables, in order to identify the watershed landscape character types. For the watershed landscape, drainage density and degree of curvature were specified as special variables to directly reflect the regional differences in river system features. Then, we used the principal component analysis (PCA) method and a hierarchical clustering algorithm, based on a geographic information system (GIS) and the Statistical Product and Service Solutions (SPSS) software, to obtain clusters of watershed landscape, which were divided into 8 characteristic groups. These groups highlighted the watershed landscape characteristics of different river systems as well as key landscape characteristics that can serve as a basis for targeted protection of watershed landscape characteristics, thus helping to rationally develop multi-value landscape resources and promote the coordinated development of trans-regional areas.
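
As a compact illustration of the PCA-plus-hierarchical-clustering workflow described above, the sketch below standardizes a basin-by-variable matrix, reduces it with PCA, and cuts a Ward linkage tree into eight groups; the random input matrix stands in for the real topography, land cover, and river-system variables of the 55 basins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
# Placeholder matrix: 55 basins x 7 variables (e.g., relief, slope, land-cover shares,
# drainage density, degree of curvature); real values would come from GIS layers.
X = rng.normal(size=(55, 7))

X_std = StandardScaler().fit_transform(X)
pcs = PCA(n_components=0.85).fit_transform(X_std)   # keep ~85% of the variance

Z = linkage(pcs, method="ward")                     # hierarchical clustering
groups = fcluster(Z, t=8, criterion="maxclust")     # cut the tree into 8 character groups

print("retained components:", pcs.shape[1])
print("basins per group:", np.bincount(groups)[1:])
```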

Keywords: GIS, hierarchical clustering, landscape character, landscape typology, principal component analysis, watershed

Procedia PDF Downloads 207
13342 Appraisal of Humanitarian Supply Chain Risks Using Best-Worst Method

Authors: Ali Mohaghar, Iman Ghasemian Sahebi, Alireza Arab

Abstract:

In the last decades, the increasing occurrence of human-made and natural disasters has had irreparable effects on human life. Hence, one of the important issues in humanitarian supply chain management is identifying and prioritizing the different risks and finding suitable solutions for encountering them at the time of disaster occurrence. This study is an attempt to provide a comprehensive review of humanitarian supply chain risks in a case study of Tehran Red Crescent Societies. For this purpose, the Best-Worst Method (BWM) has been used for analyzing the risks of the humanitarian supply chain. 22 risks of the humanitarian supply chain were identified based on the literature and interviews with four experts. Using the BWM, the importance of each risk was calculated. The findings showed that cultural contexts, low public awareness, and a poor education system are the most important humanitarian supply chain risks. This research provides a useful guideline for managers so that they can benefit from the results to prioritize their solutions.
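
For readers unfamiliar with the Best-Worst Method used above, the sketch below solves the linear BWM model (minimize the maximum deviation subject to the best-to-others and others-to-worst comparison constraints) with an LP solver. The three criteria and the comparison vectors are hypothetical, not the 22 risks or the experts' judgments from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example with 3 criteria; best = criterion 0, worst = criterion 2.
a_best = np.array([1.0, 3.0, 8.0])    # best-to-others comparisons (1-9 scale)
a_worst = np.array([8.0, 4.0, 1.0])   # others-to-worst comparisons
n, best, worst = 3, 0, 2

# Decision variables: [w_0, ..., w_{n-1}, xi]; the linear BWM model minimizes xi.
c = np.zeros(n + 1)
c[-1] = 1.0

A_ub, b_ub = [], []
def add_abs_constraint(expr):
    """Add |expr . w| <= xi as two linear inequalities."""
    A_ub.append(np.append(expr, -1.0))   #  expr.w - xi <= 0
    A_ub.append(np.append(-expr, -1.0))  # -expr.w - xi <= 0
    b_ub.extend([0.0, 0.0])

for j in range(n):
    e = np.zeros(n); e[best] += 1.0; e[j] -= a_best[j]
    add_abs_constraint(e)                # |w_B - a_Bj * w_j| <= xi
    e = np.zeros(n); e[j] += 1.0; e[worst] -= a_worst[j]
    add_abs_constraint(e)                # |w_j - a_jW * w_W| <= xi

A_eq = [np.append(np.ones(n), 0.0)]      # weights sum to 1
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=[1.0],
              bounds=[(0, None)] * (n + 1))
weights, xi = res.x[:n], res.x[-1]
print("weights:", np.round(weights, 3), " consistency xi:", round(xi, 3))
```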

Keywords: Best-Worst Method, humanitarian logistics, humanitarian supply chain, risk management

Procedia PDF Downloads 299
13341 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan , V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to provide higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms via matching pursuit, which identifies segments of audio data that are locally coherent with the seed atoms. The envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR-data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the best denoising basis vectors.
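
To clarify the atomic-decomposition step that produces the locally coherent segments, the sketch below runs plain matching pursuit of a short signal against a small Gabor-like dictionary; the dictionary parameters and the test signal are arbitrary, and the paper's full pipeline (envelope sampling and sparse autoencoder training) is not reproduced here.

```python
import numpy as np

def gabor_atom(n, center, freq, width):
    """Real Gabor-like atom: Gaussian-windowed cosine, unit L2 norm."""
    t = np.arange(n)
    atom = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

n = 256
dictionary = np.stack([gabor_atom(n, c, f, 12.0)
                       for c in range(16, n, 32)
                       for f in (0.02, 0.05, 0.1, 0.2)])   # (n_atoms, n)

rng = np.random.default_rng(3)
signal = 1.5 * gabor_atom(n, 80, 0.05, 12.0) - 0.8 * gabor_atom(n, 176, 0.2, 12.0)
signal += 0.05 * rng.normal(size=n)                        # additive noise

def matching_pursuit(x, D, n_iter=10):
    residual, decomposition = x.copy(), []
    for _ in range(n_iter):
        correlations = D @ residual
        k = np.argmax(np.abs(correlations))                # most coherent atom
        coeff = correlations[k]
        residual = residual - coeff * D[k]
        decomposition.append((k, coeff))
    return decomposition, residual

decomp, residual = matching_pursuit(signal, dictionary)
snr_db = 10 * np.log10(np.sum(signal ** 2) / np.sum(residual ** 2))
print("top atoms:", [(k, round(c, 2)) for k, c in decomp[:3]], " SNR:", round(snr_db, 1), "dB")
```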

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 233
13340 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI) between neurosurgeons and radiologists. To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the scoring systems proposed by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist's. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and the neurosurgeon's correspondence from perioperative clinic reviews. Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. The scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 1.81 (95%CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95%CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 3.41 (95%CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95%CI 3.01-4.65, p=0.042). Conclusion: Composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI had a greater correlation with pain-free outcomes 1 year post-MVD than one based on the radiologist's.

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 60
13339 Qatari Licensure System as Perceived by Teachers and School Leaders

Authors: Abdullah Abu-Tineh, Hissa Sadiq, Fatma Al-Mutawah, Youmen Chaaban

Abstract:

The past 20 years have seen a proliferation of empirical research into various licensure systems. Extensive quantitative work has investigated these systems of appraisal in different countries, but there is far less research on the implementation of the Qatari licensure system and the adoption of professional standards. In this paper, we provide a quantitative and qualitative descriptive look at the process that moves educators from their point of entry into the profession through their certification as accomplished professionals. Specifically, we focus on the perceptions of teachers and school leaders regarding the licensure system currently adopted by the Ministry of Education and Higher Education in Qatar. The paper aims to inform progress towards a system of reliable, valid, and nationally appropriate teacher and school leader evaluation procedures. Such a system can support decision-making based on a common, comprehensive set of standards that ensures the placement of only the most effective educators in Qatari schools. This paper was made possible by NPRP grant # (NPRP7-1224-5-178) from the Qatar National Research Fund (a member of Qatar Foundation) to Abdullah M. Abu-Tineh. The statements made herein are solely the responsibility of the author.

Keywords: licensure system, professional standards, professional portfolio, educator voice

Procedia PDF Downloads 214