Search results for: Green IT-outsourcing Assurance Model (GITAM)
8348 Determinants of Household Food Security in Addis Ababa City Administration
Authors: Estibe Dagne Mekonnen
Abstract:
In recent years, the prevalence of undernourishment was 30 percent for sub-Saharan Africa, compared with 16 percent for Asia and the Pacific (Ali, 2011). In Ethiopia, almost 40 percent of the total population and 57 percent of the Addis Ababa population live below the international poverty line of US$ 1.25 per day (UNICEF, 2009). This study aims to analyze the determinants of household food security in the Addis Ababa city administration. Primary data were collected from a survey of 256 households in the selected sub-cities, namely Addis Ketema, Arada, and Kolfe Keranio, in the year 2022. Both purposive and multi-stage cluster random sampling procedures were employed to select study areas and respondents. Descriptive statistics and an ordered logistic regression model were used to test the formulated hypotheses. The results reveal that, of the total sampled households, 25% were food secure, 13% were mildly food insecure, 26% were moderately food insecure, and 36% were severely food insecure. The study indicates that household family size, house ownership, household income, household food source, household asset possession, household awareness of inflation, household access to social protection programs, household access to credit and saving, and household access to training and supervision on food security have a positive and significant effect on the likelihood of household food security. However, the marital status of the household head, the employment sector of the household head, the dependency ratio, and the household's non-food expenditure have a negative and significant influence on household food security status. The study finally suggests that the government, in collaboration with financial institutions and NGOs, should work on sustaining household food security by creating awareness, providing credit, facilitating rural-urban linkages between producers and consumers, and improving urban infrastructure. Moreover, the government should also work closely with and monitor consumer goods suppliers and, if possible, find a way to subsidize consumable goods for the most insecure households so that they become food secure. Last but not least, keeping the country's peace will play a crucial role in sustaining food security.
Keywords: determinants, household, food security, ordered logit model, Addis Ababa
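As an illustration of the estimation approach described above, the sketch below fits an ordered (proportional-odds) logit of the four food-security categories with statsmodels; the file name and column names are hypothetical, not the study's actual data.

```python
# Illustrative sketch (not the authors' code): ordered logit of household
# food-security status with statsmodels. Column names are hypothetical.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("household_survey.csv")  # hypothetical survey export

# Four ordered outcome levels, from worst to best
levels = ["severely_insecure", "moderately_insecure", "mildly_insecure", "food_secure"]
df["status"] = pd.Categorical(df["status"], categories=levels, ordered=True)

predictors = ["family_size", "house_ownership", "income", "asset_index",
              "dependency_ratio", "credit_access", "nonfood_expenditure"]

model = OrderedModel(df["status"], df[predictors], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())   # coefficients and category thresholds
```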
Procedia PDF Downloads 74
8347 The Mental Workload of Intensive Care Unit Nurses in Performing Human-Machine Tasks: A Cross-Sectional Survey
Authors: Yan Yan, Erhong Sun, Lin Peng, Xuchun Ye
Abstract:
Aims: The present study aimed to explore Intensive Care Unit (ICU) nurses' mental workload (MWL), and the factors associated with it, in performing human-machine tasks. Background: A wide range of emerging technologies have penetrated the field of health care, and ICU nurses are facing a dramatic increase in nursing human-machine tasks. However, there is still a paucity of literature reporting on the general MWL of ICU nurses performing human-machine tasks and the associated influencing factors. Methods: A cross-sectional survey was employed. The data were collected from January to February 2021 from 9 tertiary hospitals in 6 provinces (Shanghai, Gansu, Guangdong, Liaoning, Shandong, and Hubei). Two-stage sampling was used to recruit eligible ICU nurses (n=427). The data were collected with an electronic questionnaire comprising sociodemographic characteristics and measures of MWL, self-efficacy, system usability, and task difficulty. Univariate analysis, two-way analysis of variance (ANOVA), and a linear mixed model were used for data analysis. Results: Overall, the mental workload of ICU nurses in performing human-machine tasks was medium (score 52.04 on a 0-100 scale). Among the typical nursing human-machine tasks selected, the MWL of ICU nurses in completing first aid and life support tasks (‘Using a defibrillator to defibrillate’ and ‘Use of ventilator’) was significantly higher than for the others (p < .001). ICU nurses’ MWL in performing human-machine tasks was also associated with age (p = .001), professional title (p = .002), years of working in ICU (p < .001), willingness to study emerging technology actively (p = .006), task difficulty (p < .001), and system usability (p < .001). Conclusion: The MWL of ICU nurses is at a moderate level in the context of a rapid increase in nursing human-machine tasks. However, there are significant differences in MWL when performing different types of human-machine tasks, and MWL can be influenced by a combination of factors. Nursing managers need to develop intervention strategies in multiple ways. Implications for practice: Multidimensional approaches are required to perform human-machine tasks better, including enhancing nurses' willingness to learn emerging technologies actively, developing training strategies that vary with tasks, and identifying obstacles in the process of human-machine system interaction.
Keywords: mental workload, nurse, ICU, human-machine, tasks, cross-sectional study, linear mixed model, China
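A minimal sketch of the kind of linear mixed model reported above, with hospital as the grouping (random-intercept) factor; the data file and variable names are assumptions for illustration only.

```python
# Hedged sketch (assumed data layout, not the study's code): linear mixed model
# of mental-workload scores with nurses nested in hospitals.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("icu_mwl_survey.csv")  # hypothetical file, one row per nurse-task rating

md = smf.mixedlm(
    "mwl ~ age + professional_title + icu_years + willingness + task_difficulty + system_usability",
    data,
    groups=data["hospital"],   # random intercept per tertiary hospital
)
fit = md.fit()
print(fit.summary())
```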
Procedia PDF Downloads 70
8346 Capacity for Care: A Management Model for Increasing Animal Live Release Rates, Reducing Animal Intake and Euthanasia Rates in an Australian Open Admission Animal Shelter
Authors: Ann Enright
Abstract:
More than ever, animal shelters need to identify ways to reduce the number of animals entering shelter facilities and the incidence of euthanasia. Managing animal overpopulation using euthanasia can have detrimental health and emotional consequences for the shelter staff involved. There are also community expectations with moral and financial implications to consider. To achieve the goals of reducing animal intake and the incidence of euthanasia, shelter best practice involves combining programs, procedures and partnerships to increase live release rates (LRR), reduce the incidence of disease, length of stay (LOS) and shelter intake whilst overall remaining financially viable. Analysing daily processes, tracking outcomes and implementing simple strategies enabled shelter staff to more effectively focus their efforts and achieve amazing results. The objective of this retrospective study was to assess the effect of implementing the capacity for care (C4C) management model. Data focusing on the average daily number of animals on site for a two year period (2016 – 2017) was exported from a shelter management system, Customer Logic (CL) Vet to Excel for manipulation and comparison. Following the implementation of C4C practices the average daily number of animals on site was reduced by >50%, (2016 average 103 compared to 2017 average 49), average LOS reduced by 50% from 8 weeks to 4 weeks and incidence of disease reduced from ≥ 70% to less than 2% of the cats on site at the completion of the study. The total number of stray cats entering the shelter due to council contracts reduced by 50% (486 to 248). Improved cat outcomes were attributed to strategies that increased adoptions and reduced euthanasia of poorly socialized cats, including foster programs. To continue to achieve improvements in LRR and LOS, strategies to decrease intake further would be beneficial, for example, targeted sterilisation programs. In conclusion, the study highlighted the benefits of using C4C as a management tool, delivering a significant reduction in animal intake and euthanasia with positive emotional, financial and community outcomes.Keywords: animal welfare, capacity for care, cat, euthanasia, length of stay, managed intake, shelter
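The census and length-of-stay metrics tracked under C4C can be derived from an exported record set with a few lines of pandas; this is a hedged sketch with assumed column names, not the Customer Logic Vet export format.

```python
# Sketch: average daily animals on site and length of stay from intake/outcome records.
import pandas as pd

records = pd.read_csv("shelter_export.csv", parse_dates=["intake_date", "outcome_date"])
records["los_days"] = (records["outcome_date"] - records["intake_date"]).dt.days

# Daily census: count animals whose stay spans each calendar day of the study period
days = pd.date_range("2016-01-01", "2017-12-31", freq="D")
census = pd.Series(
    [((records["intake_date"] <= d) & (records["outcome_date"] >= d)).sum() for d in days],
    index=days,
)

print("Average daily animals on site by year:")
print(census.groupby(census.index.year).mean().round(1))
print("Average length of stay (weeks):", round(records["los_days"].mean() / 7, 1))
```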
Procedia PDF Downloads 139
8345 Modeling of the Cavitation by Bubble around a NACA0009 Profile
Authors: L. Hammadi, D. Boukhaloua
Abstract:
In this study, a numerical model was developed to predict cavitation phenomena around a NACA0009 profile. The Rayleigh-Plesset equation and a modified Rayleigh-Plesset equation are used to model bubble cavitation around the NACA0009 profile. The study shows that the pressure distributions on the extrados and intrados of the profile are the same for an angle of incidence equal to zero. The study also shows that increasing the angle of incidence makes it possible to differentiate the pressures on the intrados and the extrados.
Keywords: cavitation, NACA0009 profile, flow, pressure coefficient
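For context, the sketch below numerically integrates the classical (unmodified) Rayleigh-Plesset equation for a single bubble with scipy; the fluid properties, nucleus radius, and local pressure are illustrative assumptions, not the paper's values.

```python
# Generic Rayleigh-Plesset integration sketch (assumed parameters).
from scipy.integrate import solve_ivp

rho, mu, sigma = 998.0, 1.0e-3, 0.0725   # water: density, viscosity, surface tension (SI)
p_v = 2339.0                              # vapour pressure at ~20 deg C [Pa]
p_loc = 1500.0                            # assumed local pressure below p_v, so the bubble grows
R0 = 5.0e-4                               # assumed initial bubble (nucleus) radius [m]

def rayleigh_plesset(t, y):
    R, dR = y
    # rho*(R*R'' + 1.5*R'^2) = p_v - p_loc - 2*sigma/R - 4*mu*dR/R
    ddR = ((p_v - p_loc - 2.0 * sigma / R - 4.0 * mu * dR / R) / rho - 1.5 * dR**2) / R
    return [dR, ddR]

sol = solve_ivp(rayleigh_plesset, (0.0, 1.0e-4), [R0, 0.0], max_step=1.0e-7)
print("bubble radius after 0.1 ms [m]:", sol.y[0, -1])
```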
Procedia PDF Downloads 181
8344 Interplay of Physical Activity, Hypoglycemia, and Psychological Factors: A Longitudinal Analysis in Diabetic Youth
Authors: Georges Jabbour
Abstract:
Background and aims: This two-year follow-up study explores the long-term sustainability of physical activity (PA) levels in young people with type 1 diabetes, focusing on the relationship between PA, hypoglycemia, and behavioral scores. The literature highlights the importance of PA and its health benefits, as well as the barriers to engaging in PA practices. Studies have shown that individuals with high levels of vigorous physical activity have higher fear of hypoglycemia (FOH) scores and more hypoglycemia episodes. Considering that hypoglycemia episodes are a major barrier to physical activity, and many studies reported a negative association between PA and high FOH scores, it cannot be guaranteed that those experiencing hypoglycemia over a long period will remain active. Building on that, the present work assesses whether high PA levels, despite elevated hypoglycemia risk, can be maintained over time. The study tracks PA levels at one and two years, correlating them with hypoglycemia instances and Fear of Hypoglycemia (FOH) scores. Materials and methods: A self-administered questionnaire was completed by 61 youth with T1D, and their PA was assessed. Hypoglycemia episodes, fear of hypoglycemia scores and HbA1C levels were collected. All assessments were realized at baseline (visit 0: V0), one year (V1) and two years later (V2). For the purpose of the present work, we explore the relationships between PA levels, hypoglycemia episodes, and FOH scores at each time point. We used multiple linear regression to model the mean outcomes for each exposure of interest. Results: Findings indicate no changes in total moderate to vigorous PA (MVPA) and VPA levels among visits, and HbA1c (%) was negatively correlated with the total amount of VPA per day in minutes (β= -0.44; p=0.01, β= -0.37; p=0.04, and β= -0.66; p=0.01 for V0, V1, and V2, respectively). Our linear regression model reported a significant negative correlation between VPA and FOH across the visits (β=-0.59, p=0.01; β= -0.44, p=0.01; and β= -0.34, p=0.03 for V0, V1, and V2, respectively), and HbA1c (%) was influenced by both the number of hypoglycemic episodes and FOH score at V2 (β=0.48; p=0.02 and β=0.38; p=0.03, respectively). Conclusion: The sustainability of PA levels and HbA1c (%) in young individuals with type 1 diabetes is influenced by various factors, including fear of hypoglycemia. Understanding these complex interactions is essential for developing effective interventions to promote sustained PA levels in this population. Our results underline the necessity of a multi-strategic approach to promoting active lifestyles among diabetic youths. This approach should synergize PA enhancement with vigilant glucose monitoring and effective FOH management.Keywords: physical activity, hypoglycemia, fear of hypoglycemia, youth
Procedia PDF Downloads 26
8343 Composition Dependence of Ni 2p Core Level Shift in Fe1-xNix Alloys
Authors: Shakti S. Acharya, V. R. R. Medicherla, Rajeev Rawat, Komal Bapna, Deepnarayan Biswas, Khadija Ali, K. Maiti
Abstract:
The discovery of the invar effect in the Fe1-xNix alloy with 35% Ni concentration has stimulated enormous experimental and theoretical research. Elemental Fe and low-Ni-concentration Fe1-xNix alloys, which possess a body-centred cubic (bcc) crystal structure at ambient temperature and pressure, transform to the hexagonally close-packed (hcp) phase at around 13 GPa. Magnetic order was found to be absent at 11 K for the Fe92Ni8 alloy when subjected to a high pressure of 26 GPa. Density functional theoretical calculations predicted substantial hyperfine magnetic fields, but these were not observed in Mössbauer spectroscopy. The bulk modulus of fcc Fe1-xNix alloys with Ni concentrations of more than 35% is found to be independent of pressure. The magnetic moment of Fe is also found to be almost the same in these alloys from 4 to 10 GPa. Fe1-xNix alloys exhibit a complex microstructure which is formed by a series of complex phase transformations, such as martensitic transformation, spinodal decomposition, ordering, monotectoid reaction, and eutectoid reaction, at temperatures below 400°C. Despite the existence of several theoretical models, the field is still in its infancy, lacking full knowledge of the anomalous properties exhibited by these alloys. Fe1-xNix alloys have been prepared by arc melting the high-purity constituent metals in an argon ambient. These alloys have been annealed at around 300°C in a vacuum-sealed quartz tube for two days to make the samples homogeneous. The alloys have been structurally characterized by x-ray diffraction and were found to exhibit a transition from bcc to fcc for x > 0.3. Ni 2p core levels of the alloys have been measured using high-resolution (0.45 eV) x-ray photoelectron spectroscopy. The Ni 2p core level shifts to lower binding energy with respect to that of pure Ni metal, giving rise to negative core level shifts (CLSs). Measured CLSs exhibit a linear dependence on composition in the fcc region (x > 0.3) and were found to deviate slightly in the bcc region (x < 0.3). The ESCA potential model fails to correlate CLSs with site potentials or charges in metallic alloys. CLSs in these alloys occur mainly due to the shift of the valence bands with composition caused by intra-atomic charge redistribution.
Keywords: arc melting, core level shift, ESCA potential model, valence band
Procedia PDF Downloads 380
8342 System Engineering Design of Offshore Oil Drilling Production Platform from Marine Environment
Authors: C. Njoku Paul
Abstract:
This paper deals with the systems engineering design of an offshore oil drilling production platform in the Nigerian marine environment. An engineering design model of the distribution and accumulation of petroleum hydrocarbons discharged from an offshore production platform into the marine environment, and of the sources of its impact, is treated.
Keywords: design of offshore oil drilling production platform, marine, environment, petroleum hydrocarbons
Procedia PDF Downloads 541
8341 Convergence Analysis of Reactive Power Based Schemes Used in Sensorless Control of Induction Motors
Authors: N. Ben Si Ali, N. Benalia, N. Zerzouri
Abstract:
Many electronic drives for induction motor control are based on sensorless technologies. Speed and torque control is usually attained by means of a speed or position sensor, which requires additional mounting space, reduces reliability, and increases cost. This paper seeks to analyze the dynamic performance and the sensitivity to motor parameter changes of a reactive-power-based technique used in the sensorless control of induction motors. The validity of the theoretical results is verified by simulation.
Keywords: adaptive observers, model reference adaptive system, RP-based estimator, sensorless control, stability analysis
Procedia PDF Downloads 547
8340 FEM Analysis of an Occluded Ear Simulator with Narrow Slit Pathway
Authors: Manabu Sasajima, Takao Yamaguchi, Yoshio Koike, Mitsuharu Watanabe
Abstract:
This paper discusses the propagation of sound waves in air, specifically in narrow rectangular pathways of an occluded-ear simulator for acoustic measurements. In narrow pathways, both the speed of sound and the phase of the sound waves are affected by the damping of the air viscosity. Herein, we propose a new finite-element method (FEM) that considers the effects of the air viscosity. The method was developed as an extension of existing FEMs for porous, sound-absorbing materials. The results of a numerical calculation for a three-dimensional ear-simulator model using the proposed FEM were validated by comparing with theoretical lumped-parameter modeling analysis and standard values.Keywords: ear simulator, FEM, simulation, viscosity
Procedia PDF Downloads 444
8339 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories
Authors: Berna Çalışkan
Abstract:
Integrated water management is a collaborative approach to planning that brings together the institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas, and stream channel structure. It encourages collaboration where it will be beneficial and links water planning to other planning processes that contribute to improving sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternate routes within the corridor, as well as decisions such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on such elements as environmental considerations, the class of the proposed highway, the state of land use development, and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from the digital elevation model (DEM). A case study was carried out using the Arc Hydro extension within ArcGIS in the study area. It provides the means for processing and presenting a spatially referenced stream model. The study area's flow routing, stream levels, segmentation, and drainage point processing can be obtained using the DEM as the 'input surface raster'. These processes integrate the fields of hydrologic and engineering research and environmental modeling in a multi-disciplinary program designed to provide decision makers with a science-based understanding of, and innovative tools for, the development of an interdisciplinary and multi-level approach. This research helps to manage transport project planning and construction phases by analyzing surface water flow, high-level streams, and wetland sites for transportation infrastructure planning, implementation, maintenance, monitoring, and long-term evaluation, to better face the challenges associated with effective management and to deal with low, medium, and high levels of impact. Transport projects are frequently perceived as critical to the 'success' of major urban, metropolitan, regional and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depend on sufficient water resources management. The results of our research provide a workflow to build a stream network and show how a suitability map can be classified according to stream levels. Transportation projects can be established, developed, incorporated and delivered effectively by selecting the best locations, thereby reducing construction and maintenance costs and providing cost-effective solutions for drainage, landslide and flood control. According to the model findings, field study should be done to fill gaps and check for errors. In future research, this study can be extended to determine and prevent possible damage to sensitive areas and vulnerable zones, supported by field investigations.
Keywords: water resources management, hydro tool, water protection, transportation
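A minimal, generic sketch of the D8 flow-direction step that underlies Arc Hydro-style stream derivation from a DEM is given below; it is a plain NumPy illustration, not the ArcGIS tool itself.

```python
# Generic D8 flow-direction sketch: each interior cell drains to its steepest downslope neighbour.
import numpy as np

def d8_flow_direction(dem: np.ndarray) -> np.ndarray:
    """Return, per interior cell, the index (0-7) of the steepest downslope neighbour (-1 = pit/edge)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # drop per unit distance towards each of the 8 neighbours
            drops = [(dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc) for dr, dc in offsets]
            best = int(np.argmax(drops))
            if drops[best] > 0:
                direction[r, c] = best
    return direction

# Tiny synthetic DEM sloping towards the lower-right corner
dem = np.add.outer(np.arange(5, 0, -1), np.arange(5, 0, -1)).astype(float)
print(d8_flow_direction(dem))
```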
Procedia PDF Downloads 56
8338 Role of ICT and Wage Inequality in Organization
Authors: Shoji Katagiri
Abstract:
This study deals with wage inequality in organizations and shows the relationship between ICT and wages in organizations. To do so, we incorporate ICT factors in organizations into our model. The ICT factors are the efficiencies of Enterprise Resource Planning (ERP), Computer Assisted Design/Computer Assisted Manufacturing (CAD/CAM), and NETWORK. Improvement of the ICT factors decreases the learning cost of solving problems pertaining to the hierarchy in the organization. Improvement of NETWORK increases wage inequality within workers and decreases it within managers and entrepreneurs. Improvements of CAD/CAM and ERP increase wage inequality within all agents and partially increase it between the agents in the hierarchy.
Keywords: endogenous economic growth, ICT, inequality, capital accumulation
Procedia PDF Downloads 260
8337 Predicting Daily Patient Hospital Visits Using Machine Learning
Authors: Shreya Goyal
Abstract:
The study aims to build user-friendly software to understand patient arrival patterns and compute the number of potential patients who will visit a particular health facility for a given period by using a machine learning algorithm. The underlying machine learning algorithm used in this study is the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources. If there is a projected surge in patients, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding patient arrival patterns can also help streamline processes to minimize waiting times for patients and ensure timely access to care for patients in need. Another big advantage of using this software is adherence to strict data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, as the hospital will not have to share the data with any third party or upload it to the cloud, because the software can read data locally from the machine. The data needs to be arranged in a particular format, and the software will be able to read the data and provide meaningful output. Using software that operates locally can facilitate compliance with these regulations by minimizing data exposure. Keeping patient data within the hospital's local systems reduces the risk of unauthorized access or breaches associated with transmitting data over networks or storing it on external servers. This can help maintain the confidentiality and integrity of sensitive patient information. Historical patient data are used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events. The algorithm uses a supervised learning method to optimize the objective function and find the global minimum. The algorithm stores the values of the local minima after each iteration and, at the end, compares all the local minima to find the global minimum. The strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of >95%. The method proposed in this study could be used for better management planning of personnel and medical resources.
Keywords: machine learning, SVM, HIPAA, data
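A hedged sketch of the described workflow, reading records locally and fitting a support vector regressor with scikit-learn; the feature names follow the inputs listed in the abstract but are otherwise assumptions.

```python
# Sketch (not the authors' software): predicting daily patient visits with an SVM,
# using only locally stored data. Feature and file names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("daily_visits.csv")  # read locally; no data leaves the machine
X = df[["mean_patient_age", "hour_of_day", "day_of_week", "season", "local_event"]]
y = df["visit_count"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_train, y_train)
print("R^2 on held-out days:", round(model.score(X_test, y_test), 3))
```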
Procedia PDF Downloads 65
8336 Developing a Model for the Lexical Analysis of Key Works of Children's Literature
Authors: Leigha Inman
Abstract:
One of the most cutting-edge interdisciplinary topics in the social sciences is the application of understandings from the humanities to traditionally social scientific disciplines such as education studies. This paper proposes such a topic. It has often been observed that children enjoy literature. The role of reading in the development of reading ability is an important area of research. However, the role of vocabulary in reading development has long been neglected. This paper reports an investigation into the number of words found in key works of children's literature and attempts to correlate that figure with years elapsed since publication of the work. Pedagogical implications will be discussed.Keywords: educational pedagogy, young learners, vocabulary teaching, reading development
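A minimal sketch of the proposed lexical analysis, counting distinct word types per work and correlating the counts with years elapsed since publication; the corpus files and dates are hypothetical examples, not the study's dataset.

```python
# Sketch: vocabulary size per work vs. years since publication (hypothetical corpus).
import re
from pathlib import Path
from statistics import correlation  # Python 3.10+

CURRENT_YEAR = 2024
works = {  # hypothetical text files and publication years
    "alice_in_wonderland.txt": 1865,
    "the_secret_garden.txt": 1911,
    "charlottes_web.txt": 1952,
}

word_counts, years_elapsed = [], []
for filename, year in works.items():
    text = Path(filename).read_text(encoding="utf-8")
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    word_counts.append(len(set(tokens)))        # distinct word types in the work
    years_elapsed.append(CURRENT_YEAR - year)

print("Pearson r between vocabulary size and years since publication:",
      round(correlation(word_counts, years_elapsed), 3))
```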
Procedia PDF Downloads 118
8335 Modeling the Current and Future Distribution of Anthus Pratensis under Climate Change
Authors: Zahira Belkacemi
Abstract:
One of the most important tools in conservation biology is information on the geographic distribution of species and the variables determining those patterns. In this study, we used maximum-entropy niche modeling (Maxent) to predict the current and future distribution of Anthus pratensis using climatic variables. The results showed that the species would not be highly affected by climate change in shifting its distribution; however, the results of this study should be improved by taking into account other predictors. The NATURA 2000 protected sites would be only 42% efficient in protecting the species.
Keywords: Anthus pratensis, climate change, Europe, species distribution model
Procedia PDF Downloads 144
8334 Petrogenetic Model of Formation of Orthoclase Gabbro of the Dzirula Crystalline Massif, the Caucasus
Authors: David Shengelia, Tamara Tsutsunava, Manana Togonidze, Giorgi Chichinadze, Giorgi Beridze
Abstract:
The orthoclase gabbro intrusive is exposed in the eastern part of the Dzirula crystalline massif of the Central Transcaucasian microcontinent. It is intruded into the Baikal quartz-diorite gneisses as a stock-like body. The intrusive is characterized by heterogeneity of rock composition: variability of mineral content and irregular distribution of rock-forming minerals. The rocks are represented by pyroxenites, gabbro-pyroxenites and gabbros of different composition – K-feldspar, pyroxene-hornblende and biotite-bearing varieties. Scientific views on the genesis and age of the orthoclase gabbro intrusive differ considerably. Based on long-term petrogeochemical and geochronological investigations of this intrusive with such an extraordinary composition, the authors came to the following conclusions. According to geological and geophysical data, it is stated that in the Saurian orogeny, horizontal tectonic layering of the Earth's crust of the Central Transcaucasian microcontinent took place. It is precisely this fact that explains the formation of the orthoclase gabbro intrusive. During the tectonic doubling of the Earth's crust of the microcontinent, thick tectonic nappes of mafic and sialic layers overlapped the sialic basement ('inversion' layer). The initial magma of the intrusive was of high-temperature basite-ultrabasite composition, the crystallization products of which are pyroxenites and gabbro-pyroxenites. Petrochemical data on the magma attest to its formation in the upper mantle and partially in the 'crustal asthenolayer'. Then, the newly formed, overheated dry magma with phenocrysts of clinopyroxene and basic plagioclase intruded into the 'inversion' layer. In the new medium it was enriched by volatile components, causing selective melting and, as a result, the formation of leucocratic quartz-feldspar material. At the same time, in the basic magma, intensive transformation of pyroxene to hornblende was taking place. The basic magma partially mixed with the newly formed acid magma. These different magmas intruded first into the allochthonous basite layer without its significant transformation and then into the upper sialic layer, and crystallized there at a depth of 7-10 km. By petrochemical data, the newly formed leucocratic granite magma belongs to the S-type granites, while the above-mentioned mixed magma belongs to the H (hybrid) type. During the final stage of the magmatic processes, the gabbroic rocks were impregnated with high-temperature feldspar-bearing material, forming anorthoclase or orthoclase. Thus, the so-called 'orthoclase gabbro' includes rocks of three genetic groups: 1. the protolith of the gabbroic intrusive; 2. a hybrid rock – K-feldspar gabbro; and 3. a leucocratic quartz-feldspar-bearing rock. Petrochemical and geochemical data obtained from the hybrid gabbro and from the intrusive protolith differ from each other. For the identification of the petrogenetic model of the orthoclase gabbro intrusive formation, LA-ICP-MS U-Pb zircon dating has been conducted on all three genetic types of gabbro. The zircon age of the protolith (mean 221.4±1.9 Ma) and of the hybrid K-feldspar gabbro (mean 221.9±2.2 Ma) records the crystallization time of the intrusive, but the zircon age of the quartz-feldspar-bearing rocks (mean 323±2.9 Ma), as well as the inherited ages (323±9, 329±8.3, 332±10 and 335±11 Ma) of the hybrid K-feldspar gabbro, corresponds to the formation age of the Late Variscan granitoids widespread in the Dzirula crystalline massif.
Keywords: The Caucasus, isotope dating, orthoclase-bearing gabbro, petrogenetic model
Procedia PDF Downloads 343
8333 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour
Authors: Libor Zachoval, Daire O Broin, Oisin Cawley
Abstract:
E-learning platforms, such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms which could lead to better course adaptations. With the recent development of Experience Application Programming Interface (xAPI), a large amount of additional types of data can be captured and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions on the learning and development of their employees, some learner behaviours can be troublesome for they can hinder the knowledge development of a learner. Behaviours that hinder the knowledge development also raise ambiguity about learner’s knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from their investment if employees are passing courses without possessing the required knowledge and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing as opposed to going over the learning material. These behaviours were detected on learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice questions test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course. For example, course content may require modifications (certain sections of learning material may be shown to not be helpful to many learners to master the learning outcomes aimed at) or course design (such as the type and duration of feedback).Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI
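One way to operationalize the retake-without-review behaviour described above is to count xAPI "failed" statements per learner and activity; the sketch below is an assumption-laden illustration, not the authors' detection rules.

```python
# Sketch: flag learners who repeatedly fail (and retake) a test, from xAPI statements.
# Statement fields follow the xAPI structure; the threshold of 4 mirrors the abstract.
from collections import defaultdict

def flag_repeat_guessers(statements, retake_threshold=4):
    """Count failed test attempts per (learner, activity) and return heavy retakers."""
    failed = defaultdict(int)
    for s in statements:
        verb = s["verb"]["id"].rsplit("/", 1)[-1]
        if verb == "failed":
            failed[(s["actor"]["mbox"], s["object"]["id"])] += 1
    return [key for key, n in failed.items() if n >= retake_threshold]

sample = [{"actor": {"mbox": "mailto:learner@example.com"},
           "verb": {"id": "http://adlnet.gov/expapi/verbs/failed"},
           "object": {"id": "https://example.com/compliance-test"}}] * 4
print(flag_repeat_guessers(sample))
```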
Procedia PDF Downloads 121
8332 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made significant strides in forecasting market movements; however, the complicated and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by sizable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that correctly reflects market intricacies. We collect daily opening, closing, high and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate key macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, into our model. Our GCN algorithm is adept at learning the relational patterns among the financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatio-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive evaluation of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework.
Our findings promise to inform investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
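A conceptual sketch of the GCN-LSTM fusion described above, in plain PyTorch with toy shapes: a graph convolution over the instrument graph for each day, followed by an LSTM over the resulting sequence of node embeddings. The layer sizes, graph construction and prediction head are assumptions, not the authors' architecture.

```python
# Minimal GCN-LSTM sketch (assumed shapes and hyperparameters).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # x: (nodes, in_dim); adj_norm: normalised adjacency with self-loops
        return torch.relu(self.linear(adj_norm @ x))

class GCNLSTM(nn.Module):
    def __init__(self, n_features, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn = GCNLayer(n_features, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)   # e.g. next-day return per instrument

    def forward(self, x_seq, adj_norm):
        # x_seq: (days, nodes, features) -> per-day GCN embeddings -> LSTM per node
        emb = torch.stack([self.gcn(x, adj_norm) for x in x_seq])   # (days, nodes, gcn_dim)
        out, _ = self.lstm(emb.permute(1, 0, 2))                    # (nodes, days, lstm_dim)
        return self.head(out[:, -1])                                # (nodes, 1)

# Toy run: 30 trading days, 8 instruments, 6 features each, random graph
days, nodes, feats = 30, 8, 6
adj = torch.eye(nodes) + (torch.rand(nodes, nodes) > 0.7).float()
deg_inv_sqrt = adj.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
pred = GCNLSTM(feats)(torch.randn(days, nodes, feats), adj_norm)
print(pred.shape)   # torch.Size([8, 1])
```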
Procedia PDF Downloads 66
8331 Fusion of MOLA-based DEMs and HiRISE Images for Large-Scale Mars Mapping
Authors: Ahmed F. Elaksher, Islam Omar
Abstract:
In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were then digitized from both datasets. These points were employed in co-registering the two datasets using GIS analysis tools. Different transformation models, including the affine and projective transformation models, were used with different sets and distributions of tie points. Additionally, we evaluated the use of the MOLA elevations in co-registering the MOLA and HiRISE datasets. The planimetric RMSEs achieved for each model are reported. Results suggested the use of 3D-2D transformation models.
Keywords: photogrammetry, Mars, MOLA, HiRISE
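A hedged sketch of one step in such a co-registration: estimating a 2D affine transformation from digitized tie points by linear least squares and reporting the planimetric RMSE; the tie-point coordinates are hypothetical.

```python
# Sketch: 2D affine transform between HiRISE image coordinates and a MOLA-based DEM grid.
import numpy as np

# Hypothetical tie points: (col, row) in the HiRISE image and in the DEM grid
hirise = np.array([[120.0, 340.0], [980.0, 260.0], [450.0, 900.0], [1500.0, 1210.0]])
mola   = np.array([[ 14.2,  41.7], [120.5,  31.9], [ 55.0, 110.3], [ 185.8,  148.6]])

# x' = a*x + b*y + c ; y' = d*x + e*y + f  -> solve for [a..f] in one linear system
n = len(hirise)
A = np.zeros((2 * n, 6))
A[0::2, 0:2], A[0::2, 2] = hirise, 1.0
A[1::2, 3:5], A[1::2, 5] = hirise, 1.0
b = mola.reshape(-1)

params, *_ = np.linalg.lstsq(A, b, rcond=None)
fitted = (A @ params).reshape(-1, 2)
rmse = np.sqrt(np.mean(np.sum((fitted - mola) ** 2, axis=1)))
print("affine parameters:", np.round(params, 4))
print("planimetric RMSE (DEM cells):", round(float(rmse), 3))
```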
Procedia PDF Downloads 78
8330 Response of a Bridge Crane during an Earthquake
Authors: F. Fekak, A. Gravouil, M. Brun, B. Depale
Abstract:
During an earthquake, a bridge crane may be subjected to multiple impacts between crane wheels and rail. In order to model such phenomena, a time-history dynamic analysis with a multi-scale approach is performed. The high frequency aspect of the impacts between wheels and rails is taken into account by a Lagrange explicit event-capturing algorithm based on a velocity-impulse formulation to resolve contacts and impacts. An implicit temporal scheme is used for the rest of the structure. The numerical coupling between the implicit and the explicit schemes is achieved with a heterogeneous asynchronous time-integrator.Keywords: bridge crane, earthquake, dynamic analysis, explicit, implicit, impact
Procedia PDF Downloads 304
8329 Blood Glucose Measurement and Analysis: Methodology
Authors: I. M. Abd Rahim, H. Abdul Rahim, R. Ghazali
Abstract:
There are numerous non-invasive blood glucose measurement techniques developed by researchers, and near infrared (NIR) is a promising technique nowadays. However, there is some disagreement on the optimal wavelength range that is suitable to be used as the reference for the glucose substance in the blood. This paper focuses on the experimental data collection technique and also the analysis method used to analyze the data gained from the experiment. The selection of a suitable linear or non-linear model structure is essential in a prediction system, as the system developed needs to be conceivably accurate.
Keywords: linear, near-infrared (NIR), non-invasive, non-linear, prediction system
Procedia PDF Downloads 461
8328 Heat and Mass Transfer of an Oscillating Flow in a Porous Channel with Chemical Reaction
Authors: Zahra Neffah, Henda Kahalerras
Abstract:
A numerical study is made in a parallel-plate porous channel subjected to an oscillating flow and an exothermic chemical reaction on its walls. The flow field in the porous region is modeled by the Darcy–Brinkman–Forchheimer model, and the finite volume method is used to solve the governing equations. The effects of the modified Frank-Kamenetskii (FKm) and Damköhler (Dm) numbers, the amplitude of oscillation (A), and the Strouhal number (St) are examined. The main results show an increase of heat and mass transfer rates with A and St, and their decrease with FKm and Dm.
Keywords: chemical reaction, heat and mass transfer, oscillating flow, porous channel
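For reference, a generic form of the Darcy–Brinkman–Forchheimer momentum equation used for such porous regions is shown below; the notation is standard, not necessarily the paper's symbols.

```latex
\[
\frac{\rho}{\varepsilon}\frac{\partial \mathbf{u}}{\partial t}
+ \frac{\rho}{\varepsilon^{2}}\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}
= -\nabla p
+ \mu_{\mathrm{eff}}\nabla^{2}\mathbf{u}
- \frac{\mu}{K}\mathbf{u}
- \frac{\rho\, C_{F}}{\sqrt{K}}\left|\mathbf{u}\right|\mathbf{u}
\]
```

Here ε is the porosity, K the permeability, C_F the Forchheimer (inertial) coefficient, and μ_eff the effective (Brinkman) viscosity.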
Procedia PDF Downloads 413
8327 Including All Citizens Pathway (IACP): Transforming Post-Secondary Education Using Inclusion and Accessibility as Foundation
Authors: Fiona Whittington-Walsh
Abstract:
Including All Citizens Pathway (IACP) is addressing the systems wide discrimination that students with disabilities experience throughout the education system. IACP offers a wide, institutional support structure so that all students, including students with intellectual/developmental disabilities, are included and can succeed. The entire process from admissions, course selection, course instruction, graduation is designed to address systemic discrimination while supporting learners and faculty. The inclusive and accessible pedagogical model that is the foundation of IACP opens the doors of post-secondary education by making existing academic courses environments where all students can participate and succeed. IACP is about transforming teaching, not modifying, or adapting the curriculum or essential knowledge and skill sets that are required learning outcomes. Universal Design for Learning (UDL) principles are applied to instructional teaching strategies such as lectures, presentations, and assessment tools. Created in 2016 as a research pilot, IACP is one of the first fully inclusive for credit post-secondary options available. The pilot received numerous external and internal grants to support its initiative to investigate and assess the teaching strategies and techniques that support student learning of essential knowledge and skill sets. IACP pilot goals included: (1) provide a successful pilot as a model of inclusive and accessible pedagogy; (2) create a teacher’s guide to assist other instructors in transforming their teaching to reach a wide range of learners; (3) identify policy barriers located within the educational system; and (4) provide leadership and encouraging innovative and inclusive pedagogical practices. The pilot was a success and in 2020 the first cohort of students graduated with an exit credential that pre-exists IACP and consists of ten academic courses. The University has committed to continue IACP and has developed a sustainable model. Each new academic year a new cohort of IACP students starts their post-secondary educational journey, while two additional instructors are mentored with the pedagogy. The pedagogical foundation of IACP has far-reaching potential including, but not limited to, programs that offer services for international students whose first language is not English as well as influencing pedagogical reform in secondary and post-secondary education. IACP also supports universities in satisfying educational standards that are or will be included in accessibility/disability legislation. This session will present information about IACP, share examples of systems transformation, hear from students and instructors, and provide participatory experiential activities that demonstrate the transformative techniques. We will be drawing from the experiences of a recent course that explored research documenting the lived experiences of students with disabilities in post-secondary institutes in B.C (Whittington-Walsh). Students created theatrical scenes out of the data and presented it using Forum Theatre method. Forum Theatre was used to create conversations, challenge stereotypes, and build connections between ableism, disability justice, Indigeneity, and social policy.Keywords: disability justice, inclusive education, pedagogical transformation, systems transformation
Procedia PDF Downloads 9
8326 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) can be considered as operating in two phases: (1) observation/measuring, which means the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components precisely defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components that are used in these observatories perform some critical functions, such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps to reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that we can improve the design activity of complex systems based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via a models-and-simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 177
8325 Jet Impingement Heat Transfer on a Rib-Roughened Flat Plate
Authors: A. H. Alenezi
Abstract:
Cooling by an impinging jet is known to provide significantly high local and average heat transfer coefficients, which makes it widely used in industrial cooling systems. The heat transfer characteristics of a jet impinging on a rib-roughened flat plate have been investigated numerically. This paper set out to investigate the effect of rib height on the heat transfer rate. Since the flow needs enough spacing after passing the rib to allow reattachment, especially at high Reynolds numbers, this study focuses on finding the optimum rib height that maximizes the heat transfer rate downstream of the plate. This investigation employs a round nozzle with a hydraulic diameter (Dh) of 13.5 mm, a jet-to-target distance (H/D) of 4, a rib location of 1.5D, and jet angles of 45˚ and 90˚, at Re = 10,000.
Keywords: jet impingement, CFD, turbulence model, heat transfer
Procedia PDF Downloads 351
8324 A CM-Based Model for 802.11 Networks Security Policies Enforcement
Authors: Karl Mabiala Dondia, Jing Ma
Abstract:
In recent years, networks based on the 802.11 standards have seen prolific deployment. The reason for this massive acceptance of the technology by both home users and corporations is assuredly the "plug-and-play" nature of the technology and its mobility. The lack of physical containment, due to the inherent nature of the wireless medium, makes maintenance very challenging from a security standpoint. This study examines, via continuous monitoring, various predictable threats that 802.11 networks can face, how they are executed, where each attack may be executed, and how to effectively defend against them. The key goal is to identify the key components of an effective wireless security policy.
Keywords: wireless LAN, IEEE 802.11 standards, continuous monitoring, security policy
Procedia PDF Downloads 380
8323 A Method for Calculating Dew Point Temperature in the Humidity Test
Authors: Wu Sa, Zhang Qian, Li Qi, Wang Ye
Abstract:
Currently, humidity tests do not use the dew point temperature as a control parameter. This paper selects a wet-and-dry-bulb thermometer to measure the vapor pressure and introduces several saturation vapor pressure formulas that are easily calculated on the controller. A dew point temperature calculation model is then established to obtain the relationship between the dew point temperature and the vapor pressure. Finally, a check against the 100 groups of samples in the range of 0-100 ℃ from the "Psychrometric Handbook" finds that the average error is small. This formula can be applied to calculate the dew point temperature in the humidity test.
Keywords: dew point temperature, psychrometric handbook, saturation vapor pressure, wet and dry bulb thermometer
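A hedged sketch of such a calculation: a Magnus-type saturation vapour pressure formula combined with the psychrometric equation to recover the dew point from dry- and wet-bulb readings; the specific formula and psychrometer coefficient used in the paper may differ.

```python
# Sketch: dew point from dry-/wet-bulb temperatures via a Magnus-type formula.
import math

def saturation_vapour_pressure(t_c):
    """Magnus approximation of saturation vapour pressure [hPa] at temperature t_c [deg C]."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def dew_point(t_dry, t_wet, pressure_hpa=1013.25, a_psych=6.62e-4):
    """Dew point [deg C] from dry- and wet-bulb readings (a_psych: assumed psychrometer coefficient)."""
    # actual vapour pressure from the psychrometric equation
    e = saturation_vapour_pressure(t_wet) - a_psych * pressure_hpa * (t_dry - t_wet)
    # invert the Magnus formula: e = 6.112 * exp(17.62*Td/(243.12+Td))
    x = math.log(e / 6.112)
    return 243.12 * x / (17.62 - x)

print(round(dew_point(25.0, 20.0), 2))   # roughly 17-18 deg C for this reading
```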
Procedia PDF Downloads 489
8322 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models
Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri
Abstract:
Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on the airport's appeal, which includes the routes between the city and the airport as well as the facilities used to reach them. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination such as an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, an age range greater than 55, and other factors are important for business trips. Non-business travelers, on the other hand, prioritize using personal vehicles to get to the airport and ensuring convenient access to the airport. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) undoubtedly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified. The district generating the most trips in Tehran was District 2, with 23 visits, while the most popular mode of transportation from that location was an online taxi, with 12 trips. Then, the variables significant in the separation and behavior of travel modes for accessing the airport were investigated for all systems. In this scenario, the most crucial factor is the time it takes to get to the airport, followed by the method's user-friendliness as a component of passenger preference. It has also been demonstrated that improving public transportation trip times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, the willingness of passengers to approach the airport via public transportation systems was explored in order to enhance present techniques and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change.
Keywords: multimodal transportation, demand modeling, travel behavior, statistical models
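An illustrative sketch of the binary model referred to above, as a logit of willingness to switch to public transport for airport access; the data file and variable names are hypothetical, not the study's dataset.

```python
# Sketch: binary logit of switching to public transport for airport access.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

trips = pd.read_csv("airport_access_survey.csv")   # assumed survey export

logit = smf.logit(
    "switch_to_public ~ access_time + cost + ease_of_use + business_trip + extra_luggage",
    data=trips,
).fit(disp=False)

print(logit.summary())
print("Odds ratios:")
print(np.exp(logit.params).round(3))
```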
Procedia PDF Downloads 173
8321 A Case Study of Typhoon Tracks: Insights from the Interaction between Typhoon Hinnamnor and Ocean Currents in 2022
Authors: Wei-Kuo Soong
Abstract:
The forecasting of typhoon tracks remains a formidable challenge, primarily attributable to the paucity of observational data over the open sea and the intricate influence of weather systems at varying scales. This study investigates the case of Typhoon Hinnamnor in 2022, examining its trajectory and intensity fluctuations in relation to its interaction with a concurrent tropical cyclone and sea surface temperatures (SST). The Weather Research and Forecasting (WRF) model is utilized to simulate and analyze the interaction between Typhoon Hinnamnor and its environmental factors, shedding light on the mechanisms driving typhoon development and enhancing forecasting capabilities.
Keywords: typhoon, sea surface temperature, forecasting, WRF
Procedia PDF Downloads 52
8320 Experimental Quantification of the Intra-Tow Resin Storage Evolution during RTM Injection
Authors: Mathieu Imbert, Sebastien Comas-Cardona, Emmanuelle Abisset-Chavanne, David Prono
Abstract:
Short cycle time Resin Transfer Molding (RTM) applications appear to be of great interest for the mass production of automotive or aeronautical lightweight structural parts. During the RTM process, the two components of a resin are mixed on-line and injected into the cavity of a mold where a fibrous preform has been placed. Injection and polymerization occur simultaneously in the preform inducing evolutions of temperature, degree of cure and viscosity that furthermore affect flow and curing. In order to adjust the processing conditions to reduce the cycle time, it is, therefore, essential to understand and quantify the physical mechanisms occurring in the part during injection. In a previous study, a dual-scale simulation tool has been developed to help determining the optimum injection parameters. This tool allows tracking finely the repartition of the resin and the evolution of its properties during reactive injections with on-line mixing. Tows and channels of the fibrous material are considered separately to deal with the consequences of the dual-scale morphology of the continuous fiber textiles. The simulation tool reproduces the unsaturated area at the flow front, generated by the tow/channel difference of permeability. Resin “storage” in the tows after saturation is also taken into account as it may significantly affect the repartition and evolution of the temperature, degree of cure and viscosity in the part during reactive injections. The aim of the current study is, thanks to experiments, to understand and quantify the “storage” evolution in the tows to adjust and validate the numerical tool. The presented study is based on four experimental repeats conducted on three different types of textiles: a unidirectional Non Crimp Fabric (NCF), a triaxial NCF and a satin weave. Model fluids, dyes and image analysis, are used to study quantitatively, the resin flow in the saturated area of the samples. Also, textiles characteristics affecting the resin “storage” evolution in the tows are analyzed. Finally, fully coupled on-line mixing reactive injections are conducted to validate the numerical model.Keywords: experimental, on-line mixing, high-speed RTM process, dual-scale flow
Procedia PDF Downloads 166
8319 A Study of Different Retail Models That Penetrates South African Townships
Authors: Beaula, M. Kruger, Silindisipho, T. Belot
Abstract:
Small informal retailers are considered one of the most important features of developing countries around the world. These small informal retailers form part of the local communities in South African townships and are estimated to number more than 100,000 across the country. The township economic landscape has changed over time in South Africa. The traditional small informal retailers in South African townships have faced the challenge of increasing competition: an increase in the number of local retail shops and foreign-owned shops. There is evidence that personal and disposable income has increased amongst black African consumers in South Africa. Historically, people residing in townships were restricted to informal retail shops; however, this has changed due to the growing number of formal large retail chains entering the township market. The larger retail chains are aware of the improved income levels of middle-income township residents, and as a result, larger retailers have followed strategies such as (1) retail format development, (2) a diversification growth strategy, (3) a market penetration growth strategy, and (4) market expansion. This research conducted a comparative analysis of the different retail models developed by Pick n Pay, Spar and Shoprite. The research methodology employed for this study was of a qualitative nature and made use of a case study to conduct a comparative analysis of the larger retailers. A questionnaire was also designed to obtain data from existing smaller retailers. The study found that larger retailers have developed smaller retail formats to compete with the traditional smaller retailers operating in South African townships. Only one out of the two large retailers offers entrepreneurs a franchise model. One of the big retailers offers the opportunity to employ between 15 and 20 employees, while for the others this is subject to the outcome of a feasibility study. The responses obtained from the entrepreneurs in the townships were mixed: while some saw the larger retailers' presence as having a "negative impact" that has increased competition, others saw them as a means to obtain a variety of products. This research found that the most beneficial retail model for both the bigger retailers and existing and new entrepreneurs is that of Pick n Pay. The other retail format models are more beneficial for the bigger retailers and not for new and existing entrepreneurs.
Keywords: Pick n Pay, retailers, shoprite, spar, townships
Procedia PDF Downloads 195