Search results for: critical analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30755

24245 Studying in the Outback: A Hermeneutic Phenomenological Study of the Lived Experience of Women in Regional, Rural and Remote Areas Studying Nursing Online

Authors: Keden Montgomery, Kathie Ardzejewska, Alison Casey, Rosemarie Hogan

Abstract:

Research was undertaken to explore the question “what is known about the experiences of regional, rural and remote Australian women undertaking a Bachelor of Nursing program delivered online?”. The findings will support future research aimed at improving the retention and completion rates of women studying nursing in regional, rural and remote (RRR) areas. There is a critical shortage of nurses working in RRR Australia, and it is well supported that this shortage is most likely to be addressed by nursing students who complete their studies in RRR areas. Despite this, students from RRR Australia remain an equity group and experience poorer outcomes than their metropolitan counterparts. Completion rates for RRR students who enrol in tertiary education courses are much lower than those of students from metropolitan areas. In addition, RRR students are less likely than metropolitan students to gain a tertiary-level qualification at all, and even less likely to gain a Bachelor-level degree, which is required for Registered Nurses. Supporting students to remain in RRR areas while they study reduces the need to relocate to metropolitan areas and makes it more likely that they will continue living and working in RRR areas after graduation. This research holds implications for workforce shortages internationally.

Keywords: nurse education, online education, regional, rural, remote, workforce

Procedia PDF Downloads 84
24244 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazard model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.
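
To make the three-part structure concrete, here is a minimal simulation sketch in Python (hypothetical dimensions, loadings, and parameter values, not the fitted model from the paper) showing how a latent trajectory drives both the observed indicators and a discrete-time approximation of the proportional hazard:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t_grid = 200, np.arange(10)

# Part 2: random-coefficient trajectories of the latent risk factor eta_i(t)
b0 = rng.normal(0.0, 1.0, n)                 # random intercepts
b1 = rng.normal(0.1, 0.05, n)                # random slopes
eta = b0[:, None] + b1[:, None] * t_grid     # n x T latent trajectories

# Part 1: dynamic factor model -- three observed indicators per time point
lam = np.array([1.0, 0.8, 0.6])              # loadings (first fixed to 1)
y = eta[:, :, None] * lam + rng.normal(0.0, 0.3, (n, t_grid.size, 3))

# Part 3: proportional hazards driven by a time-invariant covariate x and
# the current value of the latent trajectory (discrete-time approximation)
x = rng.binomial(1, 0.5, n)
h0, gamma, beta = 0.05, 0.4, 0.6
hazard = h0 * np.exp(gamma * x[:, None] + beta * eta)        # n x T
surv = np.cumprod(1.0 - np.clip(hazard, 0.0, 0.99), axis=1)  # survival curve
u = rng.uniform(size=(n, 1))
failed = surv < u                            # True once survival drops below u
event_time = np.where(failed.any(axis=1), failed.argmax(axis=1), t_grid.size)
print("events observed:", int(failed.any(axis=1).sum()), "of", n)
```

A Bayesian fit would place priors on the loadings and regression coefficients and sample them by MCMC (e.g., in PyMC or Stan); the sketch only illustrates the data structure that the joint model targets.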

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 140
24243 The Role of Digital Technology in Crime Prevention: A Case Study of the Cellular Forensics Unit, Capital City Police Peshawar-Pakistan

Authors: Muhammad Ashfaq

Abstract:

Main theme: The prime focus of this study is the role of digital technology in crime prevention, with special focus on the Cellular Forensic Unit of the Capital City Police Peshawar, Khyber Pakhtunkhwa-Pakistan. Objective(s) of the study: The prime objective is to provide the statistics, strategies and patterns of analysis used for crime prevention in the Cellular Forensic Unit of the Capital City Police Peshawar, Khyber Pakhtunkhwa-Pakistan. Research method and procedure: A qualitative research method was used, obtaining secondary data from the research wing and the Information Technology (IT) section of the Peshawar police. Content analysis was the method used for conducting the study. The study is delimited to the Capital City Police and the Cellular Forensic Unit Peshawar-KP, Pakistan. Major finding(s): It is evident that the old traditional approach will never provide solutions for better management in controlling crime. The best way to control crime and promote proactive policing is to adopt new technologies. The study reveals that technology has made the police more effective and vigilant compared to traditional policing. Heinous crimes such as abduction, missing persons, snatching, burglary and blind murder cases are now traceable with the help of technology. Recommendation(s): The analysis of the data suggests that Information Technology (IT) experts should be recruited, along with research analysts, to assist and facilitate the operational as well as investigation units of the police in a timely manner. A mobile locator should be provided to the Cellular Forensic Unit to apprehend criminals promptly, and the latest digital analysis software should be provided to equip the unit.

Keywords: crime-prevention, cellular-forensic unit-pakistan, crime prevention-digital-pakistan, criminology-pakistan

Procedia PDF Downloads 77
24242 Chaos Analysis of a 3D Finance System and Generalized Synchronization for N-Dimension

Authors: Muhammad Fiaz

Abstract:

This article studies complex features, namely zero-Hopf bifurcation, chaos, and synchronization, of the integer- and fractional-order versions of a new 3D finance system. The trusted tools of averaging theory and the active control method are utilized to investigate zero-Hopf bifurcation and synchronization, respectively, for both versions. The novelty of the paper lies in answering the question: is it possible to find a chaotic system that can be synchronized with any other system of the same dimension? Based on different examples, we develop a theory that if a master-slave pair of chaotic dynamical systems is synchronized by selecting a suitable gain matrix with special conditions, then the master system can be synchronized with any chaotic dynamical system of the same dimension. With the help of this study, we develop generalized theorems for the synchronization of n-dimensional dynamical systems for the integer as well as fractional versions. It is proposed that this investigation will contribute substantially to the control of dynamical systems, since a suitable gain matrix with special conditions is enough to synchronize the system under consideration with any other chaotic system of the same dimension. Chaotic properties of the fractional version of the new finance system are also analyzed at fractional order q=0.87. Simulation results are provided, where required, to corroborate the analytical study.
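
As a rough illustration of the active control scheme, the following Python sketch synchronizes a master-slave pair of a commonly cited 3D chaotic finance system (the system form and the parameter values are assumptions for illustration, not taken from the paper). The control input cancels the nonlinear mismatch and adds linear feedback, so the error e = s - m obeys de/dt = -Ke and decays to zero:

```python
import numpy as np

a, b, c = 0.9, 0.2, 1.2  # hypothetical coefficients

def finance(s):
    # One common 3D finance-system form: x interest rate, y investment
    # demand, z price index (assumed here for illustration).
    x, y, z = s
    return np.array([z + (y - a) * x, 1 - b * y - x**2, -x - c * z])

K = np.diag([2.0, 2.0, 2.0])  # gain matrix; must keep error dynamics stable

def coupled(m, s):
    # Active control: u cancels the nonlinear mismatch between the two
    # systems and adds linear feedback -K e on the synchronization error.
    e = s - m
    u = finance(m) - finance(s) - K @ e
    return finance(m), finance(s) + u

dt, steps = 1e-3, 50_000
m = np.array([0.1, 4.0, 0.5])    # master initial state
s = np.array([2.0, 1.0, -1.0])   # slave starts far from the master
for _ in range(steps):           # simple explicit Euler integration
    dm, ds = coupled(m, s)
    m, s = m + dt * dm, s + dt * ds
print("sync error:", np.linalg.norm(s - m))   # ~0 after the transient
```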

Keywords: complex analysis, chaos, generalized synchronization, control dynamics, fractional order analysis

Procedia PDF Downloads 62
24241 The Mechanisms of Peer-Effects in Education: A Frame-Factor Analysis of Instruction

Authors: Pontus Backstrom

Abstract:

In the educational literature on peer effects, attention has been drawn to the fact that the mechanisms creating peer effects are still, to a large extent, hidden in obscurity. The hypothesis in this study is that the frame factor theory can be used to explain these mechanisms. At the heart of the theory is the concept of 'time needed' for students to learn a certain curriculum unit. The relation between class-aggregated time needed and the actual time available steers and constrains the actions possible for the teacher. Further, the theory predicts that the timing and pacing of the teacher's instruction are governed by a 'criterion steering group' (CSG), namely the pupils in the 10th-25th percentile of the aptitude distribution in class. The class composition thereby sets the possibilities and limitations for instruction, creating peer effects on individual outcomes. To test whether the theory can be applied to the issue of peer effects, the study employs multilevel structural equation modelling (M-SEM) on Swedish TIMSS 2015 data (Trends in International Mathematics and Science Study; students N=4090, teachers N=200). Using confirmatory factor analysis (CFA) in the SEM framework in MPLUS, latent variables such as 'limitations of instruction' are specified according to the theory from TIMSS survey items. The results indicate a good fit of the measurement model to the data. Research is still in progress, but preliminary results from initial M-SEM models verify a strong relation between the mean level of the CSG and the latent variable of limitations on instruction, a variable which in turn has a great impact on individual students' test results. Further analysis is required, but so far the analysis confirms the predictions derived from the frame factor theory and reveals that one of the important mechanisms creating peer effects in student outcomes is the effect the class composition has upon the teacher's instruction in class.
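
The key quantity in this design, the mean aptitude of the criterion steering group, is easy to operationalize. A simplified class-level sketch in Python (synthetic data, and a plain OLS regression standing in for the authors' M-SEM in MPLUS):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Hypothetical student-level data: 200 classes of 20 pupils each
df = pd.DataFrame({
    "class_id": np.repeat(np.arange(200), 20),
    "aptitude": rng.normal(50, 10, 4000),
})
df["score"] = 0.8 * df["aptitude"] + rng.normal(0, 5, len(df))

def csg_mean(g):
    # Criterion steering group: pupils between the 10th and 25th
    # aptitude percentiles of their own class
    lo, hi = np.percentile(g["aptitude"], [10, 25])
    return g.loc[g["aptitude"].between(lo, hi), "aptitude"].mean()

csg = df.groupby("class_id").apply(csg_mean).rename("csg")
outcome = df.groupby("class_id")["score"].mean()

# Class-level regression: does the CSG level predict mean outcomes?
fit = sm.OLS(outcome, sm.add_constant(csg)).fit()
print(fit.params, round(fit.rsquared, 3))
```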

Keywords: compositional effects, frame factor theory, peer effects, structural equation modelling

Procedia PDF Downloads 130
24240 Analysis of Radial Pulse Using Nadi-Parikshan Yantra

Authors: Ashok E. Kalange

Abstract:

Diagnosis according to Ayurveda aims to find the root cause of a disease. Out of the eight different kinds of examinations, Nadi-Pariksha (pulse examination) is important. Nadi-Pariksha is done at the root of the thumb by examining the radial artery using three fingers. Ancient Ayurveda identifies health status by observing the wrist pulses in terms of 'Vata', 'Pitta' and 'Kapha', collectively called tridosha, as the basic elements of the human body and in their combinations. Diagnosis by traditional pulse analysis (Nadi-Pariksha) requires long experience in pulse examination and a high level of skill, and the interpretation tends to be subjective, depending on the expertise of the practitioner. The present work is part of efforts to make Nadi-Pariksha objective. A Nadi Parikshan Yantra (three-point pulse examination system) was developed in our laboratory using three pressure sensors (one each for the Vata, Pitta and Kapha points on the radial artery). Radial pulse data were collected from a large number of subjects and analyzed on the basis of the relative amplitudes of the three-point pulses, as well as in the frequency and time domains. The same subjects were examined by an Ayurvedic physician (Nadi Vaidya) and the dominant dosha (Vata, Pitta or Kapha) was identified. The results are discussed in detail in the paper.
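
The relative-amplitude and frequency-domain parts of such an analysis are straightforward to sketch. A minimal Python example over synthetic three-channel data (the sampling rate, channel amplitudes, and pulse rate are assumptions, not the laboratory's recordings):

```python
import numpy as np

fs = 500                        # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Placeholder signals for the three sensor channels; a real recording
# would come from the Vata, Pitta and Kapha pressure sensors.
pulse = {name: amp * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
         for name, amp in [("vata", 1.0), ("pitta", 0.7), ("kapha", 0.4)]}

for name, x in pulse.items():
    # Relative amplitude (time domain) and dominant frequency (frequency domain)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    f_dom = freqs[spec.argmax()]           # ~1.2 Hz, i.e. ~72 beats per minute
    print(f"{name}: peak-to-peak={np.ptp(x):.2f}, dominant freq={f_dom:.2f} Hz")
```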

Keywords: Nadi Parikshan Yantra, Tridosha, Nadi Pariksha, human pulse data analysis

Procedia PDF Downloads 188
24239 The Use of Nanoparticles in Chiral Analysis with Different Methods of Separation

Authors: Bounoua Nadia, Rebizi Mohamed Nadjib

Abstract:

Chiral molecules play particular biological roles because biological environments are stereoselective: enantiomers differ significantly in their biochemical responses. Despite the current advancement in drug discovery and pharmaceutical biotechnology, the chiral separation of some racemic mixtures continues to be one of the greatest challenges, because the available techniques are too costly and time-consuming for the assessment of therapeutic drugs in the early stages of development worldwide. Nanoparticles have become one of the most investigated and explored nanotechnology-derived nanostructures, especially in chirality, where several studies report improved enantiomeric separation of different racemic mixtures. The production of surface-modified nanoparticles has helped address these limitations: their sensitivity, accuracy, and enantioselectivity can be optimized, which makes surface-modified nanoparticles convenient for enantiomeric identification and separation.

Keywords: chirality, enantiomeric recognition, selectors, analysis, surface-modified nanoparticles

Procedia PDF Downloads 87
24238 Dynamic Exergy Analysis for the Built Environment: Fixed or Variable Reference State

Authors: Valentina Bonetti

Abstract:

Exergy analysis successfully helps optimize processes in various sectors. In the built environment, a second-law approach can enhance potential interactions between constructions and their surrounding environment and minimise fossil fuel requirements. Despite the research done in this field in the last decades, practical applications are rarely encountered, and few integrated exergy simulators are available for building designers. Undoubtedly, an obstacle to the diffusion of exergy methods is the strong dependency of results on the definition of the 'reference state', a highly controversial issue. Since exergy is the combination of energy and entropy by means of a reference state (also called 'reference environment' or 'dead state'), the reference choice is crucial. Compared to other classical applications, buildings present two challenging elements: they operate very near to the reference state, which means that small variations have relevant impacts, and their behaviour is dynamical in nature. Not surprisingly, then, the reference state definition for the built environment is still debated, especially in the case of dynamic assessments. Among the several characteristics that need to be defined, a crucial decision for a dynamic analysis is between a fixed reference environment (constant in time) and a variable state whose fluctuations follow the local climate. Although the latter selection prevails in research and is recommended by recent and widely diffused guidelines, the fixed reference has been analytically demonstrated to be the only choice which defines exergy as a proper function of state in a fluctuating environment. This study investigates the impact of that crucial choice: fixed or variable reference. The basic element of the building energy chain, the envelope, is chosen as the object of investigation, as it is common to any building analysis. Exergy fluctuations in the building envelope of a case study (a typical house located in a Mediterranean climate) are compared at each time-step of a significant summer day, when the building behaviour is highly dynamical. Exergy efficiencies and fluxes are not familiar numbers, and thus the more easy-to-imagine concept of exergy storage is used to summarize the results. Trends obtained with a fixed and a variable reference (outside air) are compared, and their meaning is discussed in light of the underpinning dynamical energy analysis. As a conclusion, a fixed reference state is considered the best choice for dynamic exergy analysis. Even if the fixed reference is generally only contemplated as the simpler selection, and the variable state is often stated to be more accurate without explicit justification, the analytical considerations supporting the adoption of a fixed reference are confirmed by the usefulness and clarity of interpretation of its results. Further discussion is needed to address the conflict between the evidence supporting a fixed reference state and the wide adoption of a fluctuating one. A more robust theoretical framework, including selection criteria for the reference state in dynamical simulations, could push the development of integrated dynamic tools and thus spread exergy analysis for the built environment across common practice.
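
The choice under discussion enters directly through the specific flow exergy, ex = (h - h0) - T0 (s - s0). A minimal Python sketch for indoor air over a summer day, contrasting a fixed reference temperature with one that follows the outdoor air (ideal-gas air properties; the temperatures and daily profile are illustrative assumptions, not the case-study data):

```python
import numpy as np

cp, R = 1005.0, 287.0          # J/(kg K), dry air under the ideal-gas assumption
p, p0 = 101325.0, 101325.0     # indoor and reference pressure, Pa

def flow_exergy(T, T0):
    # Specific flow exergy of air: ex = (h - h0) - T0 (s - s0)
    return cp * (T - T0) - T0 * (cp * np.log(T / T0) - R * np.log(p / p0))

hours = np.arange(24)
T_in = 273.15 + 26.0                                   # constant indoor air, K
T_out = 273.15 + 28.0 + 6.0 * np.sin(2 * np.pi * (hours - 9) / 24)  # hot day

ex_fixed = flow_exergy(T_in, T0=T_out.mean())  # one constant reference state
ex_var = flow_exergy(T_in, T0=T_out)           # reference follows outdoor air
print("fixed ref:", round(float(ex_fixed), 2), "J/kg")
print("variable ref (first hours):", np.round(ex_var[:4], 2))
```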

Keywords: exergy, reference state, dynamic, building

Procedia PDF Downloads 224
24237 Monitoring the Production of Large Composite Structures Using Dielectric Tool Embedded Capacitors

Authors: Galatee Levadoux, Trevor Benson, Chris Worrall

Abstract:

With the rise of public awareness of climate change comes an increasing demand for renewable sources of energy. As a result, the wind power sector is striving to manufacture longer, more efficient and more reliable wind turbine blades. Currently, one of the leading causes of blade failure in service is improper cure of the resin during manufacture. The infusion process creating the main part of the composite blade structure remains a critical step that is yet to be monitored in real time. This stage consists of a viscous resin being drawn into a mould under vacuum, then undergoing a curing reaction until solidification. Successful infusion assumes the resin fills all the voids and cures completely. Given that the electrical properties of the resin change significantly during its solidification, both the filling of the mould and the curing reaction can be followed using dielectrometry. However, industrially available dielectric sensors are currently too small to monitor the entire surface of a wind turbine blade. The aim of the present research project is to scale up the dielectric sensor technology and develop a device able to monitor the manufacturing process of large composite structures, assessing the conformity of the blade before it even comes out of the mould. An array of flat copper wires acting as electrodes is embedded in a polymer matrix fixed in an infusion mould. A multi-frequency analysis from 1 Hz to 10 kHz is performed during the filling of the mould with an epoxy resin and the hardening of the said resin. By following the variations of the complex admittance Y*, the filling of the mould and the curing process are monitored. Results are compared to numerical simulations of the sensor in order to validate a virtual cure-monitoring system. The results obtained by drawing glycerol on top of the copper sensor displayed a linear relation between the wetted length of the sensor and the complex admittance measured. Drawing epoxy resin on top of the sensor and letting it cure at room temperature for 24 hours provided characteristic curves similar to those obtained when conventional interdigitated sensors are used to follow the same reaction. The response from the developed sensor showed the different stages of the polymerization of the resin, validating the geometry of the prototype. The model created and analysed using COMSOL showed that the dielectric cure process can be simulated, so long as sufficiently accurate time- and temperature-dependent material properties can be determined. The model can be used to help design larger sensors suitable for use with full-sized blades. The preliminary results obtained with the sensor prototype indicate that the infusion and curing process of an epoxy resin can be followed with the chosen configuration on a scale of several decimetres. Further work is to be devoted to studying the influence of the sensor geometry and the infusion parameters on the results obtained. Ultimately, the aim is to develop a larger-scale sensor able to monitor the flow and cure of large composite panels industrially.
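
The measured quantity can be pictured with a parallel conductance-capacitance model of the resin between the electrodes. A toy Python sketch of how the complex admittance Y* = G + jωC evolves over the 1 Hz-10 kHz sweep as the resin cures (all material values are placeholders, not measured properties):

```python
import numpy as np

freqs = np.logspace(0, 4, 50)          # 1 Hz to 10 kHz sweep, as in the test
omega = 2 * np.pi * freqs

def admittance(cure):
    # Toy parallel G-C model of the resin between the sensor electrodes:
    # ionic conductance G drops and capacitance C shifts as the resin cures.
    G = 1e-4 * (1 - cure) + 1e-8        # siemens (placeholder values)
    C = 1e-10 * (4 - 2 * cure)          # farads (permittivity decreases)
    return G + 1j * omega * C           # complex admittance Y*

for cure in (0.0, 0.5, 1.0):            # fill/early cure -> gel -> solid
    Y = admittance(cure)
    print(f"cure={cure:.1f}  |Y| at 1 Hz={abs(Y[0]):.2e} S, "
          f"at 10 kHz={abs(Y[-1]):.2e} S")
```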

Keywords: composite manufacture, dielectrometry, epoxy, resin infusion, wind turbine blades

Procedia PDF Downloads 164
24236 Investigating the Demand for Short Shelf-Life Food Products for SME Wholesalers

Authors: Yamini Raju, Parminder S. Kang, Adam Moroz, Ross Clement, Alistair Duffy, Ashley Hopwell

Abstract:

Accurate prediction of fresh produce demand is one of the challenges faced by Small and Medium Enterprise (SME) wholesalers. Current research in this area has focused on a limited number of factors specific to a single product or business type. This paper gives an overview of the current literature on the variability factors used to predict demand and the existing forecasting techniques for short shelf-life products. It then extends this work by adding new factors and investigating whether there is a time lag and a possibility of noise in the orders. It also identifies the most important factors using correlation and Principal Component Analysis (PCA).
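
A minimal illustration of the PCA step on a synthetic factor matrix (the column meanings and data are hypothetical, not the wholesalers' order records):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Hypothetical daily order data: candidate variability factors as columns
# (e.g., weekday, temperature, promotion flag, price, holiday proximity)
X = rng.normal(size=(365, 5))
X[:, 1] += 0.8 * X[:, 0]        # induce correlation between two factors

pca = PCA().fit(StandardScaler().fit_transform(X))
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
# The loadings show which factors dominate each principal component
print("PC1 loadings:", pca.components_[0].round(2))
```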

Keywords: demand forecasting, deteriorating products, food wholesalers, principal component analysis, variability factors

Procedia PDF Downloads 516
24235 Building a Transformative Continuing Professional Development Experience for Educators through a Principle-Based, Technology-Driven Knowledge Building Approach: A Case Study of a Professional Learning Team in Secondary Education

Authors: Melvin Chan, Chew Lee Teo

Abstract:

There has been a growing emphasis on elevating teachers' proficiency and competencies through continuing professional development (CPD) opportunities. In this era of a volatile, uncertain, complex, ambiguous (VUCA) world, teachers are expected to be collaborative designers, critical thinkers and creative builders. However, many CPD structures still revolve around the transmission model, which stands in contradiction to the cultivation of future-ready teachers for the innovative world of emerging technologies. This article puts forward a framing of CPD through a principle-based, technology-driven knowledge building approach grounded in the essence of andragogy and progressive learning theories, where growth is best exemplified through authentic immersion in a social/community experience-based setting. When this Knowledge Building Professional Development Model (KBPDM) was put into operation via a Professional Learning Team (PLT) situated in a secondary school in Singapore, research findings revealed that the intervention led to a fundamental change in the teachers' learning paradigm, henceforth equipping and empowering them successfully in their pedagogical design and practices for a 21st-century classroom experience. The article concludes with the possibility of leveraging learning analytics to deepen the CPD experiences of educators.

Keywords: continuing professional development, knowledge building, learning paradigm, principle-based

Procedia PDF Downloads 128
24234 Social Norms around Adolescent Girls’ Marriage Practices in Ethiopia: A Qualitative Exploration

Authors: Dagmawit Tewahido

Abstract:

Purpose: This qualitative study was conducted to explore social norms around adolescent girls' marriage practices in West Hararghe, Ethiopia, where early marriage is prohibited by law. Methods: Twenty focus group discussions were conducted with married and unmarried adolescent girls, adolescent boys and parents of girls using locally developed vignettes. A total of 32 in-depth interviews were conducted with married and unmarried adolescent girls, husbands of adolescent girls and mothers-in-law. Key informant interviews were conducted with 36 district officials. Data analysis was assisted by the Open Code computer software. The Social Norms Analysis Plot (SNAP) framework developed by CARE guided the development and analysis of the vignettes. A thematic data analysis approach was utilized to summarize the data. Results: Early marriage is seen as a positive phenomenon in our study context, and girls who are not married by the perceived ideal age of 15 are socially sanctioned. Girls are particularly influenced by their peers to marry. Marrying early is considered a chance given by God and a symbol of good luck. The two common types of marriage are decided 1) by the adolescent girl and boy themselves without seeking parental permission ('Jalaa-deemaa', meaning 'to go along'), and 2) by just informing the girl's parents ('Cabsaa', meaning 'to break the culture'). Relatives and marriage brokers also arrange early marriages. Girls usually accept the first marriage proposal regardless of their age. Parents generally tend not to oppose marriage arrangements chosen by their daughters. Conclusions: In the study context, social norms encourage early marriage despite the existence of a law prohibiting marriage before the age of eighteen years. Early marriage commonly happens through consensual arrangements between adolescent girls and boys. Interventions to reduce early marriage need to consider the influence of reference groups on the decision makers for marriages, especially girls' own peers.

Keywords: adolescent girls, social norms, early marriage, Ethiopia

Procedia PDF Downloads 137
24233 Assay of Formulation of Fresh Cheese Using Lemon and Orange Juices as Clotting Agents

Authors: F. Bouchouka, S. Benamara

Abstract:

The present work is an attempt to prepare a fresh cheese using lemon juice and a lemon juice/orange juice mixture as acidifying/clotting agents. A reference cheese was obtained by acidification with commercial vinegar. The analyses performed on the final product (fat, cheese yield, sensory analysis, rheological and bacteriological properties) confirmed the technical feasibility of a natural cheese using lemon juice and/or a lemon juice/orange juice mixture as acidifying/clotting agents. In addition, a general acceptance test led to the cheese sample acidified with lemon juice being selected as the best, compared to the two other samples (lemon juice/orange juice acidification and commercial vinegar acidification).

Keywords: clotting agent, fresh cheese, juice, lemon, orange

Procedia PDF Downloads 248
24232 Artificial Intelligence in Vietnamese Higher Education: Benefits, Challenges and Ethics

Authors: Duong Van Thanh

Abstract:

Artificial Intelligence (AI) has recently become a new trend in higher education systems globally, as well as in Vietnamese higher education. This study explores the benefits of and challenges in applications of AI in two selected universities, i.e., Vietnam National University in Hanoi and the University of Economics in Ho Chi Minh City. In particular, this paper focuses on how the ethics of artificial intelligence have been addressed among faculty members at these two universities. The AI ethical issues include access and inclusion, privacy and security, and transparency and accountability. AI-powered educational technology has the potential to improve access and inclusion for students with disabilities or other learning needs. However, there is a risk that AI-based systems may not be accessible to all students and may even exacerbate existing inequalities. AI applications can be opaque and difficult to understand, making it challenging to hold them accountable for their decisions and actions. It is important to consider the benefits that adopting AI systems brings to institutions, teaching, and learning. It is equally important to recognize the drawbacks of using AI in education and to take the necessary steps to mitigate any negative impact. The results of this study present a critical concern in higher education in Vietnam, where AI systems may be used to make important decisions about students' learning and academic progress. The authors recommend that an AI system in higher education be frequently checked by a human in charge to verify that everything is working as it should, or whether the system needs retraining or adjustments.

Keywords: artificial intelligence, ethics, challenges, vietnam

Procedia PDF Downloads 118
24231 A Green Analytical Curriculum for Renewable STEM Education

Authors: Mian Jiang, Zhenyi Wu

Abstract:

We have incorporated green components into the existing analytical chemistry curriculum with the aim of presenting a more environmentally benign approach in both the teaching laboratory and undergraduate research. These include the use of cheap, sustainable, and market-available materials; minimized waste disposal; replacement of non-aqueous media; and scaled-down sample/reagent consumption. Model incorporations have covered topics in quantitative chemistry as well as instrumental analysis, in lower-division as well as upper-level courses, and research in traditional titration, spectroscopy, electrochemical analysis, and chromatography. The green embedding has made chemistry more relevant to daily life and more application-focused. Our approach has the potential to expand into all STEM fields to create a renewable, high-impact educational experience for undergraduate students.

Keywords: green analytical chemistry, pencil lead, mercury, renewable

Procedia PDF Downloads 337
24230 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
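
To make the workload-placement idea concrete, here is a small Python sketch that exhaustively searches for the cheapest assignment of workloads to providers under toy compute and egress prices (all provider names, tariffs, and workload figures are hypothetical, not from the paper):

```python
from itertools import product

# Hypothetical per-provider prices; real multi-cloud tariffs vary dynamically.
compute_cost = {"cloud_a": 0.09, "cloud_b": 0.11, "cloud_c": 0.08}  # $/CPU-hour
egress_cost = {"cloud_a": 0.12, "cloud_b": 0.09, "cloud_c": 0.11}   # $/GB out

workloads = [  # (name, CPU-hours, GB that must be pulled from the data source)
    ("etl", 120, 500), ("train", 900, 50), ("report", 30, 5),
]
data_home = "cloud_a"  # provider currently holding the source data

def placement_cost(assignment):
    total = 0.0
    for (name, cpu, gb), cloud in zip(workloads, assignment):
        # Egress is paid only when the workload runs away from the data
        transfer = 0.0 if cloud == data_home else gb * egress_cost[data_home]
        total += cpu * compute_cost[cloud] + transfer
    return total

best = min(product(compute_cost, repeat=len(workloads)), key=placement_cost)
print(dict(zip([w[0] for w in workloads], best)),
      round(placement_cost(best), 2))
```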

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 64
24229 Designing a Cricket Team Selection Method Using Super-Efficient DEA and Semi-Variance Approach

Authors: Arnab Adhikari, Adrija Majumdar, Gaurav Gupta, Arnab Bisi

Abstract:

Team formation plays an instrumental role in sports like cricket. The existing literature reveals that most works on player selection focus only on the players' efficiency and ignore consistency. This motivates us to design an improved player selection method based on both a player's efficiency and consistency. For the players' efficiency measurement, we employ a modified data envelopment analysis (DEA) technique, namely the 'super-efficient DEA' model. We design a modified consistency index based on a semi-variance approach. Here, we introduce a new parameter called the 'fitness index' for the consistency computation, to assess a player's fitness level. Finally, we devise a single performance score using both the efficiency score and the consistency score with the help of a linear programming model. To test the robustness of our method, we perform a rigorous numerical analysis to determine the all-time best One Day International (ODI) Cricket XI. Next, we conduct extensive comparative studies of the efficiency scores, consistency scores, and selected teams between the existing methods and the proposed method, and explain the rationale behind the improvement.
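
The semi-variance idea is simple to state in code. A hedged Python sketch with made-up innings data, a plain batting average standing in for the DEA efficiency score, and a fixed weighted sum standing in for the paper's linear program:

```python
import numpy as np

# Hypothetical per-innings performance scores for three players
scores = {
    "player_a": np.array([55, 60, 58, 62, 57], float),   # steady
    "player_b": np.array([10, 120, 5, 110, 50], float),  # explosive but erratic
    "player_c": np.array([40, 45, 80, 20, 75], float),
}

def semi_variance(x):
    # Downside semi-variance: only deviations below the mean are penalised,
    # so a player is not punished for occasional very high scores.
    down = np.minimum(x - x.mean(), 0.0)
    return np.mean(down**2)

for name, x in scores.items():
    eff = x.mean()                          # stand-in for the DEA efficiency
    cons = 1.0 / (1.0 + semi_variance(x))   # higher = more consistent
    perf = 0.5 * eff / 100 + 0.5 * cons     # toy combination, not the paper's LP
    print(f"{name}: efficiency~{eff:.1f}, consistency={cons:.3f}, score={perf:.3f}")
```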

Keywords: decision support systems, sports, super-efficient data envelopment analysis, semi variance approach

Procedia PDF Downloads 395
24228 Chemical Analysis and Cytotoxic Evaluation of Asphodelus aestivus Brot. Flowers

Authors: Mai M. Farid, Mona El-Shabrawy, Sameh R. Hussein, Ahmed Elkhateeb, El-Said S. Abdel-Hameed, Mona M. Marzouk

Abstract:

Asphodelus aestivus Brot. is a wild plant distributed in Egypt and is considered one of the five Asphodelus spp. of the family Asphodelaceae; it grows in dry grasslands and on rocky or sandy soil. The chemical components of the A. aestivus flower extract were analyzed using different chromatographic and spectral techniques, which led to the isolation of two anthraquinones, identified as emodin and emodin-O-glucoside, in addition to five flavonoid compounds: kaempferol, kaempferol-3-O-glucoside, apigenin-6-C-glucoside-7-O-glucoside (saponarine), luteolin-7-O-β-glucopyranoside, and isoorientin-O-malic acid, which is a new compound in nature. The LC-ESI-MS/MS analysis of the flower extract of A. aestivus led to the identification of twenty-two compounds characterized by the presence of flavones, flavonols, and flavone C-glycosides, while GC/MS analysis led to the identification of 24 compounds comprising 98.32% of the oil; the major components of the oil were 9,12,15-octadecatrienoic acid methyl ester (28.72%) and 9,12-octadecadienoic acid (Z,Z)-methyl ester (19.96%). The in vitro cytotoxic activity of the aqueous methanol extract of A. aestivus flowers against HEPG2, HCT-116, MCF-7, and A549 cultures was examined and showed moderate inhibition on the HEPG2 cell line (62.3±1.1)%, followed by (36.8±0.2)% inhibition on HCT-116, and weak inhibition on the MCF-7 cell line (5.7±0.2)%, followed by (4.5±0.4)% inhibition on the A549 cell line; this is considered the first cytotoxic report on A. aestivus flowers.

Keywords: Anthraquinones, Asphodelus aestivus, Cytotoxic activity, Flavonoids, LC-ESI-MS/MS

Procedia PDF Downloads 218
24227 Impact of Keeping Drug-Addicted Mothers and Newborns Together: Enhancing Bonding, Interoception Learning, and Thriving for Newborns with Positive Effects on Attachment and Child Development

Authors: Poteet Frances, Glovinski Ira

Abstract:

INTRODUCTION: The interoceptive nervous system continuously senses chemical and anatomical changes and helps a person recognize, understand, and feel what is going on inside the body; it is therefore important for energy regulation, memory, affect, and sense of self. A newborn needs predictable routines, rather than confusion and chaos, to make connections between internal experiences and emotions. AIM: Current legal protocols of removing babies from drug-addicted mothers impact the critical window of bonding. The newborn's brain is social, and the attachment process, which begins immediately after birth through nourishment, comfort, and protection, influences a child's development. DESCRIPTION: Our project aims to educate drug-addicted mothers and medical, nursing, and social work professionals on interoceptive concepts and practices to sustain the mother/newborn relationship. A mother's interoceptive knowledge predicts children's emotion regulation and social skills in middle childhood. CONCLUSION: When mothers develop an awareness of their inner bodily sensations, they can self-regulate and be emotionally available to co-regulate (support their newborn during distressing emotions and sensations). Our project has enhanced relationship preservation (mothers understand how their presence matters) and the overall mother/newborn connection.

Keywords: drug-addiction, interoception, legal, mothers, newborn, self-regulation

Procedia PDF Downloads 58
24226 Combining Shallow and Deep Unsupervised Machine Learning Techniques to Detect Bad Actors in Complex Datasets

Authors: Jun Ming Moey, Zhiyaun Chen, David Nicholson

Abstract:

Bad actors are often hard to detect in data that imprints their behaviour patterns because they are comparatively rare events embedded in non-bad actor data. An unsupervised machine learning framework is applied here to detect bad actors in financial crime datasets that record millions of transactions undertaken by hundreds of actors (<0.01% bad). Specifically, the framework combines ‘shallow’ (PCA, Isolation Forest) and ‘deep’ (Autoencoder) methods to detect outlier patterns. Detection performance analysis for both the individual methods and their combination is reported.
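
A compact sketch of the shallow-plus-deep combination on synthetic data. Here sklearn's MLPRegressor, trained to reproduce its input, stands in for the autoencoder used in the paper, and score fusion by rank averaging is an assumption for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 10))
X[:5] += 6.0                               # plant a handful of "bad actor" rows
Xs = StandardScaler().fit_transform(X)

# Shallow 1: PCA reconstruction error as an outlier score
pca = PCA(n_components=3).fit(Xs)
err_pca = ((Xs - pca.inverse_transform(pca.transform(Xs))) ** 2).sum(axis=1)

# Shallow 2: Isolation Forest (negated so that higher = more anomalous)
iso = -IsolationForest(random_state=0).fit(Xs).score_samples(Xs)

# "Deep": a small MLP trained to reproduce its input, a crude autoencoder
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=300,
                  random_state=0).fit(Xs, Xs)
err_ae = ((Xs - ae.predict(Xs)) ** 2).sum(axis=1)

# Combine by averaging the rank of each score; report the top suspects
ranks = sum(s.argsort().argsort() for s in (err_pca, iso, err_ae))
print("top suspects:", np.argsort(-ranks)[:5])   # should recover rows 0-4
```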

Keywords: detection, machine learning, deep learning, unsupervised, outlier analysis, data science, fraud, financial crime

Procedia PDF Downloads 92
24225 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches

Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.

Abstract:

A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test here, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after the excitation of the actuators. Any delay or miss in the generation of a response or in the acquisition of excitation pulses may lead to control-loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available on the market may be the best solutions for such simulations, but they pose limitations such as restricted availability of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general-purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed for making the time-critical application run at the highest priority in an uninterrupted manner, reducing network latency for distributed architectures, and handling real-time data acquisition, data storage and retrieval, user interactions, etc.
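
On Linux, a large part of this comes down to scheduling policy and absolute deadlines. A minimal Python sketch of a 1 kHz control loop under SCHED_FIFO (the period, priority, and loop body are illustrative assumptions; real-time policies need root or CAP_SYS_NICE, and a production test bed would add memory locking and CPU isolation):

```python
import os
import time

PERIOD_NS = 1_000_000  # 1 ms control-loop period (illustrative)

# Run the time-critical task under the SCHED_FIFO real-time policy
# (Linux-specific; requires root or CAP_SYS_NICE).
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
except (PermissionError, AttributeError):
    print("warning: real-time priority not available, running best-effort")

deadline = time.monotonic_ns() + PERIOD_NS
misses = 0
for _ in range(1000):
    # ... acquire sensor feedback, compute control output, excite actuators ...
    now = time.monotonic_ns()
    if now > deadline:
        misses += 1                    # deadline overrun: count it, never hide it
    else:
        time.sleep((deadline - now) / 1e9)
    deadline += PERIOD_NS              # absolute deadlines avoid cumulative drift
print("missed deadlines:", misses)
```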

Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency

Procedia PDF Downloads 142
24224 Screening of the Sunflower Genotypes for Drought Stress at Seedling Stage by Polyethylene Glycol under Laboratory Conditions

Authors: Uzma Ayaz, Sanam Bashir, Shahid Iqbal Awan, Muhammad Ilyas, Muhammad Fareed Khan

Abstract:

Drought stress directly affects growth, along with the productivity of plants, by altering plant water status. Sunflower (Helianthus annuus L.), an oilseed crop, is adversely affected by abiotic stresses. The present study was carried out to characterize the genetic variability of seedling and morpho-physiological parameters in different sunflower genotypes under water-stressed conditions. A total of twenty-seven genotypes, including two hybrids, eight advanced lines and seventeen accessions of sunflower (Helianthus annuus L.), were tested against drought stress at the seedling stage using polyethylene glycol (PEG). Significant differences among trait means were established using analysis of variance (ANOVA), whereas correlation and principal component analysis confirmed that germination percentage, root length, shoot length, chlorophyll content and stomatal frequency are positively linked with each other; hence, these traits were responsible for most of the variation among genotypes. The cluster analysis identified the genotypes Ausun, line-3, line-2, 17578, line-1, line-7, line-6 and 17562 as the most diverse among all the genotypes. These most divergent genotypes could be utilized in the development of drought-tolerant inbred lines, which could subsequently be used in future heterosis breeding programs.
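
The clustering step can be reproduced with standard tools. A hedged Python sketch on a synthetic genotype-by-trait matrix (the paper's data and linkage settings are not given, so Ward linkage and a four-cluster cut are assumptions):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import zscore

rng = np.random.default_rng(4)
# Hypothetical trait matrix: 27 genotypes x 5 seedling traits
# (germination %, root length, shoot length, chlorophyll, stomatal frequency)
traits = rng.normal(size=(27, 5))

Z = linkage(zscore(traits, axis=0), method="ward")  # Ward's minimum variance
groups = fcluster(Z, t=4, criterion="maxclust")     # cut into 4 clusters
for g in np.unique(groups):
    print(f"cluster {g}: genotypes {np.where(groups == g)[0].tolist()}")
```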

Keywords: sunflower, drought stress, polyethylene glycol, screening

Procedia PDF Downloads 118
24223 Crops Cold Stress Alleviation by Silicon: Application on Turfgrass

Authors: Taoufik Bettaieb, Sihem Soufi

Abstract:

As a bioactive metalloid, silicon (Si) is an essential element for plant growth and development. It also plays a crucial role in enhancing plants' resilience to different abiotic and biotic stresses. The morpho-physiological, biochemical, and molecular background of Si-mediated stress tolerance in plants has been unraveled. Cold stress is a severe abiotic stress that decreases plant growth and yield by affecting various physiological activities in plants. Several approaches have been used to alleviate the adverse effects generated by cold stress exposure, but a cost-effective, environmentally friendly, and defensible approach is the supply of silicon, which has the ability to neutralize the harmful impacts of cold stress. Therefore, based on these hypotheses, this study was designed to investigate the morphological and physiological background of the effects of silicon, applied at different concentrations, on cold stress mitigation during the early growth of a turfgrass, namely Paspalum vaginatum Sw. Results show that silicon applied at different concentrations improved the morphological development of Paspalum subjected to cold stress. It is also effective on the photosynthetic apparatus, maintaining the stability of the photochemical efficiency. As the primary component of cellular membranes, lipids play a critical role in maintaining the structural integrity of plant cells; silicon application decreased membrane lipid peroxidation and kept the membrane frontline barrier relatively stable under cold stress.

Keywords: crops, cold stress, silicon, abiotic stress

Procedia PDF Downloads 120
24221 Examining How Teachers' Backgrounds and Perceptions of Technology Use Influence Students' Achievements

Authors: Zhidong Zhang, Amanda Resendez

Abstract:

This study examines how teachers' perspectives on educational technology use in their classes influence their students' achievement. The authors hypothesized that teachers' perspectives can directly or indirectly influence students' learning, performance, and achievements. In this study, a questionnaire entitled Teacher's Perspective on Educational Technology was delivered to 63 teachers, and the mathematics and reading achievement records of 1268 students were collected. The questionnaire consists of four parts: a) demographic variables, b) attitudes on technology integration, c) outside factors affecting technology integration, and d) technology use in the classroom. Kruskal-Wallis and hierarchical regression analysis techniques were used to examine: 1) the relationship between the demographic variables and teachers' perspectives on educational technology, and 2) how the demographic variables were causally related to students' mathematics and reading achievements. The study found that teacher demographics were significantly related to the teachers' perspective on educational technology, with p < 0.05 and p < 0.01, respectively. These teacher demographic variables included the school district, age, gender, the grade currently taught, teaching experience, and proficiency in using new technology. Further, these variables significantly predicted students' mathematics and reading achievements, with p < 0.05 and p < 0.01, respectively. The values of R² range between 0.176 and 0.467, which means that up to 46.7% of the variance in a given analysis can be explained by the model.
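
Both techniques are available off the shelf. A hedged Python sketch with synthetic teacher-level data (the group definitions, scales, and two-step model are illustrative assumptions, not the study's variables):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import kruskal

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "experience": rng.choice([0, 1, 2], 63),   # banded teaching experience
    "perception": rng.normal(3.5, 0.8, 63),    # questionnaire scale score
    "class_math": rng.normal(70, 8, 63),       # mean class achievement
})

# Kruskal-Wallis: does perception differ across experience groups?
groups = [g["perception"].values for _, g in df.groupby("experience")]
print(kruskal(*groups))

# Hierarchical regression: add perception after the demographic block
# and inspect the change in R-squared
m1 = sm.OLS(df["class_math"], sm.add_constant(df[["experience"]])).fit()
m2 = sm.OLS(df["class_math"],
            sm.add_constant(df[["experience", "perception"]])).fit()
print(f"R2 step1={m1.rsquared:.3f}, step2={m2.rsquared:.3f}, "
      f"delta={m2.rsquared - m1.rsquared:.3f}")
```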

Keywords: teacher's perception of technology use, mathematics achievement, reading achievement, Kruskal-Wallis test, hierarchical regression analysis

Procedia PDF Downloads 126
24221 Removal and/or Recovery of Phosphates by Precipitation as Ferric Phosphate from the Effluent of a Municipal Wastewater Treatment Plant

Authors: Kyriaki Kalaitzidou, Athanasia Tolkou, Christina Raptopoulou, Manassis Mitrakas, Anastasios Zouboulis

Abstract:

Phosphate rock is the main source of phosphorus (P) in fertilizers and is essential for high crop yields in agriculture; currently, it is considered a critical element facing scarcity. Chemical precipitation, which is a commonly used method of phosphorus removal from wastewaters, finds its significance in that phosphates may be precipitated in appropriate chemical forms that can be reused/recovered. Most often, phosphorus is removed from wastewaters in the form of insoluble phosphate salts, by using salts (coagulants) of multivalent metal ions, most frequently iron, aluminum, calcium, or magnesium. The removal degree is affected by various factors, such as pH, chemical agent dose, temperature, etc. In this study, phosphate precipitation from the secondary (biologically treated) effluent of a municipal wastewater treatment plant is examined. Using ferric chlorosulfate (FeClSO4), it was attempted to remove and/or recover PO43-. Results showed that the use of Fe3+ can achieve residual concentrations lower than the commonly applied legislation limit for PO43- (i.e., 3 mg PO43-/L) by adding 7.5 mg/L Fe3+ to the secondary effluent, with an initial concentration of about 10 mg PO43-/L, in the pH range of 6 to 9. In addition, the formed sediment has a PO43- content of almost 24%. Therefore, simultaneous removal and recovery of PO43- as ferric phosphate can be achieved, making it possible for the ferric phosphate to be re-used as a possible (secondary) fertilizer source.
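
As a worked example, a quick stoichiometric check of the quoted dose (molar masses are standard values; the interpretation in the final comment is ours, not the authors'):

```python
# Back-of-the-envelope molar check of the reported dose (figures from the abstract)
M_Fe, M_PO4 = 55.85, 94.97           # g/mol
fe_dose = 7.5 / M_Fe                  # mmol/L of Fe3+ added
po4_removed = (10.0 - 3.0) / M_PO4    # mmol/L of PO4 removed (10 -> 3 mg/L)
print(f"Fe:PO4 molar ratio = {fe_dose / po4_removed:.2f}")  # ~1.8, Fe in excess
# FePO4 precipitation is 1:1, so the surplus Fe3+ presumably forms hydroxides
# or compensates for competing reactions (our reading, not stated in the paper).
```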

Keywords: ferric phosphate, phosphorus recovery, phosphorus removal, wastewater treatment

Procedia PDF Downloads 480
24220 Mobile Assembly of Electric Vehicles: Decentralized, Low-Invest and Flexible

Authors: Achim Kampker, Kai Kreiskoether, Johannes Wagner, Sarah Fluchs

Abstract:

The growing speed of innovation in related industries requires the automotive industry to adapt and to increase the release frequency of new vehicle derivatives, which implies a significant reduction of investments per vehicle and of ramp-up times. Emerging markets in various parts of the world augment the currently dominating, established main automotive markets. Local content requirements, such as import tariffs on final products, impede the accessibility of these micro markets, which is why, in the future, market exploitation will no longer be driven by pure sales activities but rather by setting up local assembly units. The aim of this paper is to provide an overview of the concept of decentralized assembly and to discuss and critically assess some currently researched and crucial approaches in production technology. In order to determine the scope in which complementary mobile assembly can be profitable for manufacturers, a general cost model is set up and each cost driver is assessed with respect to varying levels of decentralization. One main result of the paper is that the presented approaches offer huge cost-saving potentials and are thus critical for future production strategies. Nevertheless, they still need to be further exploited in order for decentralized assembly to be profitable for companies. The optimal level of decentralization must, however, be specifically determined in each case and cannot be defined in general.
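
To illustrate the general shape of such a cost model (not the paper's actual drivers or figures; every number and functional form below is hypothetical), a few lines of Python show how amortised investment, logistics, and labour terms can trade off against the number of decentralized sites:

```python
# Hypothetical per-vehicle cost model versus the degree of decentralization.
def cost_per_vehicle(n_sites, total_volume,
                     invest_per_site=2e6,      # assembly-unit invest, $
                     logistics_base=900.0,     # outbound logistics at 1 site, $
                     labour_base=1500.0,       # labour at 1 central site, $
                     labour_factor=0.9):       # assumed local-labour effect
    invest = invest_per_site * n_sites / total_volume   # amortised invest
    logistics = logistics_base / n_sites                # closer to customers
    labour = labour_base * labour_factor ** (n_sites - 1)
    return invest + logistics + labour

for n in (1, 2, 5, 10):
    print(n, "sites:", round(cost_per_vehicle(n, total_volume=20_000), 2), "$")
```

The point of the exercise is only the structure: one term grows with decentralization while others shrink, so an interior optimum exists and must be located case by case, as the abstract concludes.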

Keywords: automotive assembly, e-mobility, production technology, release capability, small series assembly

Procedia PDF Downloads 197
24219 Isolate-Specific Variations among Clinical Isolates of Brucella Identified by Whole-Genome Sequencing, Bioinformatics and Comparative Genomics

Authors: Abu S. Mustafa, Mohammad W. Khan, Faraz Shaheed Khan, Nazima Habibi

Abstract:

Brucellosis is a zoonotic disease of worldwide prevalence. There are at least four species and several strains of Brucella that cause human disease. Brucella genomes have very limited variation across strains, which hinders strain identification using classical molecular techniques, including PCR and 16S rDNA sequencing. The aim of this study was to perform whole-genome sequencing of clinical isolates of Brucella and to perform bioinformatics and comparative genomics analyses to determine the existence of genetic differences across the isolates of a single Brucella species and strain. The draft sequence data were generated from 15 clinical isolates of Brucella melitensis (biovar 2 strain 63/9) using the MiSeq next-generation sequencing platform. The generated reads were used for further assembly and analysis. All the analysis was performed on a bioinformatics workstation (8-core i7 processor, 8 GB RAM, Bio-Linux operating system). FastQC was used to determine the quality of reads, and low-quality reads were trimmed or eliminated using Fastx_trimmer. Assembly was done using the Velvet and ABySS software. The ordering of assembled contigs was performed by Mauve. The online server RAST was employed to annotate the assembled contigs. Annotated genomes were compared using the Mauve and ACT tools. The QC score for the DNA sequence data generated by MiSeq was higher than 30 for 80% of reads, with more than 100x coverage, which suggested that the data could be utilized for further analysis. However, when analyzed by FastQC, the quality of four read sets was not good enough for creating a complete draft genome, so the remaining 11 samples were used for further analysis. The comparative genome analyses showed that, despite sharing the same gene sets, single nucleotide polymorphisms and insertions/deletions existed across the different genomes, which provided a variable extent of diversity to these bacteria. In conclusion, next-generation sequencing, bioinformatics, and comparative genome analysis can be utilized to find variations (point mutations, insertions and deletions) across different genomes of Brucella within a single strain. This information could be useful in surveillance and epidemiological studies. This work was supported by Kuwait University Research Sector grants MI04/15 and SRUL02/13.
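
The quoted QC criteria (mean Phred quality above 30 for most reads, coverage above 100x) can be checked with a few lines of plain Python. The file name and the ~3.3 Mb Brucella genome size used for the coverage estimate are assumptions here:

```python
# Minimal FASTQ quality check mirroring the criteria quoted in the abstract.
GENOME_SIZE = 3.3e6  # approximate Brucella genome size in bp (assumption)

def fastq_stats(path, q_threshold=30):
    n_reads = n_good = n_bases = 0
    with open(path) as fh:
        while fh.readline():                  # @header line
            fh.readline()                     # sequence
            fh.readline()                     # + separator
            qual = fh.readline().strip()      # quality string
            if not qual:
                break
            phred = [ord(c) - 33 for c in qual]   # Sanger/Illumina 1.8+ offset
            n_reads += 1
            n_bases += len(qual)
            if sum(phred) / len(phred) >= q_threshold:
                n_good += 1
    return n_good / n_reads, n_bases / GENOME_SIZE

# frac_q30, coverage = fastq_stats("isolate_01_R1.fastq")  # hypothetical file
# print(f"reads with mean Q>=30: {frac_q30:.0%}, coverage: {coverage:.0f}x")
```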

Keywords: brucella, bioinformatics, comparative genomics, whole genome sequencing

Procedia PDF Downloads 376
24218 Trends, Status, and Future Directions of Artificial Intelligence in Human Resources Disciplines: A Bibliometric Analysis

Authors: Gertrude I. Hewapathirana, Loi A. Nguyen, Mohammed M. Mostafa

Abstract:

Artificial intelligence (AI) technologies and tools are swiftly being integrated into many functions of all organizations as a competitive drive to enhance innovation, productivity, efficiency, and faster and more precise decision making, in order to keep up with rapid changes in the global business arena. Despite increasing research on AI technologies in production, manufacturing, and information management, AI in the human resource disciplines is still lagging. Though there are a few research studies on HR informatics, recruitment, and HRM in general, how to integrate AI into other HR functional disciplines (e.g., compensation, training, mentoring and coaching, employee motivation) is rarely researched. Many inconsistencies in the research hinder the development of up-to-date knowledge on AI in the HR disciplines. Therefore, exploring eight research questions using bibliometric network analysis combined with a meta-analysis of the published research literature, the authors attempt to generate knowledge on the role of AI in improving the efficiency of HR functional disciplines. To advance this knowledge for the benefit of researchers, academics, policymakers, and practitioners, the study highlights the types of AI innovations and outcomes, trends, gaps, themes and topics, fast-moving disciplines, key players, and future directions. AI in HR informatics in high-tech firms is the dominant theme in many research publications. While there is increasing attention from researchers and practitioners, there are many gaps between the promise, the potential, and the real AI applications in the HR disciplines. A large knowledge gap raises many unanswered questions regarding the legal, ethical, and moral aspects of AI in the HR disciplines, as well as the potential contributions of AI in the HR disciplines, which may guide future research directions. Though the study provides the most current knowledge, it is limited to peer-reviewed empirical, theoretical, and conceptual research publications stored in the WoS database. The implications for theory, practice, and future research are discussed.
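
The core object of such a bibliometric network analysis is a keyword co-occurrence graph. A small Python sketch (the records are made up; a real study would parse Web of Science export files):

```python
import itertools
import networkx as nx

# Keyword lists, one per publication record (hypothetical examples)
records = [
    ["artificial intelligence", "recruitment", "HR informatics"],
    ["artificial intelligence", "training", "ethics"],
    ["HR informatics", "recruitment", "high tech"],
]

G = nx.Graph()
for kws in records:
    # Each pair of keywords appearing in the same record gains edge weight
    for a, b in itertools.combinations(sorted(set(kws)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Degree centrality highlights the dominant themes in the corpus
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```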

Keywords: artificial intelligence, human resources, bibliometric analysis, research directions

Procedia PDF Downloads 95
24217 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems

Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu

Abstract:

In the research of automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method performs excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
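
A toy Python illustration of the dual-retrieval step, where the generator would receive both free-text passages and structured triples around the entities named in the query (the paper's GNN re-ranking and LLM generation are stubbed out; keyword matching stands in for dense retrieval):

```python
import networkx as nx

# A tiny stand-in knowledge graph with typed relations (example triples)
kg = nx.DiGraph()
kg.add_edge("RAG", "hallucination", relation="reduces")
kg.add_edge("knowledge graph", "entities", relation="stores")

documents = {
    "doc1": "Retrieval-augmented generation grounds answers in retrieved text.",
    "doc2": "Knowledge graphs store entities and typed relations.",
}

def retrieve(query):
    q = query.lower()
    # Text side: naive keyword match (a real system would use dense vectors)
    hits = [d for d, text in documents.items()
            if any(w in text.lower() for w in q.split())]
    # Graph side: pull every triple whose endpoints are named in the query
    triples = [(u, kg[u][v]["relation"], v)
               for u, v in kg.edges
               if u.lower() in q or v.lower() in q]
    return hits, triples   # both are handed to the answer generator

print(retrieve("Does RAG reduce hallucination?"))
```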

Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP

Procedia PDF Downloads 32
24216 Generalized Limit Equilibrium Solution for the Lateral Pile Capacity Problem

Authors: Tomer Gans-Or, Shmulik Pinkert

Abstract:

The determination of lateral pile capacity per unit length is a key aspect of geotechnical engineering. Traditional approaches for assessing lateral pile capacity in cohesive soils involve the application of the upper-bound and lower-bound plasticity theorems. However, a comprehensive solution encompassing the entire spectrum of soil strength parameters, particularly in frictional soils with or without cohesion, is still lacking. This research introduces an innovative implementation of the slice-method limit equilibrium solution for lateral capacity assessment. For any given numerical discretization of the soil domain around the pile, the lateral capacity evaluation is based on the mobilized strength concept. The critical failure geometry is then found by a unique optimization procedure which includes both factor-of-safety minimization and geometrical optimization. The robustness of the suggested methodology lies in the fact that the solution is independent of any predefined assumptions. Validation of the solution is accomplished through comparison with established plasticity solutions for cohesive soils. Furthermore, the study demonstrates the applicability of the limit equilibrium method to unresolved cases involving frictional and cohesive-frictional soils. Beyond providing capacity values, the method enables the utilization of the mobilized strength concept to generate safety-factor distributions for scenarios representing pre-failure states.
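
The search pattern described here (evaluate a factor of safety for each candidate failure geometry, then minimise over the geometry) can be shown on a much simpler limit-equilibrium problem. The Python sketch below optimises a planar trial surface for a vertical cut in cohesive-frictional soil, a classical Culmann-type wedge rather than the paper's pile problem, purely to illustrate the geometry-optimisation step:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy soil and geometry parameters (illustrative values)
c, phi = 10e3, np.deg2rad(30)    # cohesion (Pa) and friction angle (rad)
gamma, H = 18e3, 6.0             # unit weight (N/m3) and cut height (m)

def fos(theta):
    # Planar trial surface inclined at theta through the toe of the cut
    L = H / np.sin(theta)                    # length of the trial plane
    W = 0.5 * gamma * H**2 / np.tan(theta)   # weight of the sliding wedge
    driving = W * np.sin(theta)              # force along the plane
    resisting = c * L + W * np.cos(theta) * np.tan(phi)
    return resisting / driving               # factor of safety for this geometry

# Geometrical optimisation: the critical surface minimises the FoS
res = minimize_scalar(fos, bounds=(np.deg2rad(20), np.deg2rad(85)),
                      method="bounded")
print(f"critical plane: {np.rad2deg(res.x):.1f} deg, min FoS = {res.fun:.2f}")
```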

Keywords: lateral pile capacity, slice method, limit equilibrium, mobilized strength

Procedia PDF Downloads 59