Search results for: repeated measures MANOVA

9 Top Skills That Build Cultures at Organizations

Authors: Priyanka Botny Srinath, Alessandro Suglia, Mel McKendrick

Abstract:

Background: Organizational cultural studies integrate sociology and anthropology, portraying man as a creator of symbols, languages, beliefs, and ideologies, essentially a creator and manager of meaning. In our research, we leverage analytical measures to discern whether an organization embodies a singular culture or a myriad of subcultures. Fast-forwarding to 2023, our research thesis focuses on digitally measuring culture, coining the term "Work Culture Quotient." This entails conceptually mapping common experiential patterns to provide executives with insights into the digital organization journey, aiding in understanding their current position and identifying future steps. Objectives: to identify the new-age skills that help define culture; to understand the implications of post-COVID effects; and to derive a digital framework for measuring skill sets. Method: We conducted two comprehensive Delphi studies to distill essential insights. Delphi 1: Through a thematic analysis of interviews with 20 high-level leaders representing companies across diverse regions (India, Japan, the US, Canada, Morocco, and Uganda), we identified 20 key skills critical for cultivating a robust organizational culture. The skills are: influence, self-confidence, optimism, empathy, leadership, collaboration and cooperation, developing others, commitment, innovativeness, leveraging diversity, change management, team capabilities, self-control, digital communication, emotional awareness, team bonding, communication, problem solving, adaptability, and trustworthiness. Delphi 2: Subject matter experts were asked to complete a questionnaire derived from the thematic analysis in stage 1 to formalise themes and draw consensus amongst experts on the most important workplace skills. Results: The thematic analysis identified 20 workplace employee skills, all of which were included in the Delphi round 2 questionnaire. We analysed the outputs in RStudio to assess agreement and consensus, using a sum-of-squares method to compare levels of agreement and extract themes at a threshold of 80% agreement. This yielded three themes at over 80% agreement (leadership, collaboration and cooperation, communication) and three further themes at over 60% agreement (commitment, empathy, trustworthiness). From this, we selected five questionnaires to be included in the primary data collection phase; these will be paired with digital footprints to provide a workplace culture quotient. Implications: The findings from these studies bear profound implications for decision-makers, revolutionizing their comprehension of organizational culture. Tackling the challenge of mapping the digital organization journey involves innovative methodologies that probe not only external landscapes but also internal cultural dynamics. This holistic approach furnishes decision-makers with a nuanced understanding of their organizational culture and highlights the pivotal skills for employee growth. This clarity enables informed choices resonating with the organization's unique cultural fabric. Anticipated outcomes transcend mere individual cultural measurements, aligning with organizational goals to unveil a comprehensive view of culture, exposing its artifacts and depth.
Armed with this profound understanding, decision-makers gain tangible evidence for informed decision-making, strategically leveraging cultural strengths to cultivate an environment conducive to growth, innovation, and enduring success, ultimately leading to measurable outcomes.
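For readers unfamiliar with the consensus step described above, the following is a minimal Python sketch (not the authors' RStudio code) of how per-skill agreement can be computed from Delphi round-2 ratings and filtered at the 80% and 60% thresholds; the expert ratings shown are hypothetical.

```python
# Illustrative sketch (not the authors' RStudio analysis): computing percentage
# agreement per skill from Delphi round-2 ratings and applying the 80% / 60%
# consensus thresholds described in the abstract. Ratings are hypothetical.
import pandas as pd

# Hypothetical expert ratings: 1 = "essential for workplace culture", 0 = not.
ratings = pd.DataFrame({
    "leadership":                [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    "collaboration_cooperation": [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    "communication":             [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "commitment":                [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],
    "empathy":                   [1, 0, 1, 1, 1, 0, 1, 1, 0, 1],
    "trustworthiness":           [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
})

agreement = ratings.mean() * 100  # percentage of experts endorsing each skill

strong_consensus = agreement[agreement >= 80].index.tolist()                 # >80% agreement
moderate_consensus = agreement[(agreement >= 60) & (agreement < 80)].index.tolist()

print("Agreement (%):\n", agreement.round(1))
print("Consensus >= 80%:", strong_consensus)
print("Consensus 60-80%:", moderate_consensus)
```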

Keywords: leadership, cooperation, collaboration, teamwork, work culture

Procedia PDF Downloads 18
8 Effectiveness of Peer Reproductive Health Education Program in Improving Knowledge, Attitude, and Use of Health Services of High School Adolescent Girls in Eritrea in 2014

Authors: Ghidey Ghebreyohanes, Eltahir Awad Gasim Khalil, Zemenfes Tsighe, Faiza Ali

Abstract:

Background: Reproductive health (RH) is a state of physical, mental, and social well-being in all matters relating to the reproductive system at all stages of life. In East Africa, including Eritrea, adolescents comprise more than a quarter of the population, and the region holds the highest rates of sexually transmitted diseases, HIV, unwanted pregnancy, and unsafe abortion with its complications. Young girls carry the highest burden of reproductive health problems due to risk-taking behavior, lack of knowledge, peer pressure, physiologic immaturity, and low socioeconomic status. Design: This was a community-based, randomized, controlled, pre-test/post-test intervention study. Setting: Zoba Debub was randomly selected out of the six zobas of Eritrea, and four of its 26 high schools were randomly selected as target schools. Over three-quarters of the population lives on farming. The target population was female students attending grade nine, the majority of whom live in distant villages and walk to school. Study participants (n=165) were randomly selected from each school, and the schools were randomly assigned to one intervention and three control arms. Objectives: This study aimed to assess the effectiveness of peer reproductive health education in improving knowledge, attitude, and health service use of high school adolescent girls in Eritrea. Methods: The protocol was reviewed and approved by the Scientific and Ethics Committees of the Faculty of Nursing Sciences, University of Khartoum. Data were collected using a pre-designed and pretested questionnaire covering reproductive health knowledge, attitude, and practice. Sample size was calculated using a proportion formula (α = 0.01; power = 95%). Measures used were scores and proportions. Descriptive and inferential statistics (t-test and chi-square at α = 0.01, 99% confidence interval) were used to compare pre- and post-intervention scores in SPSS. Seventeen students were selected as peer educators by the school principals and other teachers based on inclusion criteria that included good academic performance and acceptable behavior. Each peer educator taught one group of 8-10 students for two months. One faculty member was selected to supervise the peer educators. The principal investigator conducted the training of trainers and provided supervision and discussion to peer educators every two weeks until the end of the intervention. Results: Following informed consent, 627 students (164 in the intervention and 463 in the control group, a ratio of 1 to 3) were enrolled in the study. The mean age of the total study population was 15.4±1.0 years; the intervention group mean age was 15.3±1.0 years and the control group 15.4±1.0 years, with no difference between arms (p = 0.4). The majority (96%) of the study participants were from the Tigrigna ethnic group. Reproductive health knowledge scores, calculated out of a total of 61 points: intervention group (pretest 6.7%, post-test 33.6%; p = 0.0001); control group (pretest 7.3%, post-test 7.3%; p = 0.92). Proportion difference in attitude, calculated out of 100%: intervention group (pretest 42.3%, post-test 54.7%; p = 0.001); control group (pretest 45%, post-test 44.8%; p = 0.7). Proportion difference in practice, calculated out of 100%: intervention group (pretest 15.4%, post-test 80.4%; p = 0.0001); control group (pretest 16.8%, post-test 16.9%; p = 0.8).
Mothers were cited as the major (>90%) source of reproductive health information. All focus group discussants and most survey participants agreed on the urgent need for reproductive health information and services for adolescent girls. Conclusion: Reproductive health knowledge and use of facilities are poor among adolescent girls in suburban Eritrea. School-based peer reproductive health education is effective and is the best strategy to improve reproductive health knowledge and attitudes.
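As an illustration of the reported pre/post comparisons (which the authors performed in SPSS), the following Python sketch shows an independent-samples t-test on knowledge scores and a chi-square test on service-use proportions, evaluated at α = 0.01; the data are hypothetical, loosely shaped to match the group sizes and percentages above.

```python
# Illustrative sketch of the reported analyses (the authors used SPSS):
# an independent-samples t-test on knowledge scores and a chi-square test
# on pre/post service-use proportions, evaluated at alpha = 0.01.
# All data below are hypothetical stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical post-test knowledge scores (out of 61 points)
intervention_scores = rng.normal(loc=20.5, scale=6.0, size=164)  # ~33.6% of 61
control_scores      = rng.normal(loc=4.5,  scale=2.5, size=463)  # ~7.3% of 61

t_stat, p_val = stats.ttest_ind(intervention_scores, control_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.3g}, significant at alpha=0.01: {p_val < 0.01}")

# Hypothetical counts of girls reporting use of RH services, pre vs post (intervention arm)
#                      used service  did not
contingency = np.array([[ 25, 139],   # pre-test  (~15.4% of 164)
                        [132,  32]])  # post-test (~80.4% of 164)
chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.3g}")
```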

Keywords: reproductive health, adolescent girls, Eritrea, health education

Procedia PDF Downloads 328
7 Non-Thermal Pulsed Plasma Discharge for Contaminants of Emerging Concern Removal in Water

Authors: Davide Palma, Dimitra Papagiannaki, Marco Minella, Manuel Lai, Rita Binetti, Claire Richard

Abstract:

Modern analytical technologies allow us to detect water contaminants at trace and ultra-trace concentrations, highlighting that a large number of organic compounds are not efficiently abated by most wastewater treatment facilities relying on biological processes; these micropollutants are usually referred to as contaminants of emerging concern (CECs). The availability of reliable and effective technologies, able to guarantee the high standards of water quality demanded by legislators worldwide, has therefore become a primary need. In this context, water plasma stands out among developing technologies as it is extremely effective in the abatement of numerous classes of pollutants, cost-effective, and environmentally friendly. In this work, a custom-built non-thermal pulsed plasma discharge generator was used to abate the concentration of selected CECs in water samples. Samples were treated in a 50 mL Pyrex reactor using two different types of plasma discharge, occurring either at the surface of the treated solution or underwater, working with positive polarity. The distance between the tips of the electrodes determined where the discharge formed: underwater when the distance was < 2 mm, at the water surface when the distance was > 2 mm. Peak voltage was in the 100-130 kV range, with typical current values of 20-40 A. The pulse duration was 500 ns, and the discharge frequency could be set manually between 5 and 45 Hz. Treatment of a 100 µM diclofenac solution in MilliQ water, with a pulse frequency of 17 Hz, revealed that the surface discharge was more efficient in degrading diclofenac, which was no longer detectable after 6 minutes of treatment; over 30 minutes were required to obtain the same result with the underwater discharge. These results are explained by the higher rate of H₂O₂ formation (21.80 µmol L⁻¹ min⁻¹ for surface discharge against 1.20 µmol L⁻¹ min⁻¹ for underwater discharge), the larger discharge volume and UV light emission, and the high rates of ozone and NOx production (up to 800 and 1400 ppb, respectively) observed when working with the surface discharge. The surface discharge was then used for the treatment of three selected perfluoroalkyl compounds, namely perfluorooctanoic acid (PFOA), perfluorohexanoic acid (PFHxA), and perfluorooctanesulfonic acid (PFOS), both individually and in mixture, in ultrapure and groundwater matrices with an initial concentration of 1 ppb. In both matrices, PFOS exhibited the fastest degradation, reaching complete removal after 30 min of treatment (degradation rate 0.107 min⁻¹ in ultrapure water and 0.0633 min⁻¹ in groundwater), while the degradation rates of PFOA and PFHxA were around 65% and 80% slower, respectively. Total nitrogen (TN) measurements revealed increases of up to 45 mg L⁻¹ h⁻¹ in water samples treated with the surface discharge, while in analogous samples treated with the underwater discharge the TN increase was 5 to 10 times lower. These results can be explained by the significant NOx concentrations (over 1400 ppb) measured above the reactor operating with the surface discharge; rapid NOx hydrolysis led to nitrate accumulation in the solution, explaining the observed evolution of TN values. Ion chromatography measurements confirmed that the vast majority of TN was in the form of nitrates. In conclusion, the non-thermal pulsed plasma discharge obtained with a custom-built generator was proven to effectively degrade diclofenac in water matrices, confirming the potential interest of this technology for wastewater treatment.
The surface discharge proved more effective in CEC removal due to the higher rates of formation of H₂O₂, ozone, and reactive radical species, and the stronger UV light emission. Furthermore, the nitrate-enriched water obtained after treatment could be an interesting added-value product to be used as fertilizer in agriculture. Acknowledgment: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 765860.
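The degradation rates quoted above (e.g., 0.107 min⁻¹ for PFOS in ultrapure water) are consistent with pseudo-first-order kinetics, C(t) = C0·exp(-kt). The sketch below, using hypothetical concentration-time data, shows how such a rate constant can be estimated by linear regression on ln(C/C0); it is illustrative only and not the authors' analysis.

```python
# Illustrative sketch: estimating a pseudo-first-order degradation rate constant k
# from concentration-time data via linear regression on ln(C/C0). The time points
# and concentrations are hypothetical, chosen to be roughly consistent with the
# ~0.107 min^-1 rate reported for PFOS in ultrapure water.
import numpy as np

t = np.array([0, 5, 10, 15, 20, 30], dtype=float)   # treatment time, min
c = np.array([1.00, 0.58, 0.34, 0.20, 0.12, 0.04])  # concentration, ppb

# ln(C/C0) = -k * t  ->  the slope of the least-squares fit gives -k
slope, intercept = np.polyfit(t, np.log(c / c[0]), 1)
k = -slope
half_life = np.log(2) / k

print(f"k ~ {k:.3f} min^-1, half-life ~ {half_life:.1f} min")
```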

Keywords: CECs removal, nitrogen fixation, non-thermal plasma, water treatment

Procedia PDF Downloads 91
6 Critical Factors for Successful Adoption of Land Value Capture Mechanisms – An Exploratory Study Applied to the Indian Metro Rail Context

Authors: Anjula Negi, Sanjay Gupta

Abstract:

The paradigms studied point to inadequate financial resources, whether to finance metro rail construction, to meet operational costs, or to derive profits in the long term. Sustainable funding remains elusive for much-needed public transport modes such as urban rail or metro rail. India has embarked on a sustainable transport journey and has proposed metro rail systems countrywide. As an emerging economic leader, its fiscal constraints are paramount, and the land value capture (LVC) mechanism provides necessary support and innovation toward development. India’s metro rail policy promotes multiple methods of financing, including private-sector investments and public-private partnerships. The critical question that remains to be addressed is what factors can make such mechanisms work. Globally, many researchers regard urban rail as the future of mobility. In this study, the researchers examine, by way of literature review and empirical assessment, the factors that can lead to the adoption of LVC mechanisms. The adoption of LVC methods is understood to be at a nascent stage in India. Research points to numerous challenges faced by metro rail agencies in raising funding and capturing incremental value. Issues pertaining to land-based financing include, inter alia, long-term financing, inter-institutional coordination, economic/market suitability, dedicated metro funds, land ownership issues, a piecemeal approach to real estate development, and property development legal frameworks. The question under probe is what parameters can lead to success in the adoption of land value capture (LVC) as a financing mechanism. This research provides insights into key parameters crucial to the adoption of LVC in the context of Indian metro rails. The researchers studied the current forms of LVC mechanisms at various metro rails across the country. This study is significant because little research is available on the adoption of LVC applicable to the Indian context. Transit agencies, state governments, urban local bodies, policy makers and think tanks, academia, developers, funders, researchers, and multilateral agencies may benefit from this research in taking LVC mechanisms forward in practice. The study deems it imperative to explore and understand the key parameters that impact the adoption of LVC. An extensive literature review and ratification by experts working in the metro rail arena were undertaken to arrive at the parameters for the study. Stakeholder consultations were undertaken in an exploratory factor analysis (EFA) process for principal component extraction. Forty-three seasoned and specialized experts, representing various types of stakeholders, participated in a semi-structured questionnaire to scale the maximum likelihood on each parameter. Empirical data were collected on the eighteen chosen parameters, and significant correlations were extracted for descriptive and inferential statistics. The study findings reveal the principal components to be the institutional governance framework, spatial planning features, legal frameworks, funding sustainability features, and fiscal policy measures. In particular, funding sustainability features highlight the sub-variables of beneficiaries paying and the use of multiple revenue options as key to success in LVC adoption. The researchers recommend incorporating these variables at an early stage in design and project structuring for success in the adoption of LVC.
This, in turn, can improve the revenue sustainability of a public transport asset and help in making informed transport policy decisions.
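As a minimal illustration of the principal-component extraction step described above, the sketch below runs a PCA on a hypothetical 43-expert by 18-parameter rating matrix and applies the common eigenvalue-greater-than-one retention heuristic; the data, retention rule, and settings are assumptions for demonstration, not the authors' exact EFA specification.

```python
# Illustrative sketch of principal-component extraction on a hypothetical
# 43-expert x 18-parameter rating matrix (not the authors' EFA specification).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_experts, n_params = 43, 18
ratings = rng.integers(1, 6, size=(n_experts, n_params)).astype(float)  # 1-5 Likert scores

X = StandardScaler().fit_transform(ratings)  # standardize before extraction

pca = PCA()
pca.fit(X)

# Retain components with eigenvalue > 1 (Kaiser criterion), a common EFA heuristic
eigenvalues = pca.explained_variance_
retained = int(np.sum(eigenvalues > 1.0))
print("Eigenvalues:", np.round(eigenvalues, 2))
print(f"Components retained (eigenvalue > 1): {retained}")
print("Cumulative variance explained:",
      np.round(np.cumsum(pca.explained_variance_ratio_)[:retained], 2))
```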

Keywords: exploratory factor analysis, land value capture mechanism, financing metro rails, revenue sustainability, transport policy

Procedia PDF Downloads 54
5 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi-Arid Catchments, Ras Baalbeck and Fekha”

Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid

Abstract:

Floods are a common natural disaster in the semi-arid regions of Lebanon, resulting in damage to human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers, mainly because of poor understanding of the processes involved and the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the east catchments of El Assi River. This includes developing rainstorm time-distribution curves that are unique to this type of study region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and the flood frequency affecting nearby villages in Ras Baalbeck and Fekha. This paper discusses different levels of integration between GIS and hydrological models (HEC-HMS and HEC-RAS) and presents a case study covering all the tasks of creating model input, editing data, running the model, and displaying output results. The study area corresponds to the east basin (Ras Baalbeck and Fekha), comprising nearly 350 km² and situated in the Bekaa Valley of Lebanon. The case study database is derived from Lebanese Army topographic maps of the region; ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, and a digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Aarsal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment, and to combine ArcGIS/ArcMap, HEC-HMS, and HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and the SCS method were chosen to build the hydrologic model of the watershed. The model was then calibrated using the flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme because of the duration of the flows (15 hours) and the fact that it covered both the Aarsal and Ras Baalbeck watersheds; the strongest flood reported in recent times lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to assess flood hazard maps for the region. The HEC-RAS model is used for this purpose, and field trips to the catchments were undertaken in order to calibrate both the hydrologic and hydraulic models. The presented models constitute a flexible procedure for an ungauged watershed: for some storm events they deliver good results, while for others no parameter vectors can be found.
In order to develop a general methodology based on these ideas, further calibration and reconciliation of results regarding the dependence of flood-event parameters on catchment properties are required.
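For context on the SCS method named above, the sketch below implements the standard SCS curve-number rainfall-runoff relation used by HEC-HMS; the rainfall depth and curve number are hypothetical, and this is not the authors' calibrated model.

```python
# Illustrative sketch of the SCS curve-number rainfall-runoff relation used by
# HEC-HMS (not the authors' calibrated model). Rainfall depth and CN are hypothetical.
def scs_runoff_mm(rainfall_mm: float, curve_number: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from rainfall P (mm) using the SCS-CN method.

    S  = 25400 / CN - 254          (potential maximum retention, mm)
    Ia = ia_ratio * S              (initial abstraction, commonly 0.2 * S)
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    s = 25400.0 / curve_number - 254.0
    ia = ia_ratio * s
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Example: a 60 mm storm over a semi-arid catchment with CN = 85
print(f"Runoff depth: {scs_runoff_mm(60.0, 85.0):.1f} mm")
```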

Keywords: flood risk management, flash flood, semi-arid region, El Assi River, hazard maps

Procedia PDF Downloads 456
4 The Reasons for Food Losses and Waste and the Trends of Their Management in Basic Vegetal Production in Poland

Authors: Krystian Szczepanski, Sylwia Łaba

Abstract:

The production of fruit and vegetables, food cereals, and oilseeds affects the natural environment through the uptake of nutrients contained in the soil and the use of water, fertilizers, plant protection products, and energy. Limiting these effects requires the introduction of cultivation techniques and methods that are friendly to the environment, counteracting losses and waste of agricultural raw materials, and appropriate management of food waste at every stage of the agri-food supply chain. The basic production link includes obtaining the vegetal raw material, its storage on the agricultural farm, and its transport to a collecting point. The initial point is when the plants are ready to be harvested; the stage before harvesting is not considered in the system of measuring and monitoring food losses. The moment at which the raw material enters the stage of processing, i.e., its receipt at the gate of the processing plant, is considered the final point of basic production; processing is understood as the transformation of the raw material into food products. According to Regulation (EC) No 178/2002 of the European Parliament and of the Council of 28 January 2002, Art. 2, “food” means any substance or product intended to be, or reasonably expected to be, consumed by humans. For the purposes of these studies, raw material is considered food from the moment the harvested plants (fruit, vegetables, cereals, oilseeds) arrive at storehouses. The aim of the studies was to determine the reasons for loss generation and to analyse the directions of their management in basic vegetal production in Poland in the years 2017 and 2018. The studies on food losses and waste in basic vegetal production were carried out in three sectors: fruit and vegetables, cereals, and oilseeds. The studies of basic production were conducted during March-May 2019 across the whole country on a representative sample of 250 farms in each sector. The surveys were carried out using questionnaires administered by the PAPI (Paper and Pen Personal Interview) method, with pollsters conducting direct questionnaire interviews. The studies show that in 19% of the examined farms no losses were recorded during preparation, loading, and transport of the raw material to the manufacturing plant. In the farms where losses were indicated, the main reason in the production of fruit and vegetables was rotting, constituting more than 20% of the reported reasons, while in the case of cereal and oilseed production the respondents identified damage, moisture, and pests as the most frequent reasons. The losses and waste generated in vegetal production, as well as in the processing and trade of fruit, vegetables, and cereal products, should be appropriately managed or recovered. The respondents indicated composting (more than 60%) as the main direction of waste management in all categories; animal feed and landfill were the other indicated directions. Prevention and minimization of losses are important at every stage of production, including basic production. With knowledge of the reasons for loss generation, preventive measures can be introduced, mainly connected with appropriate storage conditions and methods.
Acknowledgement: The article was prepared within the project "Development of a waste food monitoring system and an effective program to rationalize losses and reduce food waste" (acronym PROM), implemented under the GOSPOSTRATEG strategic scientific and learning programme, financed by the National Centre for Research and Development in accordance with the provisions of Gospostrateg1/385753/1/2018.

Keywords: food losses, food waste, PAPI method, vegetal production

Procedia PDF Downloads 87
3 Advancing Dialysis Care Access and Health Information Management: A Blueprint for Nairobi Hospital

Authors: Kimberly Winnie Achieng Otieno

Abstract:

Nairobi Hospital plays a pivotal role in healthcare provision in East and Central Africa, yet it faces challenges in providing accessible dialysis care and managing health information efficiently. This paper explores strategic interventions to enhance dialysis care access and streamline health information management, fostering an integrated and patient-centered healthcare system. Challenges at Nairobi Hospital: The Nairobi Hospital currently grapples with insufficient dialysis machines, resulting in extended turnaround times between dialysis sessions for patients. This issue stems from both staffing bottlenecks and infrastructural limitations, given the growing demand for renal care services. Paper-based records and fragmented information systems hinder the hospital’s ability to manage health data effectively, and the lack of integration between the hospital’s systems and those of other facilities jeopardizes access to patient care and hinders collaborative efforts within the healthcare network. Given the high number of new cases of chronic kidney disease, investment in expanding Nairobi Hospital’s dialysis facilities into communities is crucial. Setting up satellite clinics closer to people who live in areas far from the main hospital will ensure better access; this includes acquiring physical space within the greater Nairobi region and incorporating mobile dialysis units to reach underserved areas. By decentralizing services, Nairobi Hospital can extend its reach and cater to a larger patient population. Community Outreach and Education: Implementing educational programs on kidney health within local communities is vital for early detection and prevention. Collaborating with local leaders and organizations can establish a proactive approach to renal health, reducing the demand for acute dialysis interventions; the hospital can amplify this effort by expanding its corporate social responsibility outreach program. Increasing the hospital’s footprint would also require a matching ramp-up in staff recruitment, and support for continuous training programs will ensure that healthcare providers stay abreast of evolving practices, contributing to improved patient outcomes and service quality. Streamlining Health Information Management: Fully embracing a shift to 100% Electronic Health Records (EHRs) is a transformative step toward efficient health information management. Customizing these systems to Nairobi Hospital’s specific needs allows for seamless data recording, retrieval, and sharing among healthcare professionals, helping the hospital guarantee a continuum of care for patients transferring from other facilities. A full transition to digital records will also pose its own security threats, so robust security measures are needed to protect patient data and build trust. Adherence to healthcare data privacy regulations is non-negotiable, and a comprehensive strategy for encryption, access controls, and regular audits should be implemented. Integrating systems to enable interoperability with other healthcare providers facilitates a cohesive healthcare network; shared information promotes a holistic understanding of patients’ medical history, minimizing redundancies and enhancing overall care quality. Implementation Strategies: To manage the transition to community-based care and EHRs effectively, a phased implementation approach is recommended.
Prioritizing dialysis care improvements at the local level in the initial stages allows the hospital to address immediate patient needs, followed by the integration of health information management changes. Engaging hospital staff, patients, and local communities is paramount. Collaboration with government agencies, non-governmental organizations (NGOs), and international partners enhances support and resources for successful implementation. Conclusion: By strategically enhancing dialysis care access and streamlining health information management, Nairobi Hospital can strengthen its position as a leading healthcare institution in East and Central Africa. This comprehensive approach aligns with the hospital’s commitment to providing high-quality, accessible, and patient-centered care in the evolving landscape of healthcare delivery.

Keywords: Africa, urology, dialysis, healthcare

Procedia PDF Downloads 17
2 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
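As a minimal sketch of the model-averaging step described above, the following code trains a logistic regression, a random forest, and a neural network on synthetic data and averages their predicted probabilities (soft voting); it stands in for, and is not, the authors' models trained on the RP4 weather and EPA pollutant datasets.

```python
# Illustrative sketch of the model-averaging step described in the abstract:
# train logistic regression, random forest, and neural network classifiers,
# then average their predicted probabilities. Synthetic data stand in for the
# weather and pollutant features used by the authors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           random_state=0)  # stand-in for weather/pollutant features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
]

probs = []
for model in models:
    model.fit(X_train, y_train)
    probs.append(model.predict_proba(X_test)[:, 1])
    print(type(model).__name__, "accuracy:",
          round(accuracy_score(y_test, model.predict(X_test)), 3))

# Average the three models' probabilities (soft voting) and threshold at 0.5
ensemble_pred = (np.mean(probs, axis=0) >= 0.5).astype(int)
print("Ensemble accuracy:", round(accuracy_score(y_test, ensemble_pred), 3))
```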

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 92
1 The Road Ahead: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Cybersecurity professionals have long been embroiled in a digital arms race, confronting increasingly sophisticated threats with innovative solutions. The field of cybersecurity is in an unending race against malicious adversaries: as threats evolve in complexity, the tools used to defend against them need to advance even faster. Burdened with a vast arsenal of tools and an expansive scope of threat intelligence, analysts frequently navigate a complex web, trying to discern patterns amidst information overload. Herein lies the potential of Retrieval Augmented Generation (RAG). By combining real-time retrieval with the generative capabilities of Large Language Models (LLMs), RAG brings to the table an unparalleled ability for cross-referencing, bridging the gap between raw data and actionable insights. Imagine an analyst named Sarah working at a global Fortune 500 company. Every day, Sarah navigates a maze of diverse knowledge bases, real-time threat intelligence, and her company's vast proprietary data, from network specifics to intricate technical blueprints. One day, she is challenged by a potential breach through a personal device due to the company's global "Bring Your Own Device" policy. With the clock ticking, Sarah has mere minutes to trace the malware's origin, all while considering complex regional regulations. As she races against the benchmark of Mean Time To Resolution (MTTR), she wonders: could "Cozy Bear," with its notorious malware tactic HAMMERTOSS, be behind this? Balancing policy intricacies, global network considerations, and ever-emerging cyber threats, Sarah's role epitomizes the intense challenges faced by today's cybersecurity analysts. While analysts grapple with this array of intricate, time-sensitive challenges, precision and efficiency are key. RAG technology, a cutting-edge advancement in generative AI, is a promising solution. Designed to assimilate diverse data sources such as cyber advisory notices, phishing email sentiment, secure and insecure code examples, information security policy documentation, and the MITRE ATT&CK framework, RAG equips analysts with real-time querying capabilities through a vector database and a concise, cross-referenced response from a Gen AI model. Traditional relational databases often necessitate a tedious process of filtering through numerous entries. With the synergy of vector databases and Gen AI models, analysts can instead rapidly access contextually and semantically similar data points. This augmented approach equips analysts with a comprehensive understanding of the prevailing cyber threats, elevating the robustness of cybersecurity defenses and upskilling the analyst and the team as well. Vector databases underpin knowledge retrieval in Gen AI: they bridge the gap between raw data and meaningful insights, ensuring that analysts are equipped with comprehensive and relevant information. This capability of the RAG framework, with its depth and precision, finds application across a broad spectrum of cybersecurity challenges. Consider some use cases where its potential becomes particularly evident. Phishing Email Sentiment Analysis: Phishing remains a predominant vector for cybersecurity breaches. Leveraging RAG's capabilities, analysts can not only assess the potential malevolence of an email but also understand the context behind it.
By cross-referencing patterns from varied data sources in real time, the detection process evolves from mere content evaluation to a holistic understanding of attacker tactics, behaviors, and evolving profiles. This allows for the identification of nuanced phishing strategies that might otherwise go undetected. Insecure Code Analysis: Software vulnerabilities form a critical entry point for cyber adversaries. With RAG, the process of code evaluation undergoes a transformation. Instead of relying on manual code reviews alone, the system pulls insights from vector databases and historical code snippets marked as insecure, enabling detection of vulnerabilities based on historical patterns, emerging threat vectors, and even predictive threat modeling. This ensures that even the most obfuscated or embedded vulnerabilities are identified, and corrective measures can be promptly implemented. Vulnerability and Upskill Advisory: In the fast-paced world of cybersecurity, staying updated is paramount. Through RAG's capabilities, analysts are not only made aware of real-time vulnerabilities but are also guided on the skills and tools needed to combat them. By dynamically sourcing data from vulnerability advisories, news on advanced persistent threats, and defensive tactics, RAG ensures that analysts are not only reactive to threats but are also proactively upskilled, thereby bolstering their defense mechanisms. Information Security Policies for Compliance Teams: Compliance remains at the heart of many organizational cybersecurity strategies. However, with ever-shifting regulatory landscapes, staying compliant becomes a moving target. RAG's ability to source real-time data ensures that compliance teams always have access to the latest policy changes, guidelines, and best practices. This not only facilitates adherence to current standards but also anticipates future shifts, assists with audits, and ensures that organizations remain ahead of the compliance curve. Fusing a RAG architecture with platforms like Slack amplifies its practical utility. Slack, known for its real-time communication prowess, evolves in this context into more than just a messaging platform. Cybersecurity analysts can pose intricate queries within Slack and, almost instantaneously, receive comprehensive feedback powered by the interplay of RAG and Gen AI. This integration effectively transforms Slack into an AI-augmented, chatbot-like assistant for cybersecurity professionals, always ready to provide informed insights on demand, making it an indispensable ally in the ever-evolving cyber battlefield. Navigating the vast landscape of cybersecurity, analysts often encounter unfamiliar terminologies and techniques; they require tools that not only detect threats or inform them of threats, such as CISA (U.S. Cybersecurity and Infrastructure Security Agency) advisories, but also interpret and communicate them effectively. Consider a junior cybersecurity analyst named Alex, who comes across the term "Kerberoasting" while reviewing a network log. Unfamiliar with its intricacies, Alex turns to Slack to pose a query: "chat, explain what Kerberoasting is, using CISA." Almost instantaneously, Slack, powered by the interplay of RAG and Gen AI, provides a detailed response, cross-referencing a recent cyber advisory on the technique. It explains how attackers can exploit the Kerberos Ticket Granting Service to decipher service account passwords, potentially compromising a network.
In this dynamic realm of cybersecurity, the blend of RAG and generative AI represents more than just a technological leap; it embodies a paradigm shift, promising a future where human expertise and AI-driven precision join forces. As cyber threats continue their relentless advance, this synergy ensures that defenders are equipped with an arsenal that is not just reactive but also profoundly insightful. No longer should analysts be submerged in a deluge of data without direction. Instead, they should be empowered to discern, act, and preempt with unparalleled clarity and confidence. By harmoniously intertwining human discernment with AI capabilities, we can chart a path towards a future where cybersecurity is not just about defense but about achieving a strategic advantage, paving the way for a safer, more informed, and more secure digital horizon.
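To make the retrieval-augmented flow described above concrete, the following is a minimal sketch: embed the analyst's query, retrieve the most similar advisory snippets from a small in-memory vector store by cosine similarity, and assemble a grounded prompt for a generative model. The embed() and llm_generate() functions are placeholders, since the abstract names no specific embedding model, vector database, or LLM.

```python
# Minimal sketch of the RAG flow described above: embed the analyst's query,
# retrieve the most similar advisory snippets by cosine similarity, and build
# a grounded prompt for a generative model. embed() and llm_generate() are
# placeholders; swap in a real embedding model and LLM in practice.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding (deterministic per text within a run).
    Replace with a real sentence-embedding model; retrieval here is structural only."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def llm_generate(prompt: str) -> str:
    """Placeholder generation call: replace with a call to an actual LLM."""
    return "[LLM answer grounded in retrieved context]\n" + prompt[:200] + "..."

# Tiny in-memory "vector database" of advisory snippets
documents = [
    "CISA advisory: Kerberoasting abuses the Kerberos Ticket Granting Service ...",
    "MITRE ATT&CK T1566: phishing via malicious attachments and links ...",
    "HAMMERTOSS malware uses social media channels for command and control ...",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list:
    """Return the k snippets most similar to the query (cosine similarity of unit vectors)."""
    q = embed(query)
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "Explain Kerberoasting, using CISA guidance"
context = "\n".join(retrieve(query))
print(llm_generate("Context:\n" + context + "\n\nQuestion: " + query))
```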

Keywords: cybersecurity, gen AI, retrieval augmented generation, cybersecurity defense strategies

Procedia PDF Downloads 46