Search results for: action based method
1121 Roads and Agriculture: Impacts of Connectivity in Peru
Authors: Julio Aguirre, Yohnny Campana, Elmer Guerrero, Daniel De La Torre Ugarte
Abstract:
A well-developed transportation network is a necessary condition for a country to derive full benefits from good trade and macroeconomic policies. Road infrastructure plays a key role in the economic development of rural areas of developing countries, where agriculture is the main economic activity. The ability to move agricultural production from the place of production to the market, and then to the place of consumption, greatly influences the economic value of farming activities and of the resources involved in the production process, i.e., labor and land. Consequently, investment in transportation networks contributes to enhancing or overcoming the natural advantages or disadvantages that topography and location have imposed on the agricultural sector. This is of particular importance when dealing with countries, like Peru, with great topographic diversity. The objective of this research is to estimate the impacts of road infrastructure on the performance of the agricultural sector. Specific variables of interest are changes in travel time, shifts from production for self-consumption to production for the market, changes in farmers' income, and impacts on the diversification of the agricultural sector. In the study, a cross-section model with instrumental variables is the central methodological instrument. The data are obtained from agricultural and transport geo-referenced databases, and the instrumental variable specification utilized is based on the Kruskal algorithm. The results show that the expansion of road connectivity reduced farmers' travel time by an average of 3.1 hours, and the proportion of output sold in the market increased by up to 40 percentage points. The increase in connectivity also produced an unexpected increase in the districts' index of diversification of agricultural production. The results are robust to the inclusion of year and region fixed effects, and to controls for geography (i.e., slope and altitude), population variables, and mining activity. Other results are also very eloquent. For example, a clear positive impact can be seen in access to local markets, but this does not necessarily correlate with an increase in the production of the sector. This can be explained by the fact that agricultural development requires not only the provision of roads but also complementary infrastructure and investments intended to provide the necessary conditions so that producers can offer quality products (improved management practices, timely maintenance of irrigation infrastructure, transparent management of water rights, among other factors). Therefore, complementary public goods are needed to enhance the effects of roads on the welfare of the population, beyond enabling them to increase their access to markets.
Keywords: agriculture development, market access, road connectivity, regional development
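The central estimator in this abstract is a cross-sectional instrumental-variables regression. The snippet below is a minimal two-stage least squares (2SLS) illustration in Python, not the authors' actual specification: the variable names (kruskal_iv, travel_time, market_share) and the simulated data are hypothetical stand-ins for the geo-referenced Peruvian data and the Kruskal-algorithm instrument described above.

```python
import numpy as np

# Hypothetical data: the real study uses geo-referenced district data; here we
# simulate an instrument (a Kruskal-style least-cost network measure), an
# endogenous regressor (travel time) and an outcome (share of output sold).
rng = np.random.default_rng(0)
n = 500
kruskal_iv = rng.normal(size=n)                 # instrument (assumed name)
confound = rng.normal(size=n)                   # unobserved factor, e.g. local productivity
travel_time = 2.0 - 1.0 * kruskal_iv + 0.5 * confound + rng.normal(size=n)
market_share = 40.0 - 8.0 * travel_time + 5.0 * confound + rng.normal(size=n)

# Two-stage least squares by hand.
Z = np.column_stack([np.ones(n), kruskal_iv])   # instruments (plus constant)
X = np.column_stack([np.ones(n), travel_time])  # endogenous regressor (plus constant)

# Stage 1: project the endogenous regressor on the instruments.
first_stage = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]

# Stage 2: regress the outcome on the fitted values.
beta_iv = np.linalg.lstsq(first_stage, market_share, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, market_share, rcond=None)[0]

print("OLS effect of travel time:", round(beta_ols[1], 2))  # biased by the confounder
print("IV  effect of travel time:", round(beta_iv[1], 2))   # close to the true -8.0
```

The contrast between the two printed coefficients is the point of the exercise: ordinary least squares absorbs the unobserved confounder, while the instrumented estimate recovers the causal effect of travel time, which is the logic behind the study's Kruskal-based instrument.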
Procedia PDF Downloads 205
1120 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including clear lining and shape of the membrane that clearly showed the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for generation of virtual-cell mechanical models.
Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
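The normalization step described above (rescaling every slice of a z-stack so its mean pixel intensity is 0.5) can be sketched as follows. This is an illustrative NumPy version under assumed conventions (a simple rescale-shift-clip strategy and a synthetic 20-slice stack), not the authors' actual pipeline.

```python
import numpy as np

def normalize_zstack(zstack: np.ndarray, target_mean: float = 0.5) -> np.ndarray:
    """Rescale each slice of a (slices, height, width) stack to a target mean intensity.

    Minimal sketch: intensities are scaled to [0, 1] over the whole stack, then
    each slice is shifted so its mean equals `target_mean` and clipped into range.
    """
    stack = zstack.astype(np.float32)
    stack = (stack - stack.min()) / (stack.max() - stack.min() + 1e-8)
    normalized = []
    for image in stack:
        shifted = image + (target_mean - image.mean())
        normalized.append(np.clip(shifted, 0.0, 1.0))
    return np.stack(normalized)

# Synthetic 20-slice "membrane" stack standing in for a real confocal z-stack.
membrane = np.random.default_rng(1).integers(0, 4096, size=(20, 256, 256))
print(normalize_zstack(membrane).mean(axis=(1, 2))[:3])  # slice means land close to 0.5
```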
Procedia PDF Downloads 205
1119 Nano-Immunoassay for Diagnosis of Active Schistosomal Infection
Authors: Manal M. Kame, Hanan G. El-Baz, Zeinab A. Demerdash, Engy M. Abd El-Moneem, Mohamed A. Hendawy, Ibrahim R. Bayoumi
Abstract:
There is a constant need to improve the performance of current diagnostic assays for schistosomiasis as well as to develop innovative testing strategies to meet new testing challenges. This study aims at increasing the diagnostic efficiency of monoclonal antibody (MAb)-based antigen detection assays through gold nanoparticles conjugated with specific anti-Schistosoma mansoni monoclonal antibodies. In this study, several hybridoma cell lines secreting MAbs against adult worm tegumental Schistosoma antigen (AWTA) were produced at the Immunology Department of Theodor Bilharz Research Institute and preserved in liquid nitrogen. One MAb (6D/6F) was chosen for this study due to its high reactivity to schistosome antigens with the highest optical density (OD) values. Gold nanoparticles (AuNPs) were functionalized and conjugated with MAb (6D/6F). The study was conducted on serum samples of 116 subjects: 71 patients with S. mansoni eggs in their stool samples (gp 1), 25 with other parasites (gp 2), and 20 negative healthy controls (gp 3). Patients in gp 1 were further subdivided according to the egg count in their stool samples into light infection {≤ 50 eggs per gram (epg), n = 17}, moderate {51-100 epg, n = 33}, and severe infection {> 100 epg, n = 21}. Sandwich ELISA was performed using AuNPs-MAb for detection of circulating schistosomal antigen (CSA) levels in serum samples of all groups, and the results were compared with those obtained using the MAb/sandwich ELISA system. Results: The gold-MAb/ELISA system reached a lower detection limit of 10 ng/ml compared to 85 ng/ml using MAb/ELISA, and the optimal concentrations of AuNPs-MAb were found to be 12-fold less than those of the MAb/ELISA system for detection of CSA. The sensitivity and specificity of sandwich ELISA for detection of CSA levels using AuNPs-MAb were 100% and 97.8%, respectively, compared to 87.3% and 93.38%, respectively, using the MAb/ELISA system. CSA was detected in 9 out of 71 S. mansoni-infected patients using the AuNPs-MAb/ELISA system but not by the MAb/ELISA system. All nine of these patients were found to have an egg count below 50 epg of feces (patients with light infections). ROC curve analyses revealed that sandwich ELISA using gold-MAb was an excellent diagnostic tool that could differentiate Schistosoma patients from healthy controls; on the other hand, they revealed that sandwich ELISA using MAb was not accurate enough, as it could not recognize nine out of 71 patients with light infections. Conclusion: Our data demonstrated that loading gold nanoparticles with MAb (6D/6F) increases the sensitivity and specificity of sandwich ELISA for detection of CSA; thus, active (early) and light infections can be easily detected. Moreover, this binding will decrease the amount of MAb consumed in the assay and lower the cost. The significant positive correlation detected between ova count (intensity of infection) and OD reading in sandwich ELISA using gold-MAb enables its use to assess the severity of infection and to follow up patients after treatment for monitoring of cure.
Keywords: schistosomiasis, nanoparticles, gold, monoclonal antibodies, ELISA
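The sensitivity and specificity figures quoted above follow directly from confusion-matrix counts. The snippet below is a generic illustration of that arithmetic; the true/false positive and negative counts are reconstructed from the reported group sizes (71 egg-positive patients, 45 non-infected controls) and percentages, and should be read as assumptions rather than the authors' raw data.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Reconstructed counts (assumed): 71 egg-positive patients; 45 controls
# (25 with other parasites + 20 healthy). MAb/ELISA missed 9 light infections.
sens_mab, spec_mab = sensitivity_specificity(tp=62, fn=9, tn=42, fp=3)
sens_au, spec_au = sensitivity_specificity(tp=71, fn=0, tn=44, fp=1)

print(f"MAb/ELISA   sensitivity {sens_mab:.1%}, specificity {spec_mab:.1%}")  # ~87.3%, ~93.3%
print(f"AuNPs-MAb   sensitivity {sens_au:.1%}, specificity {spec_au:.1%}")    # 100%, ~97.8%
```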
Procedia PDF Downloads 371
1118 Loss of the Skin Barrier after Dermal Application of the Low Molecular Methyl Siloxanes: Volatile Methyl Siloxanes, VMS Silicones
Authors: D. Glamowska, K. Szymkowska, K. Mojsiewicz-Pieńkowska, K. Cal, Z. Jankowski
Abstract:
Introduction: The integrity of the outermost layer of skin (stratum corneum) is vital to the penetration of various compounds, including toxic substances. The barrier function of skin depends on its structure. The barrier function of the stratum corneum is provided by patterned lipid lamellae (bilayers). However, many substances, including the low molecular methyl siloxanes (volatile methyl siloxanes, VMS), can alter the skin barrier by damaging the stratum corneum structure. VMS belong to the silicones. They are widely used in the pharmaceutical as well as the cosmetic industry. Silicones fulfill the role of ingredient or excipient in medicinal products and of excipient in personal care products. Due to the significant human exposure to this group of compounds, an important aspect is the toxicology of the compounds and the safety assessment of products. Silicones are generally considered non-toxic substances, but there are some data on their negative effects on living organisms through inhalation or oral application. However, the transdermal route has not been described in the literature as a possible alternative route of penetration. The aim of the study was to verify the possibility of penetration of the stratum corneum and further permeation into the deeper layers of the skin (epidermis and dermis) as well as into the acceptor fluid by VMS. Methods: The research methodology was developed based on the OECD and WHO guidelines. In the ex-vivo study, fluorescence microscopy and ATR FT-IR spectroscopy were used. Franz-type diffusion cells were used for application of the VMS on samples of human skin (A = 0.65 cm) for 24 h. The stratum corneum at the application site was tape-stripped. After separation of the epidermis, the relevant dyes (fluorescein, sulforhodamine B, rhodamine B hexyl ester) were applied and observations were carried out under the microscope. To confirm the penetration and permeation of the cyclic or linear VMS, and thus the presence of silicone in the individual layers of the skin, ATR FT-IR spectra of the samples after application of silicone and of H2O (control sample) were recorded. The research included comparison of the intensity of bands at positions characteristic for silicones (1263 cm-1, 1052 cm-1 and 800 cm-1). Results and Conclusions: The results show that cyclic and linear VMS are able to overcome the barrier of the skin. Their damaging influence on the corneocytes of the stratum corneum was observed. This phenomenon was due to distinct disturbances in the lipid structure of the stratum corneum. The presence of cyclic and linear VMS was identified in the stratum corneum, epidermis, as well as in the dermis by both fluorescence microscopy and ATR FT-IR spectroscopy. This confirms that the cyclic and linear VMS can penetrate the stratum corneum and permeate through the human skin layers. Apart from this, they cause changes in the structure of the skin. The results point to possible absorption of the VMS with linear and cyclic structures into the blood and lymphatic vessels.
Keywords: low molecular methyl siloxanes, volatile methyl siloxanes, linear and cyclic siloxanes, skin penetration, skin permeation
Procedia PDF Downloads 344
1117 Optimizing the Pair Carbon Xerogels-Electrolyte for High Performance Supercapacitors
Authors: Boriana Karamanova, Svetlana Veleva, Luybomir Soserov, Ana Arenillas, Francesco Lufrano, Antonia Stoyanova
Abstract:
Supercapacitors have received a lot of research attention and are promising energy storage devices due to their high power and long cycle life. In order to develop an advanced device with significant charge-storage capacity from cheap carbon materials, efforts must focus not only on improving synthesis by controlling the morphology and pore size but also on improving the electrode-electrolyte compatibility of the resulting systems. The present study examines the relationship between the surface chemistry of two activated carbon xerogels, the electrolyte type, and the electrochemical properties of supercapacitors. Activated carbon xerogels were prepared by varying the initial pH of the resorcinol-formaldehyde aqueous solution. The materials produced are physicochemically characterized by DTA/TGA, porous characterization, and SEM analysis. The carbon xerogel based electrodes were prepared by spreading over a glass plate a slurry containing the carbon gel, graphite, and polyvinylidene difluoride (PVDF) binder. The layer formed was dried consecutively at different temperatures and then detached with water. Afterwards, the layer was dried again to improve its mechanical stability. The developed electrode materials and the Aquivion® E87-05S membrane (Solvay Specialty Polymers), soaked in Na2SO4 as a polymer electrolyte, were used to assemble the solid-state supercapacitor. Symmetric supercapacitor cells composed of the same electrodes and 1 M KOH electrolyte were also assembled and tested for comparison. The supercapacitor performances are verified by different electrochemical methods - cyclic voltammetry, galvanostatic charge/discharge measurements, electrochemical impedance spectroscopy, and long-term durability tests in neutral and alkaline electrolytes. Specific capacitances, energy and power density, energy efficiencies, and durability were compared for the studied supercapacitors. Ex-situ physicochemical analyses of the synthesized materials have also been performed, which provide information about chemical and structural changes in the electrode morphology during the charge/discharge durability tests. These are discussed on the basis of the electrode-electrolyte interaction. The obtained correlations could be of significance for the design of sustainable solid-state supercapacitors with high power and energy density. Acknowledgement: This research is funded by the Ministry of Education and Science of Bulgaria under the National Program "European Scientific Networks" (Agreement D01-286/07.10.2020, D01-78/30.03.2021), which the authors gratefully acknowledge.
Keywords: carbon xerogel, electrochemical tests, neutral and alkaline electrolytes, supercapacitors
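One of the figures of merit listed above, the specific capacitance obtained from galvanostatic charge/discharge, follows from the standard relation C = I·Δt / (m·ΔV), with the stored energy E = ½·C·ΔV². The snippet below is a generic worked example with made-up current, time, mass, and voltage values, not data from the xerogel cells described in the abstract.

```python
def specific_capacitance(current_a: float, discharge_time_s: float,
                         mass_g: float, voltage_window_v: float) -> float:
    """Gravimetric capacitance (F/g) from a galvanostatic discharge: C = I*dt / (m*dV)."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Illustrative numbers only (hypothetical cell, 2 mg of active material).
c_spec = specific_capacitance(current_a=0.5e-3, discharge_time_s=120.0,
                              mass_g=0.002, voltage_window_v=1.0)
energy_wh_per_kg = 0.5 * c_spec * 1.0**2 / 3600 * 1000  # E = 1/2 C V^2, per kg of electrode
print(f"{c_spec:.0f} F/g, {energy_wh_per_kg:.1f} Wh/kg")  # 30 F/g, ~4.2 Wh/kg for these inputs
```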
Procedia PDF Downloads 136
1116 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This was possible because of the advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows a sine curve; it has three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, the ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe; there, it is critically examined. Once it takes a final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once they are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective: the dynamics of spreading ideas change once they are stored in the occipital lobe. The human brain is incapable of evaluating further once it accepts ideas as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed in changing the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. This will need a coordinated effort of intellectuals, writers, and the media.
Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam
Procedia PDF Downloads 48
1115 Comparing the Knee Kinetics and Kinematics during Non-Steady Movements in Recovered Anterior Cruciate Ligament Injured Badminton Players against an Uninjured Cohort: Case-Control Study
Authors: Anuj Pathare, Aleksandra Birn-Jeffery
Abstract:
Background: The anterior cruciate ligament (ACL) helps stabilize the knee joint by minimizing anterior tibial translation. ACL injury is common in racquet sports and often occurs due to sudden acceleration, deceleration, or changes of direction. In badminton, this mechanism most commonly occurs during landing after an overhead stroke. Knee biomechanics during dynamic movements such as walking, running, and stair negotiation do not return to normal for more than a year after an ACL reconstruction. This change in biomechanics may lead to re-injury whilst performing non-steady movements during sports, where these injuries are most prevalent. Aims: To compare whether the knee kinetics and kinematics of ACL-injury-recovered athletes return to the same level as those of an uninjured cohort during standard movements used for clinical assessment and during badminton shots. Objectives: The objectives of the study were to determine: knee valgus during the single leg squat, vertical drop jump, net shot, and drop shot; the degree of internal or external rotation during the single leg squat, vertical drop jump, net shot, and drop shot; and maximum knee flexion during the single leg squat, vertical drop jump, and net shot. Methods: This case-control study included 14 participants: three ACL-injury-recovered athletes and 11 uninjured participants. The participants performed various functional tasks, including the vertical drop jump, single leg squat, forehand net shot, and forehand drop shot. The data were analysed using two-way ANOVA, and the reliability of the data was evaluated using the intraclass correlation coefficient. Results: The data showed a significant decrease in the range of knee rotation in ACL-injured participants compared to the uninjured cohort (F₇,₅₅₆ = 2.37; p = 0.021). There was also a decrease in the maximum knee flexion angles and an increase in knee valgus angles in ACL-injured participants, although these were not statistically significant. Conclusion: There was a significant decrease in the knee rotation angles in the ACL-injured participants, which could be a potential cause of re-injury in these athletes in the future. Although the results for the decrease in maximum knee flexion angles and the increase in knee valgus angles were not significant, this may be due to the limited sample of ACL-injured participants; there is potential for these to be identified as variables of interest in the rehabilitation of ACL injuries. These changes in knee biomechanics could be vital in the rehabilitation of ACL-injured athletes in the future, and the inclusion of sport-based tasks (e.g., the net shot) along with standard protocol movements for ACL assessment would provide a better measure of the rehabilitation of the athlete.
Keywords: ACL, biomechanics, knee injury, racquet sport
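The group-by-task comparison described above can be sketched as a two-way ANOVA. The snippet below is an illustrative layout only: the long-format columns, simulated rotation angles, and effect sizes are assumptions, not the study's data or its exact statistical model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one knee-rotation value per participant, group and task.
rng = np.random.default_rng(2)
rows = []
for group, offset in [("ACL_recovered", -3.0), ("uninjured", 0.0)]:
    for task in ["single_leg_squat", "vertical_drop_jump", "net_shot", "drop_shot"]:
        for participant in range(10):
            rows.append({"group": group, "task": task,
                         "rotation_deg": 15.0 + offset + rng.normal(scale=4.0)})
frame = pd.DataFrame(rows)

# Two-way ANOVA with a group x task interaction.
model = smf.ols("rotation_deg ~ C(group) * C(task)", data=frame).fit()
print(anova_lm(model, typ=2))  # F and p values for group, task and their interaction
```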
Procedia PDF Downloads 174
1114 Comparison between the Quadratic and the Cubic Linked Interpolation on the Mindlin Plate Four-Node Quadrilateral Finite Elements
Authors: Dragan Ribarić
Abstract:
We employ the so-called problem-dependent linked interpolation concept to develop two cubic 4-node quadrilateral Mindlin plate finite elements with 12 external degrees of freedom. In the problem-independent linked interpolation, the interpolation functions are independent of any problem material parameters and the rotation fields are not expressed in terms of the nodal displacement parameters. On the contrary, in the problem-dependent linked interpolation, the interpolation functions depend on the material parameters and the rotation fields are expressed in terms of the nodal displacement parameters. Two cubic 4-node quadrilateral plate elements are presented, named Q4-U3 and Q4-U3R5. The first one is modelled with one displacement and two rotation degrees of freedom at each of the four element nodes, and the second element has five additional internal degrees of freedom, which give polynomial completeness of the cubic form and can be statically condensed within the element. Both elements are able to pass the constant-bending patch test exactly, as well as the non-zero constant-shear patch test on the oriented regular mesh geometry in the case of cylindrical bending. For any mesh shape, the elements have the correct rank, and only the three eigenvalues corresponding to the rigid body motions are zero. There are no additional spurious zero modes responsible for instability of the finite element models. In comparison with the problem-independent cubic linked interpolation implemented in Q9-U3, the nine-node plate element, significantly fewer degrees of freedom are employed in the model while retaining the interpolation conformity between adjacent elements. The presented elements are also compared to the existing problem-independent quadratic linked-interpolation element Q4-U2 and to the other known elements that also use the quadratic or the cubic linked interpolation, by testing them on several benchmark examples. Simple functional upgrading from the quadratic to the cubic linked interpolation, implemented in the Q4-U3 element, showed no significant improvement compared to the quadratic linked form of the Q4-U2 element. Only when the additional bubble terms are incorporated in the displacement and rotation function fields, which complete the full cubic linked interpolation form, is a qualitative improvement achieved in the Q4-U3R5 element. Nevertheless, the locking problem exists for both presented elements, as in all pure displacement elements applied to very thin plates modelled by coarse meshes. But good and even slightly better performance can be noticed for the Q4-U3R5 element when compared with elements from the literature, if the model meshes are moderately dense and the plate thickness is not extremely thin. In some cases, it is comparable to or even better than the Q9-U3 element, which has as many as 12 more external degrees of freedom. A significant improvement can be noticed in particular when modeling very skewed plates and models with singularities in the stress fields, as well as circular plates with distorted meshes.
Keywords: Mindlin plate theory, problem-independent linked interpolation, problem-dependent interpolation, quadrilateral displacement-based plate finite elements
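A standard way to verify the rank statement above (exactly three zero eigenvalues of the element stiffness matrix, one per rigid-body motion, and no spurious zero-energy modes) is to count near-zero eigenvalues numerically. The sketch below is generic: it assumes a 12x12 element stiffness matrix for a 4-node element with 3 DOF per node has been assembled elsewhere, and uses a random rank-deficient stand-in purely so the snippet runs.

```python
import numpy as np

def count_zero_energy_modes(K: np.ndarray, tol: float = 1e-9) -> int:
    """Count near-zero eigenvalues of a symmetric element stiffness matrix."""
    eigvals = np.linalg.eigvalsh(K)
    return int(np.sum(np.abs(eigvals) < tol * np.max(np.abs(eigvals))))

# Stand-in stiffness matrix with rank deficiency 3 (12 DOF, 3 rigid-body modes),
# built from 9 random "strain modes"; a real K comes from the element integration.
rng = np.random.default_rng(3)
B = rng.normal(size=(9, 12))
K = B.T @ B

print(f"zero-energy modes: {count_zero_energy_modes(K)}")  # 3 -> correct rank, no spurious mechanisms
```

A count larger than three would indicate a spurious mechanism of the kind the abstract says these elements avoid.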
Procedia PDF Downloads 312
1113 Exploring the Benefits of Hiring Individuals with Disabilities in the Workplace
Authors: Rosilyn Sanders
Abstract:
This qualitative study examined the impact of hiring people with intellectual disabilities (ID). The research questions were: What defines a disability? What accommodations are needed to ensure the success of a person with a disability? As a leader, what benefits do people with intellectual disabilities bring to the organization? What are the benefits of hiring people with intellectual disabilities in retail organizations? Moreover, how might people with intellectual disabilities contribute to the organizational culture of retail organizations? A narrative strength approach was used as a theoretical framework to guide the discussion and uncover the benefits of hiring individuals with intellectual disabilities in various retail organizations. Using qualitative interviews, the following themes emerged: diversity and inclusion, accommodations, organizational culture, motivation, and customer service. These findings put to rest some negative stereotypes and perceptions of persons with ID as being unemployable or unable to perform tasks when employed, showing instead that persons with ID can work efficiently when given the necessary work accommodations and support in an enabling organizational culture. All participants were recruited and selected through various forms of electronic communication via social media, email invitations, and phone; this was conducted through the methodology of snowball sampling, with the following demographics: age, ethnicity, gender, number of years in retail, number of years in management, and number of direct reports. The sample population was employed in several retail organizations throughout Arkansas and Texas. The small sample size for qualitative research in this study helped the researcher develop, build, and maintain close relationships that encouraged participants to be forthcoming and honest with information (Clow & James, 2014). Participants were screened to ensure they met the criteria of the researcher's study and were over 18 years of age. Participants were asked if they recruit, interview, hire, and supervise individuals with intellectual disabilities. Individuals were given consent forms via email to indicate their interest in participating in this study. Due to COVID-19, all interviews were conducted via teleconferencing (Zoom or Microsoft Teams), lasted approximately 1 hour, and were transcribed, coded for themes, and grouped based on similar responses. Further, the participants were not privy to the interview questions beforehand, and demographic questions were asked at the end, including questions concerning age, education level, and job status. Each participant was assigned a random number using an app called 'The Random Number Generator' to ensure that all personal or identifying information of participants was removed. Regarding data storage, all documentation was stored on a password-protected external drive, inclusive of consent forms, recordings, transcripts, and researcher notes.
Keywords: diversity, positive psychology, organizational development, leadership
Procedia PDF Downloads 67
1112 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire
Authors: Vinay A. Sharma, Shiva Prasad H. C.
Abstract:
The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is not especially complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards the solution for a problem is the primary objective in the initial stages. The optimization of the solutions can come later, and hence, the resources deployed towards attaining the solution are higher than what they would have been in the optimized versions. A 'logic' that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on the project face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, know the consequences and causes of possible failure better, and thus integrate the adequate tolerances wherever required. Furthermore, as the team grows in strength, acquires prodigious knowledge, and begins its efficient transfer, the individuals in charge of the project, along with the managers, focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions are noted from the analysis of the responses, and a metric for measuring logic is developed. A graph is plotted with the quantifiable logic on the Y-axis and the dedicated resources for the solutions to various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic will be attained but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher but the resources deployed are comparatively lower. Hence, the difference between consecutive plotted 'resources' reduces, and as a result, the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin), as with each resource investment, ideally, the difference keeps on decreasing and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities ideally make sure that the investment is being made on a proportionally high logic for a larger problem; that is, ideally the slope of the graph increases with the plotting of each point.
Keywords: decision-making, leadership, logic, strategic management
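The shape of the graph described above (cumulative dedicated resources on the X-axis, quantified logic on the Y-axis, with the slope increasing as each successive decision buys more logic for fewer resources) can be illustrated with a small plotting sketch. The functional forms below are illustrative assumptions chosen only to reproduce the qualitative behaviour the abstract describes, not the survey-derived metric itself.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data only: resource increments shrink over successive decisions
# while the logic gained per decision grows, so cumulative logic vs. cumulative
# resources traces an upward-curving path starting at the origin.
steps = np.arange(1, 21)
resource_increments = 10.0 / steps          # later solutions consume fewer resources
logic_increments = 2.0 * steps              # later solutions attain more logic

resources = np.concatenate([[0.0], np.cumsum(resource_increments)])
logic = np.concatenate([[0.0], np.cumsum(logic_increments)])

plt.plot(resources, logic, marker="o")
plt.xlabel("Cumulative dedicated resources (also a proxy for time)")
plt.ylabel("Quantified logic attained")
plt.title("Ideal logic versus resource outflow (illustrative)")
plt.tight_layout()
plt.savefig("logic_vs_resources.png")
```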
Procedia PDF Downloads 108
1111 Collaborative Environmental Management: A Case Study Research of Stakeholders' Collaboration in the Nigerian Oil-Producing Region
Authors: Favour Makuochukwu Orji, Yingkui Zhao
Abstract:
A myriad of environmental issues face the Nigerian industrial region, resulting from oil and gas production, mining, manufacturing, and domestic wastes. Amidst these, much effort has been directed by stakeholders in the Nigerian oil-producing regions because of the impacts of the region on the wider Nigerian economy. Research to date has suggested that collaborative environmental management could be an effective approach to managing environmental issues, but little attention has been given to the roles and practices of stakeholders in effecting a collaborative environmental management framework for the Nigerian oil-producing region. This paper produces a framework to expand and deepen knowledge relating to stakeholders' collaborative roles in managing environmental issues in the Nigerian oil-producing region. The knowledge is derived from analysis of stakeholders' practices – studied through multiple case studies using document analysis. Selected documents of key stakeholders – Nigerian government agencies, multi-national oil companies, and host communities – were analyzed. Open and selective coding was employed manually during document analysis of data collected from the offices and websites of the stakeholders. The findings showed that the stakeholders have a range of roles, practices, interests, drivers, and barriers regarding their collaborative roles in managing environmental issues. While they have interests in efficient resource use, compliance with standards, sharing of responsibilities, generation of new solutions, and shared objectives, there is evidence of major barriers, which include resource allocation, disjointed policy and regulation, ineffective monitoring, diverse socio-economic interests, lack of stakeholders' commitment, and limited knowledge sharing. However, host communities hold deep concerns over the collaborative roles of stakeholders with respect to economic interests, particularly where government agencies and multi-national oil companies are involved. With these barriers and concerns, genuine stakeholder collaboration is found to be limited, and as a result, optimal environmental management practices and policies have not been successfully implemented in the Nigerian oil-producing region. A framework is produced that describes how practices that characterize collaborative environmental management might be employed to satisfy the stakeholders' interests. The framework recommends critical factors, based on the findings, which may guide collaborative environmental management in the oil-producing regions. The recommendations are designed to re-define the practices of stakeholders in managing environmental issues in the oil-producing regions, not as something wholly new, but as an approach essential for implementing a sustainable environmental policy. This research outcome may clarify areas for future research as well as contribute to industry guidance in the area of collaborative environmental management.
Keywords: collaborative environmental management framework, case studies, document analysis, multinational oil companies, Nigerian oil producing regions, Nigerian government agencies, stakeholders analysis
Procedia PDF Downloads 174
1110 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)
Authors: Vinay Kumar Vanjakula, Frank Adam
Abstract:
The generation of electricity through wind power is one of the leading renewable energy generation methods. Due to the abundant higher wind speeds far away from shore, the construction of offshore wind turbines began in the last decades. However, the installation of offshore foundation-based (monopile) wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines has been expanded on the basis of oil and gas industry experience. For such a floating system, stabilization in harsh conditions is a challenging task. For that, a robust heavy-weight gravity anchor is needed. Transportation of such an anchor requires a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow the anchor to float while being towed and to be filled with water when lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, wave or current flow breaking, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may allow the local seabed sediment to be carried off, resulting in scour (erosion). These are a threat to the structure's stability. In recent decades, rapid developments in research and knowledge of scouring on fixed structures (bridges and monopiles) in rivers and oceans have been achieved, but very limited research work exists on scouring around a bluff-shaped gravity anchor. The objective of this study involves the application of different numerical models to simulate the anchor towing under waves and in calm water conditions. Anchor lowering involves the investigation of anchor movements at certain water depths under waves/currents. The motions of anchor drift, heave, and pitch are of special focus. The further study involves anchor scour, where the anchor is installed in the seabed; the flow of the underwater current around the anchor induces vortices, mainly at the front and corners, that cause soil erosion. The study of scouring on a submerged gravity anchor is an interesting research question since the flow not only passes around the anchor but also over the structure, forming different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and other similar software.
Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour
Procedia PDF Downloads 169
1109 Mapping Intertidal Changes Using Polarimetry and Interferometry Techniques
Authors: Khalid Omari, Rene Chenier, Enrique Blondel, Ryan Ahola
Abstract:
Northern Canadian coasts have vulnerable and very dynamic intertidal zones, with very high tides occurring in several areas. The impact of climate change presents challenges not only for maintaining this biodiversity but also for navigation safety adaptation, due to the high sediment mobility in these coastal areas. Thus, frequent mapping of shorelines and intertidal changes is of high importance. To help in quantifying the changes in these fragile ecosystems, remote sensing provides practical monitoring tools at local and regional scales. Traditional methods based on high-resolution optical sensors are often used to map intertidal areas by benefiting from the spectral response contrast of intertidal classes in the visible, near- and mid-infrared bands. Tidal areas are highly reflective in visible bands, mainly because of the presence of fine sand deposits. However, getting cloud-free optical data that coincide with low tides in intertidal zones in northern regions is very difficult. Alternatively, the all-weather capability and daylight independence of microwave remote sensing using synthetic aperture radar (SAR) can offer valuable geophysical parameters with high-frequency revisits over intertidal zones. Multi-polarization SAR parameters have been used successfully in mapping intertidal zones using incoherent target decomposition. Moreover, the crustal displacements caused by ocean tide loading may reach several centimeters and can be detected and quantified using differential interferometric synthetic aperture radar (DInSAR). Soil moisture change has a significant impact on both the coherence and the backscatter. For instance, increases in the backscatter intensity associated with low coherence are an indicator of abrupt surface changes. In this research, we present preliminary results obtained from our investigation of the potential of fully polarimetric Radarsat-2 data for mapping an intertidal zone located at Tasiujaq on the south-west shore of Ungava Bay, Quebec. Using the repeat-pass cycle of Radarsat-2, multiple seasonal fine quad (FQ14W) images were acquired over the site between 2016 and 2018. Only 8 images corresponding to low-tide conditions were selected and used to build an interferometric stack of data. The observed displacements along the line of sight generated using HH and VV polarization are compared with the changes noticed using the Freeman-Durden polarimetric decomposition and the Touzi degree of polarization extrema. Results show the consistency of both approaches in their ability to monitor the changes in intertidal zones.
Keywords: SAR, degree of polarization, DInSAR, Freeman-Durden, polarimetry, Radarsat-2
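One quantity this approach relies on is interferometric coherence, whose drop (together with a rise in backscatter intensity) flags abrupt surface change. The snippet below is a generic windowed coherence estimator for two co-registered complex SAR images; the array names, the synthetic data, and the 5x5 boxcar window are illustrative assumptions, not the Radarsat-2 processing chain used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1: np.ndarray, slc2: np.ndarray, window: int = 5) -> np.ndarray:
    """Windowed interferometric coherence |<s1 s2*>| / sqrt(<|s1|^2><|s2|^2>)."""
    def boxcar(x):
        return uniform_filter(x, size=window)
    cross_re = boxcar(np.real(slc1 * np.conj(slc2)))
    cross_im = boxcar(np.imag(slc1 * np.conj(slc2)))
    power1 = boxcar(np.abs(slc1) ** 2)
    power2 = boxcar(np.abs(slc2) ** 2)
    return np.sqrt(cross_re**2 + cross_im**2) / np.sqrt(power1 * power2 + 1e-12)

# Synthetic example: two correlated complex images stand in for co-registered SLC acquisitions.
rng = np.random.default_rng(4)
master = rng.normal(size=(200, 200)) + 1j * rng.normal(size=(200, 200))
slave = 0.9 * master + 0.45 * (rng.normal(size=(200, 200)) + 1j * rng.normal(size=(200, 200)))
print(f"mean coherence: {coherence(master, slave).mean():.2f}")  # high here; surface change lowers it
```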
Procedia PDF Downloads 137
1108 Development of International Entry-Level Nursing Competencies to Address the Continuum of Substance Use
Authors: Cheyenne Johnson, Samantha Robinson, Christina Chant, Ann M. Mitchell, Carol Price, Carmel Clancy, Adam Searby, Deborah S. Finnell
Abstract:
Introduction: Substance use along the continuum from at-risk use to a substance use disorder (SUD) contributes substantially to the burden of disease and related harms worldwide. There is a growing body of literature that highlights the lack of substance use related content in nursing curricula. Furthermore, there is a lack of consensus on the key competencies necessary for entry-level nurses. Globally, there is a lack of established nursing competencies related to prevention, health promotion, harm reduction, and treatment of at-risk substance use and SUDs. At a critical time in public health, this gap in nursing curricula contributes to a lack of preparation among entry-level nurses to support people along the continuum of substance use. Thus, in practice, early opportunities for screening, support, and intervention may be missed. To address this gap, an international committee was convened to develop international entry-level nursing competencies specifying the knowledge, skills, and abilities that all nurses should possess in order to address the continuum of substance use. Methodology: An international steering committee, including representation from Canada, the United States, the United Kingdom, and Australia, was established to lead this work over a one-year period. The steering committee conducted a scoping review, undertaken to examine nursing competency frameworks and to inform a competency structure that would guide this work. The next steps were to outline key competency areas and establish leaders for working groups to develop the competencies. In addition, a larger international committee was gathered to contribute to the competency working groups, review the collective work, and concur on the final document. Findings: A comprehensive framework was developed with competencies covering a wide spectrum of substance use across the lifespan and in the context of prevention, health promotion, harm reduction, and treatment, including special populations. The development of this competency-based framework meets an identified need to provide guidance for universities, health authorities, policy makers, nursing regulators, and other organizations that provide and support nursing education focused on care for patients and families with at-risk substance use and SUDs. Conclusion: Utilizing these global competencies as expected outcomes of educational and skill-building curricula for entry-level nurses holds great promise for incorporating evidence-informed training in the care and management of people across the continuum of substance use.
Keywords: addiction nursing, addiction nursing curriculum, competencies, substance use
Procedia PDF Downloads 175
1107 Erasmus+ Program in Vocational Education: Effects of European International Mobility in Portuguese Vocational Schools
Authors: José Carlos Bronze, Carlinda Leite, Angélica Monteiro
Abstract:
The creation of the Erasmus Program in 1987 represented a milestone in promoting and funding international mobility in higher education in Europe. Its effects were so significant that they influenced the creation of the European Higher Education Area through the Bologna Process and ensured the program's continuation and maintenance. Over the last decades, the escalating numbers of participants and funds have prompted significant scientific studies on the program's effects on higher education. More recently, in 2014, the program was renamed "Erasmus+" when it expanded into other fields of education, namely Vocational Education and Training (VET). Despite now having run in this field of education for a decade (2014-2024), its effects on VET remain less studied and less known, while the higher education field keeps attracting researchers' attention. Given this gap, it becomes relevant to study the effects of E+ on VET, particularly in the priority domains of the Program: "Inclusion and Diversity," "Participation in Democratic Life, Common Values and Civic Engagement," "Environment and Fight Against Climate Change," and "Digital Transformation." The latter has recently been emphasized due to the COVID-19 pandemic, which forced the so-called emergency remote teaching, leading schools to quickly transform and adapt to a new reality regardless of the preparedness levels of teachers and students. Together with the remaining E+ priorities, it relates directly to an emancipatory perspective of education sustained by soft skills such as critical thinking, intercultural awareness, autonomy, active citizenship, teamwork, and problem-solving, among others. Based on this situation, it is relevant to know the effects of E+ on the VET field, namely by questioning how international mobility instigates digitalization processes and supports emancipatory aims therein. Because VET is an education field that connects more directly to hard skills and to an instrumental approach oriented to the labor market's needs, a study was conducted to determine the effects of international mobility on developing digital literacy and soft skills in the VET field. In methodological terms, the study used semi-structured interviews with teaching and non-teaching staff from three VET schools that are strongly active in the E+ Program. The interviewees were three headmasters, four mobility project managers, and eight teachers experienced in international mobility. The data were subjected to qualitative content analysis using the NVivo 14 application. The results show that E+ international mobility promotes and facilitates the use of digital technologies as a pedagogical resource at VET schools and enhances and generates students' soft skills. In conclusion, E+ mobility in the VET field supports the adoption of the program's priorities by increasing teachers' knowledge and use of digital resources and by amplifying and generating participants' soft skills.
Keywords: Erasmus international mobility, digital literacy, soft skills, vocational education and training
Procedia PDF Downloads 32
1106 Developing a Roadmap by Integrating of Environmental Indicators with the Nitrogen Footprint in an Agriculture Region, Hualien, Taiwan
Authors: Ming-Chien Su, Yi-Zih Chen, Nien-Hsin Kao, Hideaki Shibata
Abstract:
The major component of the atmosphere is nitrogen, yet atmospheric nitrogen has limited availability for biological use. Human activities have produced different types of nitrogen-related compounds, such as nitrogen oxides from combustion, nitrogen fertilizers from farming, and nitrogen compounds from waste and wastewater, all of which have impacted the environment. Many studies have indicated that the N-footprint is dominated by food, followed by the housing, transportation, and goods and services sectors. To address the impacts arising from agricultural land, nitrogen cycle research is one of the key solutions. The study site is located in Hualien County, a major rice and food production area of Taiwan. Importantly, environmentally friendly farming has been promoted there for years, and an environmental indicator system has been established by previous authors based on the concepts of the resilience capacity index (RCI) and the environmental performance index (EPI). Nitrogen management is required for food production, as excess N causes environmental pollution. Therefore, it is very important to develop a roadmap of the nitrogen footprint and to integrate it with environmental indicators. The key focus of the study thus addresses (1) understanding the environmental impact caused by the nitrogen cycle of food products and (2) uncovering the trend of the N-footprint of agricultural products in Hualien, Taiwan. The N-footprint model was applied, which included both crops and energy consumption in the area. All data were adapted from government statistics databases and cross-checked for consistency before modeling. The activities involved in agricultural production were evaluated and analyzed for nitrogen loss to the environment, as well as for the impacts on humans and the environment. The results showed that rice makes up the largest share of agricultural production by weight, at 80%. The dominant meat production is pork (52%) and poultry (40%); fish and seafood were at similar levels to pork production. The average per capita food consumption in Taiwan is 2643.38 kcal capita−1 d−1, primarily from rice (430.58 kcal), meats (184.93 kcal), and wheat (ca. 356.44 kcal). The average protein uptake is 87.34 g capita−1 d−1, of which 51% is mainly from meat, milk, and eggs. The preliminary results showed that the nitrogen footprint of food production is 34 kg N per capita per year, congruent with the results of Shibata et al. (2014) for Japan. These results provide a better understanding of the nitrogen demand and loss in the environment, and the roadmap can furthermore support the establishment of nitrogen policy and strategy. Additionally, the results serve to develop a roadmap of the nitrogen cycle of an environmentally friendly farming area, thus illuminating the nitrogen demand and loss of such areas.
Keywords: agricultural production, energy consumption, environmental indicator, nitrogen footprint
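The food component of a per-capita N-footprint is commonly computed from protein intake (protein is roughly 16% nitrogen by mass) plus the "virtual" nitrogen lost during food production, scaled by food-category virtual-N factors. The sketch below is an illustrative calculation in that spirit, starting from the protein intake reported above; the virtual-N factors and the animal/plant split used here are placeholders, not values from this study or from Shibata et al. (2014).

```python
# Illustrative N-footprint arithmetic (assumed factors, not the study's model).
PROTEIN_G_PER_DAY = 87.34          # average protein uptake reported in the abstract
N_FRACTION_OF_PROTEIN = 0.16       # protein is ~16% nitrogen by mass
DAYS = 365

# Nitrogen actually consumed in food over a year (kg N per capita per year).
consumed_n = PROTEIN_G_PER_DAY * N_FRACTION_OF_PROTEIN * DAYS / 1000
print(f"consumed N: {consumed_n:.1f} kg N/capita/yr")   # ~5.1

# Hypothetical virtual-N factors (kg N lost in production per kg N consumed) by category.
category_share = {"animal": 0.51, "plant": 0.49}        # 51% of protein from meat, milk, eggs
virtual_n_factor = {"animal": 8.0, "plant": 3.0}        # placeholder factors for illustration

footprint = sum(consumed_n * category_share[c] * (1 + virtual_n_factor[c])
                for c in category_share)
print(f"total food N-footprint: {footprint:.0f} kg N/capita/yr")
```

With these placeholder factors the total happens to land near the 34 kg N per capita per year reported above, but that agreement is a consequence of the chosen illustrative factors, not a reproduction of the study's model.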
Procedia PDF Downloads 302
1105 Attachment Theory and Quality of Life: Grief Education and Training
Authors: Jane E. Hill
Abstract:
Quality of life is an important component for many. With that in mind, everyone will experience some type of loss within his or her lifetime. A person can experience loss due to a breakup, separation, divorce, estrangement, or death. An individual may experience loss of a job, loss of capacity, or loss caused by human-caused or natural disasters. An individual's response to such a loss is unique to them, and not everyone will seek services to assist them with their grief due to loss. Counseling can promote positive outcomes for clients who are grieving by addressing the client's personal loss and helping the client process their grief. However, a lack of understanding on the part of counselors of how people grieve may result in negative client outcomes such as poor health, psychological distress, or an increased risk of depression. Education and training in grief counseling can improve counselors' problem recognition and skills in treatment planning. The purpose of this study was to examine whether master's degree counseling students in programs accredited by the Council for Accreditation of Counseling and Related Educational Programs (CACREP) view themselves as having been adequately trained in grief theories and skills. Many people deal with grief issues that prevent them from having joy or purpose in their lives and that leave them unable to engage in positive opportunities or relationships. This study examined CACREP-accredited master's counseling students' self-reported competency, training, and education in providing grief counseling. The implications for positive social change arising from the research may be to incorporate and promote education and training in grief theories and skills in a majority of counseling programs and to provide motivation to incorporate professional standards for grief training and practice in the mental health counseling field. The theoretical foundation used was modern grief theory based on John Bowlby's work on Attachment Theory. The overall research question was how competent master's-level counselors view themselves to be regarding the education or training they received in grief theories or counseling skills in their CACREP-accredited studies. The author used a non-experimental, one-shot survey comparative quantitative research design. Cicchetti's Grief Counseling Competency Scale (GCCS) was administered to CACREP master's-level counseling students enrolled in their practicum or internship experience, which resulted in 153 participants. Using a MANCOVA, significant relationships were found between coursework taken and (a) perceived assessment skills (p = .029), (b) perceived treatment skills (p = .025), and (c) perceived conceptual skills and knowledge (p = .003). Results of this study provided insight for CACREP master's-level counseling programs to explore and discuss curriculum coursework inclusion of education and training in grief theories and skills.
Keywords: counselor education and training, grief education and training, grief and loss, quality of life
Procedia PDF Downloads 191
1104 Energy Strategies for Long-Term Development in Kenya
Authors: Joseph Ndegwa
Abstract:
Changes are required if energy systems are to foster long-term growth. The main problems are increasing access to inexpensive, dependable, and sufficient energy supply while addressing environmental implications at all levels. Policies can help to promote sustainable development by providing adequate and inexpensive energy sources to underserved regions, such as liquid and gaseous fuels for cooking and electricity for household and commercial use; by promoting energy efficiency; by increasing the utilization of new renewables; and by spreading and implementing additional innovative energy technologies. Markets can achieve many of these goals with the correct policies, pricing, and regulations. However, if markets do not work or fail to preserve key public benefits, tailored government policies, programs, and regulations can achieve policy goals. The main strategies for promoting sustainable energy systems are simple. However, they require broader recognition of the difficulties we confront, as well as a firmer commitment to specific measures. The measures include making markets operate better by minimizing pricing distortions, boosting competition, and removing obstacles to energy efficiency; complementing the reform of the energy industry with policies that promote sustainable energy; increasing investments in renewable energy; increasing the rate of technical innovation at each level of the energy innovation chain; fostering technical leadership in underdeveloped nations by transferring technology and enhancing institutional and human capabilities; and promoting more international collaboration. Governments, international organizations, multilateral financial institutions, and civil society—including local communities, business and industry, non-governmental organizations (NGOs), and consumers—all have critical enabling roles to play in the problem of sustainable energy. Partnerships based on integrated and cooperative approaches and drawing on real-world experience will be necessary. Setting the required framework conditions and ensuring that public institutions collaborate effectively and efficiently with the rest of society are common themes across all industries and geographical areas in order to achieve sustainable development. Energy is a powerful tool for sustainable development. However, significant policy adjustments within the larger enabling framework will be necessary to refocus its influence in order to achieve that aim. Many of the options currently accessible will be lost, or the price of their ultimate realization (where viable) will grow significantly, if such changes don't take place during the next several decades and aren't started soon enough. Failure to make such changes would seriously impair the capacity of future generations to satisfy their needs.
Keywords: sustainable development, reliable, price, policy
Procedia PDF Downloads 65
1103 Biocultural Biographies and Molecular Memories: A Study of Neuroepigenetics and How Trauma Gets under the Skull
Authors: Elsher Lawson-Boyd
Abstract:
In the wake of the Human Genome Project, the life sciences have undergone some fascinating changes. In particular, conventional beliefs relating to gene expression are being challenged by advances in postgenomic sciences, especially by the field of epigenetics. Epigenetics is the modification of gene expression without changes in the DNA sequence. In other words, epigenetics dictates that gene expression, the process by which the instructions in DNA are converted into products like proteins, is not solely controlled by DNA itself. Unlike gene-centric theories of heredity that characterized much of the 20th century (in which genes were considered to have almost god-like power to create life), gene expression in epigenetics depends on environmental ‘signals’ or ‘exposures’, a point that radically deviates from gene-centric thinking. Science and Technology Studies (STS) scholars have shown that epigenetic research is having vast implications for the ways in which chronic, non-communicable diseases are conceptualized, treated, and governed. However, to the author’s knowledge, there have not yet been any in-depth sociological engagements with neuroepigenetics that examine how the field is affecting mental health and trauma discourse. In this paper, the author discusses preliminary findings from a doctoral ethnographic study on neuroepigenetics, trauma, and embodiment. Specifically, this study investigates the kinds of causal relations neuroepigenetic researchers are making between experiences of trauma and the development of mental illnesses like complex post-traumatic stress disorder (PTSD), both throughout a human’s lifetime and across generations. Using qualitative interviews and nonparticipant observation, the author focuses on two public-facing research centers based in Melbourne: the Florey Institute of Neuroscience and Mental Health (FNMH) and the Murdoch Children’s Research Institute (MCRI). Preliminary findings indicate that a great deal of ambiguity characterizes this nascent field, particularly when animal-model experiments are employed and the results are translated into human frameworks. Nevertheless, researchers at the FNMH and MCRI strongly suggest that adverse and traumatic life events have a significant effect on gene expression, especially when experienced during early development. Furthermore, they predict that neuroepigenetic research will have substantial implications for the ways in which mental illnesses like complex PTSD are diagnosed and treated. These preliminary findings shed light on why medical and health sociologists have good reason to be chiming in, engaging with, and de-black-boxing ideas emerging from postgenomic sciences, as these may indeed have significant effects on vulnerable populations not only in Australia but also in developing countries of the Global South. Keywords: genetics, mental illness, neuroepigenetics, trauma
Procedia PDF Downloads 125
1102 Passivization: as Syntactic Argument Decreasing Parameter in Boro
Authors: Ganga Brahma
Abstract:
Boro employs verbs combined with morphemes that lead the verbs to adjust to their arguments and hence affect the structure of the whole sentence. This paper examines a few such syntactic parameters that are usually treated as argument-decreasing parameters in linguistic work. Passivizing certain transitive clauses, which are usually construed from verbs occurring with particular morphemes, and representation in middle constructions are two such strategies that lead to a decrease in the number of syntactic arguments in a sentence. This paper focuses on these strategies and attempts to describe how the parameters work in languages, concentrating on a particular Tibeto-Burman language, Boro. Boro is a Tibeto-Burman language widely spoken in parts of the north-eastern regions of India; it is agglutinative in forming words as well as clauses. There is a morpheme ‘za’, meaning ‘to happen, become’, in Boro whose appearance with verb roots denotes an idea of the subject being passivized. Passivization is usually understood as a reversed representation of the active sentence form in terms of argument placement (although this is not entirely accurate, as passives and actives have distinct features of their own, independent of one another). This work concentrates on the semantics of passivization along with its syntactic reality. The verb khɑo, meaning ‘to steal’, offers a sense of passivization with the appearance of the morpheme zɑ, which means ‘to happen, become’ (e.g., Zunu-ɑ lama-ɑo phɯisɑ khɑo-zɑ-bɑi; Junu-NOM road-LOC money steal-PASS-PRES: ‘Junu got her money stolen on the road’). The focus here is on the argument placed in subject position (i.e., Zunu) and the event that has taken place. The semantics of such a construction asks for an agent, because without an agent the event could not have taken place. However, syntactic elements fill the slot of the relegated or temporarily deleted agent, which is, in fact, the actual subject cum agent in the active representation. Due to the event marker ‘zɑ’, this presentation is able to reduce by one participant a situation that in actuality is made up of three participants; hence, a ditransitive construction here reduces to a monotransitive structure. Unlike passivization, the middle construction does not allow relegation of the agent: it deletes the agent permanently. It likewise focuses on the foregrounded subject, highlighting the changed state of the subject, which happens to be the underlying object of the corresponding transitive structure (with an agent). This work intends to describe how these two parameters, which differ in their semantic realization, can meet at the syntactic level to create a linguistic parameter that decreases participants from structures that originally have more than one participant. Keywords: argument-decrease, middle-construction, passivization, transitivity-intransitivity
Procedia PDF Downloads 237
1101 Audit and Assurance Program for AI-Based Technologies
Authors: Beatrice Arthur
Abstract:
The rapid development of artificial intelligence (AI) has transformed various industries, enabling faster and more accurate decision-making processes. However, with these advancements come increased risks, including data privacy issues, systemic biases, and challenges related to transparency and accountability. As AI technologies become more integrated into business processes, there is a growing need for comprehensive auditing and assurance frameworks to manage these risks and ensure ethical use. This paper provides a literature review on AI auditing and assurance programs, highlighting the importance of adapting traditional audit methodologies to the complexities of AI-driven systems. Objective: The objective of this review is to explore current AI audit practices and their role in mitigating risks, ensuring accountability, and fostering trust in AI systems. The study aims to provide a structured framework for developing audit programs tailored to AI technologies while also investigating how AI impacts governance, risk management, and regulatory compliance in various sectors. Methodology: This research synthesizes findings from academic publications and industry reports from 2014 to 2024, focusing on the intersection of AI technologies and IT assurance practices. The study employs a qualitative review of existing audit methodologies and frameworks, particularly the COBIT 2019 framework, to understand how audit processes can be aligned with AI governance and compliance standards. The review also considers real-time auditing as an emerging necessity for influencing AI system design during early development stages. Outcomes: Preliminary findings indicate that while AI auditing is still in its infancy, it is rapidly gaining traction as both a risk management strategy and a potential driver of business innovation. Auditors are increasingly being called upon to develop controls that address the ethical and operational risks posed by AI systems. The study highlights the need for continuous monitoring and adaptable audit techniques to handle the dynamic nature of AI technologies. Future Directions: Future research will explore the development of AI-specific audit tools and real-time auditing capabilities that can keep pace with evolving technologies. There is also a need for cross-industry collaboration to establish universal standards for AI auditing, particularly in high-risk sectors like healthcare and finance. Further work will involve engaging with industry practitioners and policymakers to refine the proposed governance and audit frameworks. Funding/Support Acknowledgements: This research is supported by the Information Systems Assurance Management Program at Concordia University of Edmonton.Keywords: AI auditing, assurance, risk management, governance, COBIT 2019, transparency, accountability, machine learning, compliance
Procedia PDF Downloads 24
1100 The Role of University in High-Level Human Capital Cultivation in China’s West Greater Bay Area
Authors: Rochelle Yun Ge
Abstract:
Universities have played an active role in China’s national development. There has been increasing research interest in the development of higher education cooperation, talent cultivation and attraction, and innovation in regional development. The Triple Helix model, which indicates that regional innovation and development can be engendered by collaboration among university, industry, and government, is often adopted as the research framework. Research using the Triple Helix model emphasizes the active and often leading role of the university in the knowledge-based economy. Within this framework, universities are conceptualized as key institutions of knowledge production, transmission, and transference, potentially making critical contributions to regional development. Recent research is almost uniformly consistent in identifying high-level research labor (i.e., doctoral researchers, post-doctoral researchers, and academics) as important actors in the innovation ecosystem, given the cross-geographical human capital and resources they bring. In 2019, the development of the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) was officially launched as an important strategy by the Chinese government to boost the regional development of the Pearl River Delta and to support the realization of the “One Belt One Road” strategy. Human capital formation is at the center of this plan. One of the strategic goals of GBA development is to evolve into an international educational hub and innovation center with high-level talent. A number of policies have been issued to attract and cultivate human resources in different GBA cities, in particular high-level R&D (research and development) talent such as doctoral and post-doctoral researchers. To better understand the development of a high-level talent hub in the GBA, more empirical attention should be given to the approaches to talent cultivation and attraction in the region. What remains to be explored are the ways to better attract, train, support, and retain these talents in a cross-systems context. This paper aims to investigate the role of the university in human capital development under China’s national agenda of GBA integration, through the lens of universities and actors. Two flagship comprehensive universities are selected as cases, and 30 interviews with university officials, research leaders, post-doctoral researchers, and doctoral candidates are used for analysis. In particular, we look at the following questions: In what ways have universities aligned their strategies and practices with the Chinese government’s GBA development strategy? What strategies and practices have been developed by universities for the cultivation and attraction of high-level research labor? And what impacts have the universities made on regional development? The main arguments of this research highlight the specific ways in which universities in smaller sub-regions can collaborate in high-level human capital formation and the role policy can play in facilitating such collaborations. Keywords: university, human capital, regional development, triple-helix model
Procedia PDF Downloads 113
1099 Teen Insights into Drugs, Alcohol, and Nicotine: A National Survey of Adolescent Attitudes toward Addictive Substances
Authors: Linda Richter
Abstract:
Background and Significance: The influence of parents on their children’s attitudes and behaviors is immense, even as children grow out of what one might assume to be their most impressionable years and into teenagers. This study specifically examines the potential that parents have to prevent or reduce the risk of adolescent substance use, even in the face of considerable environmental influences to use nicotine, alcohol, or drugs. Methodology: The findings presented are based on a nationally representative survey of 1,014 teens aged 12-17 living in the United States. Data were collected using an online platform in early 2018. About half the sample was female (51%), 49% was aged 12-14, and 51% was aged 15-17. The margin of error was +/- 3.5%. Demographic data on the teens and their families were available through the survey platform. Survey items explored adolescent respondents’ exposure to addictive substances; the extent to which their sources of information about these substances are reliable or credible; friends’ and peers’ substance use; their own intentions to try substances in the future; and their relationship with their parents. Key Findings: Exposure to nicotine, alcohol, or other drugs and misinformation about these substances were associated with a greater likelihood that adolescents have friends who use drugs and that they have intentions to try substances in the future, which are known to directly predict actual teen substance use. In addition, teens who reported a positive relationship with their parents and having parents who are involved in their lives had a lower likelihood of having friends who use drugs and of having intentions to try substances in the future. This relationship appears to be mediated by parents’ ability to reduce the extent to which their children are exposed to substances in their environment and to misinformation about them. Indeed, the findings indicated that teens who reported a good relationship with their parents and those who reported higher levels of parental monitoring had significantly higher odds of reporting a lower number of risk factors than teens with a less positive relationship with parents or less monitoring. There also were significantly greater risk factors associated with substance use among older teens relative to younger teens. This shift appears to coincide directly with the tendency of parents to pull back in their monitoring and their involvement in their adolescent children’s lives. Conclusion: The survey findings underscore the importance of resisting the urge to completely pull back as teens age and demand more independence since that is exactly when the risks for teen substance use spike and young people need their parents and other trusted adults to be involved more than ever. Particularly through the cultivation of a healthy, positive, and open relationship, parents can help teens receive accurate and credible information about substance use and also monitor their whereabouts and exposure to addictive substances. These findings, which come directly from teens themselves, demonstrate the importance of continued parental engagement throughout children’s lives, regardless of their age and the disincentives to remaining involved and connected.Keywords: adolescent, parental monitoring, prevention, substance use
Procedia PDF Downloads 146
1098 The Display of Environmental Information to Promote Energy Saving Practices: Evidence from a Massive Behavioral Platform
Authors: T. Lazzarini, M. Imbiki, P. E. Sutter, G. Borragan
Abstract:
While several strategies, such as the development of more efficient appliances, the financing of insulation programs, or the rolling out of smart meters, represent promising tools to reduce future energy consumption, their implementation relies on people’s decisions and actions. Likewise, engaging with consumers to reshape their behavior has been shown to be another important way to reduce energy usage. For these reasons, integrating the human factor in the energy transition has become a major objective for researchers and policymakers. Digital education programs based on tangible and gamified user interfaces have become a new tool with the potential to reduce energy consumption. The B2020 program, developed by the firm “Économie d’Énergie SAS”, proposes a digital platform to encourage pro-environmental behavior change among employees and citizens. The platform integrates 160 eco-behaviors to help save energy and water and reduce waste and CO2 emissions. A total of 13,146 citizens have used the tool so far to declare the range of eco-behaviors they adopt in their daily lives. The present work seeks to build on this database to identify the potential impact of adopted energy-saving behaviors (n=62) on reducing energy use in buildings. To this end, behaviors were classified into three categories according to the nature of their implementation (Eco-habits, e.g., turning off the lights; Eco-actions, e.g., installing low-carbon technology such as LED light bulbs; and Home-Refurbishments, e.g., wall insulation or double-glazed energy-efficient windows). General linear models (GLM) disclosed a significantly higher frequency of Eco-habits when compared to the number of home refurbishments realized by the platform users. While this might be explained in part by the high financial costs associated with home renovation works, it also contrasts with the up to three times larger energy savings that can be accomplished by these means. Furthermore, multiple regression models failed to disclose the expected relationship between energy savings and the frequency of adopted eco-behaviors, suggesting that energy-related practices are not necessarily driven by the corresponding energy savings. Finally, our results also suggested that people adopting more Eco-habits and Eco-actions were more likely to engage in Home-Refurbishments. Altogether, these results fit well with a growing body of scientific research showing that energy-related practices do not necessarily maximize utility, as postulated by traditional economic models, and suggest that other variables might be triggering them. Promoting home refurbishments could benefit from the adoption of complementary energy-saving habits and actions. Keywords: energy-saving behavior, human performance, behavioral change, energy efficiency
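As an illustration of the GLM comparison described above, the following sketch fits a general linear model of the number of adopted behaviours on behaviour category; the file name and column names (user_id, category, n_adopted) are assumed for the example and are not taken from the B2020 platform.

```python
# A minimal sketch of a GLM-style comparison of adoption frequency by behaviour category.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per user x behaviour category,
# with n_adopted = number of behaviours in that category the user declared.
df = pd.read_csv("b2020_behaviours.csv")

# General linear model: count of adopted behaviours as a function of category
# (Eco-habits, Eco-actions, Home-Refurbishments).
fit = smf.ols("n_adopted ~ C(category)", data=df).fit()
print(fit.summary())  # category contrasts indicate which behaviours are declared most often
```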
Procedia PDF Downloads 200
1097 Viability Analysis of a Centralized Hydrogen Generation Plant for Use in Oil Refining Industry
Authors: C. Fúnez Guerra, B. Nieto Calderón, M. Jaén Caparrós, L. Reyes-Bozo, A. Godoy-Faúndez, E. Vyhmeister
Abstract:
The global energy system is experiencing a change of scenery. Unstable energy markets and an increasing focus on climate change and sustainable development are forcing businesses to pursue new solutions in order to ensure future economic growth. This has led to interest in using hydrogen as an energy carrier in transportation and industrial applications. As an energy carrier, hydrogen is accessible and holds a high gravimetric energy density. Abundant in hydrocarbons, hydrogen can play an important role in the shift towards low-emission fossil value chains. By combining hydrogen production by natural gas reforming with carbon capture and storage, the overall CO2 emissions are significantly reduced. In addition, the flexibility of hydrogen as an energy store makes it applicable as a stabilizer in the renewable energy mix. The recent development of hydrogen fuel cells is also raising expectations for a hydrogen-powered transportation sector. Hydrogen value chains exist to a large extent in industry today. Global hydrogen consumption was approximately 50 million tonnes (7.2 EJ) in 2013, with refineries, ammonia and methanol production, and metal processing as the main consumers. Natural gas reforming produced 48% of this hydrogen, but without carbon capture and storage (CCS). The total emissions from this production reached 500 million tonnes of CO2; hence, alternative production methods with lower emissions will be necessary in future value chains. Hydrogen from electrolysis has been used for a wide range of industrial chemical reactions for many years. Possibly the earliest use was for the production of ammonia-based fertilisers by Norsk Hydro, with a test reactor set up in Notodden, Norway, in 1927. This application also claims one of the world’s largest electrolyser installations, at Sable Chemicals in Zimbabwe. Its array of 28 electrolysers draws 80 MW, producing around 21,000 Nm3/h of hydrogen. These electrolysers can compete if cheap sources of electricity are available and natural gas for steam reforming is relatively expensive. Because electrolysis of water produces oxygen as a by-product, a system of Autothermal Reforming (ATR) utilizing this oxygen has been analyzed. Replacing the air separation unit with electrolysers supplies the required amount of oxygen to the ATR as well as additional hydrogen. The aim of this paper is to evaluate the technical and economic potential of large-scale production of hydrogen for the oil refining industry. Sensitivity analyses of parameters such as investment costs, plant operating hours, electricity price, and sale price of hydrogen and oxygen are performed. Keywords: autothermal reforming, electrolyser, hydrogen, natural gas, steam methane reforming
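To illustrate the kind of sensitivity analysis mentioned, the sketch below computes a levelized cost of hydrogen from electrolysis across electricity price and investment cost scenarios; all parameter values (specific consumption, load factor, lifetime, discount rate, scenario ranges) are illustrative assumptions rather than figures from the study.

```python
# A minimal sketch of a cost sensitivity analysis for electrolytic hydrogen.
def lcoh(elec_price_eur_per_mwh, capex_eur_per_kw,
         kwh_per_kg=52.0, load_factor=0.9, lifetime_yr=20,
         discount=0.08, opex_frac=0.02):
    """Levelized cost of hydrogen in EUR/kg for a 1 kW electrolyser slice (illustrative)."""
    annual_kg = load_factor * 8760 / kwh_per_kg              # kg H2 per kW per year
    crf = discount / (1 - (1 + discount) ** -lifetime_yr)    # capital recovery factor
    annual_capex = capex_eur_per_kw * crf
    annual_opex = capex_eur_per_kw * opex_frac
    annual_elec = load_factor * 8760 * elec_price_eur_per_mwh / 1000.0
    return (annual_capex + annual_opex + annual_elec) / annual_kg

# Sweep electricity price and electrolyser investment cost scenarios.
for price in (20, 40, 60):            # EUR/MWh
    for capex in (400, 800, 1200):    # EUR/kW
        print(f"elec {price} EUR/MWh, capex {capex} EUR/kW -> {lcoh(price, capex):.2f} EUR/kg H2")
```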
Procedia PDF Downloads 211
1096 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandra: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion with social development. Sustainability within a spatial-ecological model gives attention to urban environments in design review management so as to comply with the Earth system. Natural exchange patterns of ecosystems have consistent and periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effects Analysis (FMEA) for critical components. The plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed, and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The plume dispersion width and the concentration of chlorine gas in the downwind direction are determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The prediction of accident consequences is traced in risk contour concentration lines. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts distribution schemes from the perspective of pollutants, considering multiple factors of a multi-criteria analysis. The data extend input-output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analyses are conducted. The unique structure is balanced within “equilibrium patterns”, such as the biosphere, and collectively forms a composite index of many distributed feedback flows. These dynamic structures are related through their physical and chemical properties and enable a gradual and prolonged incremental pattern. While this spatial model structure argues from ecology, resource savings, static load design, financial, and other pragmatic reasons, the outcomes are not decisive from an artistic or architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach of systems ecology. Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
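As a concrete illustration of the Gaussian plume calculation referred to above, the following sketch evaluates the downwind concentration field for a continuous release; the release rate, wind speed, release height, and dispersion coefficients are assumed values for a neutral stability class over open country, not the study's inputs.

```python
# A minimal sketch of a Gaussian plume concentration field with ground reflection.
import numpy as np

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.0001, c=0.06, d=0.0015):
    """Concentration [g/m^3] for release rate Q [g/s], wind speed u [m/s],
    downwind/crosswind/vertical distances x, y, z [m], release height H [m].
    sigma_y, sigma_z use simple Briggs-type fits for a neutral stability class."""
    sigma_y = a * x / np.sqrt(1 + b * x)
    sigma_z = c * x / np.sqrt(1 + d * x)
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image source = ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Concentration grid 100 m to 5 km downwind, +/- 500 m crosswind, at breathing height.
x = np.linspace(100, 5000, 200)
y = np.linspace(-500, 500, 101)
X, Y = np.meshgrid(x, y)
C = gaussian_plume(Q=5000.0, u=3.0, x=X, y=Y, z=1.5, H=2.0)
print(C.max())  # peak value; the full grid can be exported for contour plotting (e.g. SURFER)
```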
Procedia PDF Downloads 80
1095 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates
Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon
Abstract:
Global Level-3 surface soil moisture (SSM) maps from the passive microwave Soil Moisture and Ocean Salinity (SMOS) satellite have been released. To further improve the Level-3 retrieval algorithm, evaluation of the accuracy and spatio-temporal variability of the SMOS Level 3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with an SSM product derived from observations of the Advanced Microwave Scanning Radiometer (AMSR-E), computed by implementing the Land Parameter Retrieval Model (LPRM) algorithm and referred to here as AMSRM, is presented. The comparison of both products (SMOSL3 and AMSRM) was made against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at global scale. The latter product was considered here a 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation: the correlation coefficient (R), the root-mean-square difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured well the spatio-temporal variability of the SM-DAS-2 SSM products in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In terms of correlation values, the SMOSL3 products were found to better capture the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate humid', etc.), while the best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.). When removing the seasonal cycles in the SSM time variations to compute anomaly values, better correlations with the SM-DAS-2 SSM anomalies were obtained with SMOSL3 than with AMSRM in most of the biomes, with the exception of desert regions. Finally, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS-2 products over regions with sparse vegetation for values of LAI < 1 (these regions represent almost 50% of the pixels considered in this global study). In regions where LAI > 1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 had almost consistent performance up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing values of LAI. Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS
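For reference, the sketch below computes the three evaluation criteria used in the study (R, RMSD, bias) for a single pixel's time series, together with a simplified seasonal-cycle removal for anomalies; the variable names, synthetic data, and moving-average window are illustrative assumptions, not the study's exact processing chain.

```python
# A minimal sketch of per-pixel evaluation metrics for satellite vs model soil moisture.
import numpy as np

def evaluate(sat_ssm, model_ssm):
    """Correlation, root-mean-square difference and bias of satellite vs model SSM."""
    sat, mod = np.asarray(sat_ssm, float), np.asarray(model_ssm, float)
    mask = ~np.isnan(sat) & ~np.isnan(mod)
    sat, mod = sat[mask], mod[mask]
    r = np.corrcoef(sat, mod)[0, 1]
    rmsd = np.sqrt(np.mean((sat - mod) ** 2))
    bias = np.mean(sat - mod)            # > 0: satellite wetter than the reference
    return r, rmsd, bias

def anomalies(series, window=35):
    """Remove the seasonal cycle with a moving-average climatology (simplified)."""
    s = np.asarray(series, dtype=float)
    clim = np.convolve(s, np.ones(window) / window, mode="same")
    return s - clim

# Example with synthetic series standing in for SMOSL3 and SM-DAS-2 data.
t = np.arange(365)
truth = 0.25 + 0.10 * np.sin(2 * np.pi * t / 365)             # reference SSM [m3/m3]
sat = truth + np.random.normal(0, 0.03, t.size) - 0.02        # noisy, dry-biased retrieval
print(evaluate(sat, truth))
print(evaluate(anomalies(sat), anomalies(truth)))
```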
Procedia PDF Downloads 357
1094 Boussinesq Model for Dam-Break Flow Analysis
Authors: Najibullah M, Soumendra Nath Kuiry
Abstract:
Dams and reservoirs are valued for their estimable contributions to irrigation, water supply, flood control, electricity generation, etc., which advance the prosperity and wealth of societies across the world. At the same time, a dam breach can cause a devastating flood that threatens human lives and property. Failures of large dams remain, fortunately, very seldom events. Nevertheless, a number of occurrences have been recorded in the world, corresponding on average to one to two failures worldwide every year. Some of these accidents have had catastrophic consequences, so it is crucial to predict dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study mainly deals with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the Shallow Water Equations (SWEs), the BEs take into account wave dispersion and a non-hydrostatic pressure distribution. To capture the dam-break oscillations accurately, a numerical scheme of at least fourth-order accuracy is needed to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model for dam-break flow analysis that is fourth-order accurate in both space and time, using a finite-volume / finite-difference scheme. The spatial discretization of the flux and dispersion terms is achieved through a combination of finite-volume and finite-difference approximations: the flux term is solved using a finite-volume discretization, whereas the bed source and dispersion terms are discretized using a centered finite-difference scheme. Time integration is achieved in two stages, namely a third-order Adams-Bashforth predictor stage and a fourth-order Adams-Moulton corrector stage. Implementation of the 1D Boussinesq model was done using Python 2.7.5. The performance of the developed model is evaluated by comparison with the volume-of-fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure of 1961 that took place in Pune, India. Moreover, this model can be used to predict wave overtopping more accurately than shallow water models for designing coastal protection structures. Keywords: Boussinesq equation, Coastal protection, Dam-break flow, One-dimensional model
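The two-stage time integration named above can be sketched as follows for a generic semi-discrete system du/dt = rhs(u); this is a minimal illustration of a third-order Adams-Bashforth predictor followed by a fourth-order Adams-Moulton corrector, not the study's actual solver, and the Boussinesq right-hand side would be supplied by the spatial discretization.

```python
# A minimal sketch of an Adams-Bashforth 3 / Adams-Moulton 4 predictor-corrector step.
import numpy as np

def abm_step(u, f_hist, rhs, dt):
    """Advance one step given u at t_n and f_hist = [f_{n-2}, f_{n-1}, f_n]."""
    f_nm2, f_nm1, f_n = f_hist
    # Predictor: third-order Adams-Bashforth
    u_pred = u + dt / 12.0 * (23.0 * f_n - 16.0 * f_nm1 + 5.0 * f_nm2)
    # Corrector: fourth-order Adams-Moulton, using the predicted right-hand side
    f_pred = rhs(u_pred)
    u_new = u + dt / 24.0 * (9.0 * f_pred + 19.0 * f_n - 5.0 * f_nm1 + f_nm2)
    return u_new, [f_nm1, f_n, rhs(u_new)]

# Tiny usage example on du/dt = -u, with the history started from the exact solution.
rhs = lambda u: -u
dt = 0.01
u = np.exp(-2 * dt)                                      # solution at t = 2*dt
f_hist = [rhs(np.exp(0.0)), rhs(np.exp(-dt)), rhs(np.exp(-2 * dt))]
for _ in range(100):
    u, f_hist = abm_step(u, f_hist, rhs, dt)
print(u, np.exp(-(2 + 100) * dt))                        # numerical vs exact solution
```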
Procedia PDF Downloads 232
1093 Reducing the Risk of Alcohol Relapse after Liver-Transplantation
Authors: Rebeca V. Tholen, Elaine Bundy
Abstract:
Background: Liver transplantation (LT) is considered the only curative treatment for end-stage liver disease (ESLD). The effects of alcoholism can cause irreversible liver damage, cirrhosis, and subsequent liver failure. Alcohol relapse after transplant occurs in 20-50% of patients and increases the risk for recurrent cirrhosis, organ rejection, and graft failure. Alcohol relapse after transplant has been identified as a problem among liver transplant recipients at a large urban academic transplant center in the United States. Transplantation will reverse the complications of ESLD, but it does not treat underlying alcoholism or reduce the risk of relapse after transplant. The purpose of this quality improvement project is to implement and evaluate the effectiveness of a High-Risk Alcoholism Relapse (HRAR) Scale to screen and identify patients at high risk for alcohol relapse after receiving an LT. Methods: The HRAR Scale is a predictive tool designed to determine the severity of alcoholism and the risk of relapse after transplant. The scale consists of three variables identified as having the highest predictive power for early relapse: daily number of drinks, history of previous inpatient treatment for alcoholism, and the number of years of heavy drinking. All adult liver transplant recipients at a large urban transplant center were screened with the HRAR Scale prior to hospital discharge. A zero-to-two ordinal score is assigned for each variable, and the total score ranges from zero to six. High-risk scores are between three and six. Results: Descriptive statistics revealed that 25 patients were newly transplanted and discharged from the hospital during an 8-week period. 40% of patients (n=10) were identified as being at high risk for relapse and 60% at low risk (n=15). The daily number of drinks was determined by alcohol content (1 drink = 15 g of ethanol) and the number of drinks per day. 60% of patients reported drinking 9-17 drinks per day, and 40% reported ≤ 9 drinks. 50% of high-risk patients reported drinking ≥ 25 years, 40% for 11-25 years, and 10% ≤ 11 years. For the number of inpatient treatments for alcoholism, 50% received inpatient treatment one time, 20% ≥ 1, and 30% reported never receiving inpatient treatment. Findings reveal the importance and value of a validated screening tool as a more efficient method than other screening methods alone. Integration of a structured clinical tool will help guide the drinking-history portion of the psychosocial assessment. Targeted interventions can be implemented for all high-risk patients. Conclusions: Our findings validate the effectiveness of utilizing the HRAR scale to screen and identify patients who are at high risk for alcohol relapse post-LT. Recommendations to help maintain post-transplant sobriety include starting a transplant support group within the organization for all high-risk patients. Keywords: alcoholism, liver transplant, quality improvement, substance abuse
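A minimal sketch of HRAR-style scoring, as described in the abstract, is given below; the cut-points are inferred from the reporting bins in the results section and should be checked against the published scale, so they are assumptions rather than the validated instrument.

```python
# A minimal sketch of HRAR-style scoring: three variables, each scored 0-2, total 0-6.
def hrar_score(drinks_per_day, years_heavy_drinking, prior_inpatient_treatments):
    score = 0
    # Daily number of drinks (1 drink ~ 15 g ethanol); cut-points inferred from the abstract
    if drinks_per_day > 17:
        score += 2
    elif drinks_per_day >= 9:
        score += 1
    # Years of heavy drinking
    if years_heavy_drinking > 25:
        score += 2
    elif years_heavy_drinking >= 11:
        score += 1
    # Previous inpatient treatments for alcoholism
    if prior_inpatient_treatments > 1:
        score += 2
    elif prior_inpatient_treatments == 1:
        score += 1
    risk = "high risk" if score >= 3 else "low risk"
    return score, risk

print(hrar_score(drinks_per_day=12, years_heavy_drinking=20, prior_inpatient_treatments=1))
# -> (3, 'high risk')
```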
Procedia PDF Downloads 116
1092 COVID Prevention and Working Environmental Risk Prevention and Business Continuity among SMEs in Selected Districts in Sri Lanka
Authors: Champika Amarasinghe
Abstract:
Introduction: The Covid-19 pandemic hit the Sri Lankan economy badly during the year 2021. More than 65% of the Sri Lankan workforce is engaged in small and medium scale businesses, which undoubtedly had to struggle for their survival and business continuity during the pandemic. Objective: To assess the association between adherence to the new norms during the Covid-19 pandemic and the maintenance of healthy working environmental conditions for business continuity. A cross-sectional study was carried out to assess the OSH status and the adequacy of Covid-19 preventive strategies among 200 SMEs in two selected districts in Sri Lanka. These two districts were selected considering the highest availability of SMEs. The sample size was calculated, and probability proportionate to size sampling was used to select the SMEs registered with the small and medium scale development authority. An interviewer-administered questionnaire was used to collect the data, and an OSH risk assessment was carried out by a team of experts to assess the OSH status in these industries. Results: According to the findings, more than 90% of the employees in these industries had moderate awareness of Covid-19 and of preventive strategies such as the importance of mask use, hand sanitizing practices, and distance maintenance, but only forty percent of them adhered to the implementation of these practices. Furthermore, only thirty-five percent of the employees and employers in these SMEs knew the reasons behind the new norms, which may explain the reluctance to implement these strategies and to adhere to the new norms in this sector. The OSH risk assessment findings revealed that the organization of the working environment for maintaining distance between employees was poor due to inadequate space in these entities. More than fifty-five percent of the SMEs had proper ventilation and lighting facilities. More than eighty-five percent of these SMEs had poor electrical safety measures. Furthermore, eighty-two percent of them had not maintained fire safety measures. Eighty-five percent of them were exposed to high noise levels and chemicals, for which they were not using any personal protective equipment, nor were any other engineering controls imposed. Floor conditions were poor, and records of occupational accidents and occupational diseases were not maintained. Conclusions: Based on the findings, proper awareness sessions were carried out by NIOSH. Six physical training sessions and continuous online trainings were carried out to overcome these issues, which made a drastic change in the working environments and resulted in one hundred percent implementation of the Covid-19 preventive strategies, which in turn improved worker participation in the businesses. Absenteeism was reduced, business opportunities improved, and the businesses continued without interruption during the third episode of Covid-19 in Sri Lanka. Keywords: working environment, Covid 19, occupational diseases, occupational accidents
Procedia PDF Downloads 88