Search results for: hybrid power systems
658 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
The recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that contemporary life describes define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of Additive Manufacturing, but also of IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. The totality of these technologies, from the point of view of application, describes a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of some provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model: the best-known fields of application are described, and the focus then turns to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory that is described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact has a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The result that is envisaged indicates a new vision of digital technologies, no longer understood only as the custodians of vast quantities of information, but also as a valid integrated tool in close relationship with the design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
657 Revealing the Nitrogen Reaction Pathway for the Catalytic Oxidative Denitrification of Fuels
Authors: Michael Huber, Maximilian J. Poller, Jens Tochtermann, Wolfgang Korth, Andreas Jess, Jakob Albert
Abstract:
Aside from desulfurisation, the denitrogenation of fuels is of great importance to minimize the environmental impact of transport emissions. The oxidative reaction pathway of organic nitrogen in the catalytic oxidative denitrogenation could be successfully elucidated. This is the first time such a pathway could be traced in detail in non-microbial systems. It was found that the organic nitrogen is first oxidized to nitrate, which is subsequently reduced to molecular nitrogen via nitrous oxide. Hereby, the organic substrate serves as a reducing agent. The discovery of this pathway is an important milestone for the further development of fuel denitrogenation technologies. The United Nations aims to counteract global warming with Net Zero Emissions (NZE) commitments; however, it is not yet foreseeable when crude oil-based fuels will become obsolete. In 2021, more than 50 million barrels per day (mb/d) were consumed for the transport sector alone. Above all, heteroatoms such as sulfur or nitrogen produce SO₂ and NOx during combustion in the engines, which are not only harmful to the climate but also to health. Therefore, in refineries, these heteroatoms are removed by hydrotreating to produce clean fuels. However, this catalytic reaction is inhibited by the basic, nitrogenous reactants (e.g., quinoline) as well as by NH₃. The lone pair of the nitrogen atom binds strongly to the active sites of the hydrotreating catalyst, which diminishes its activity. To maximize the desulfurization and denitrogenation effectiveness in comparison to just extraction and adsorption, selective oxidation is typically combined with either extraction or selective adsorption. The selective oxidation produces more polar compounds that can be removed from the non-polar oil in a separate step. The extraction step can also be carried out in parallel to the oxidation reaction, as a result of in situ separation of the oxidation products (ECODS; extractive catalytic oxidative desulfurization). In this process, H8PV5Mo7O40 (HPA-5) is employed as a homogeneous polyoxometalate (POM) catalyst in an aqueous phase, whereas the sulfur-containing fuel components are oxidized, after diffusion from the organic fuel phase into the aqueous catalyst phase, to form highly polar products such as H₂SO₄ and carboxylic acids, which are thereby extracted from the organic fuel phase and accumulate in the aqueous phase. In contrast to the inhibiting properties of the basic nitrogen compounds in hydrotreating, the oxidative desulfurization improves with simultaneous denitrogenation in this system (ECODN; extractive catalytic oxidative denitrogenation). The reaction pathway of ECODS has already been well studied. In contrast, the oxidation of nitrogen compounds in ECODN is not yet well understood and requires more detailed investigations.
Keywords: oxidative reaction pathway, denitrogenation of fuels, molecular catalysis, polyoxometalate
656 Reduction in Hospital-Acquired Infections after Intervention of Hand Hygiene and Personal Protective Equipment at the COVID Unit, Indus Hospital Karachi
Authors: Aisha Maroof
Abstract:
Introduction: Coronavirus Disease 2019 (COVID-19) is spreading rapidly around the world with devastating consequences for patients, health care workers and health systems. Severe 2019 novel coronavirus infectious disease (COVID-19) with pneumonia is associated with high rates of admission to the intensive care unit (ICU), and these patients are at high risk of acquiring hospital-acquired infections (HAIs) such as central line-associated bloodstream infection (CLABSI), catheter-associated urinary tract infection (CAUTI) and laboratory-confirmed bloodstream infection (LCBSI). The chances of infection transmission increase when healthcare workers' (HCWs) practice is inappropriate. Risks related to hand hygiene (HH) and personal protective equipment (PPE) as regards multidrug-resistant organism transmission: use of multiple gloving instead of HH and incorrect use of PPE can lead to a significant increase in device-related infections. As the virus reaches low- and middle-income countries, its effects could be even greater, because it will be difficult for them to react aggressively to the pandemic. HAIs are one of the biggest medical concerns, resulting in increased mortality rates. Objective: To assess the effect of an intervention on compliance with hand hygiene and PPE among HCWs to reduce the rate of HAIs in COVID-19 patients. Method: An interventional study was done between July and December 2020. CLABSI, CAUTI and LCBSI data were collected from the medical record and direct observation. A total of 50 nurses, 18 doctors and all patients with laboratory-confirmed severe COVID-19 admitted to the hospital were included in this research study. Respiratory tract specimens were obtained after the first 48 h of ICU admission. Practices were observed before and after the intervention. Education was provided based on WHO guidelines. Results: During the six months of study, July to December, the rates of CLABSI, CAUTI and LCBSI pre and post intervention were reported. The CLABSI rate decreased from 22.7 to 0, the CAUTI rate decreased from 1.6 to 0, and the LCBSI rate declined from 3.3 to 0 after implementation of the intervention. Conclusion: HAIs are an important cause of morbidity and mortality. Most of the device-related infections occur due to lack of correct use of PPE and hand hygiene compliance. Hand hygiene and PPE are the most important measures to protect patients; through education, correct use of PPE and hand hygiene compliance can be improved, which can reduce bacterial infections in COVID-19 patients.
Keywords: hospital-acquired infection, healthcare workers, hand hygiene, personal protective equipment
655 An Assessment of Digital Platforms, Student Online Learning, Teaching Pedagogies, Research and Training at Kenya College of Accounting University
Authors: Jasmine Renner, Alice Njuguna
Abstract:
The booming technological revolution is driving a change in the mode of delivery systems, especially for e-learning and distance learning in higher education. The report and findings of the study, an assessment of digital platforms, student online learning, teaching pedagogies, research and training at Kenya College of Accounting University (hereinafter 'KCA'), were undertaken as a joint collaboration project between the Carnegie African Diaspora Fellowship and input from the staff, students and faculty at KCA University. The participants in this assessment/research met for selected days during a six-week period during which one-on-one consultations, surveys, questionnaires, focus groups, training, and seminars were conducted to ascertain online learning and teaching, curriculum development, research and training at KCA. The project was organized into an eight-week project workflow with each week culminating in project activities designed to assess digital online teaching and learning at KCA. The project also included the training of distance learning instructors at KCA and the evaluation of KCA's distance platforms and programs. Additionally, through a curriculum audit and redesign, the project sought to enhance the curriculum development activities related to distance learning at KCA. The findings of this assessment/research represent the systematic, deliberate process of gathering, analyzing and using data collected from DL students, DL staff and lecturers and a librarian in charge of online learning resources and access at KCA. We engaged in one-on-one interviews and discussions with staff, students, and faculty and collated the findings to inform practices that are effective in the ongoing design and development of eLearning at KCA University. Overall findings of the project led to the following recommendations. First, there is a need to address infrastructural challenges that led to poor internet connectivity for online learning, training needs and content development for faculty and staff. Second, there is a need to manage cultural impediments within KCA, for example fears of vital change from one platform to another for effectiveness, and institutional goodwill as a vital promise of effective online learning. Third, at a practical and short-term level, based on the systematic findings of the research conducted, the following should be adopted at KCA University to promote the effective adoption of online learning: a) an eLearning-compatible faculty lab, b) revision of policy to include an eLearning strategy or strategic management, c) recognition of faculty and staff engaged in the process of training for the adoption and implementation of eLearning, and d) adequate website resources on eLearning. The report and findings represent a comprehensive approach to a systematic assessment of online teaching and learning, research and training at KCA.
Keywords: e-learning, digital platforms, student online learning, online teaching pedagogies
654 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management
Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li
Abstract:
Demand-Side Management (DSM) has the potential to reduce electricity costs and carbon emissions, which are associated with the electricity used in modern society. A home Energy Management System (EMS), commonly used by residential consumers in a downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, is a system of computer-aided functionalities serving as an energy audit for residential DSM. Implementing fault detection and classification for the domestic appliances monitored, controlled, and optimized is one of the most important steps to realize preventive maintenance, such as residential air conditioning and heating preventive maintenance in residential/industrial DSM. In this study, a cloud intelligence-based green EMS that comes with an Internet of Things (IoT) technology stack for residential DSM is developed. In the EMS, Arduino MEGA Ethernet communication-based smart sockets, which incorporate a Real Time Clock chip to keep track of current time as timestamps via Network Time Protocol, are designed and implemented for readings of load phenomena reflected in the voltage and current signals sensed. Also, a Network-Attached Storage providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods is configured as the data store for parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, MySQL relational database management system, and PHP programming language) serves as a data science analytics engine for a dynamic Web app/REpresentational State Transfer-ful web service of the residential DSM with globally advanced Internet of Artificial Intelligence (AI)/Computational Intelligence. An abstract computing machine, the Java Virtual Machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is well suited and configured for AI in this study. Having the ability to send real-time push notifications to IoT clients, the desktop computer implements Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and provide a mobile notification service for residential/industrial DSM. In this study, in order to realize edge intelligence, whereby edge devices avoiding network latency and the much-needed connectivity of Internet connections for the Internet of Services can support secure access to data stores and provide immediate analytical and real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets to embedded AI Arduino ones (called embedded AIduino). For the realization of edge analytics by the proposed embedded AIduino, an Arduino Ethernet shield WizNet W5100 having a micro SD card connector is used. The SD library is included for reading parsed data from and writing parsed data to an SD card. An Artificial Neural Network library, ArduinoANN, for the Arduino MEGA is imported and used for locally embedded AI implementation. The embedded AIduino in this study can be developed for further applications in manufacturing industry energy management and sustainable energy management, wherein, in sustainable energy management, rotating machinery diagnostics work to identify energy loss from gross misalignment and unbalance of rotating machines in power plants, as an example.
Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification
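To make the fault detection and classification step concrete, the following is a minimal, illustrative sketch of the kind of server-side analytics described above: a small feedforward neural network (here scikit-learn's MLPClassifier, standing in for the Java/R/Python analytics engine and the ArduinoANN library mentioned in the abstract) classifying appliance operating states from voltage and current features parsed from the smart-socket readings. The file name, feature columns, and class labels are assumptions for illustration only.

```python
# Illustrative sketch only: classify appliance load states from smart-socket
# voltage/current features. File name, column names and labels are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# Parsed sensor readings pulled from the NAS data store (layout assumed).
df = pd.read_csv("socket_readings.csv")   # columns: v_rms, i_rms, power, state
X = df[["v_rms", "i_rms", "power"]].values
y = df["state"].values                    # e.g. "normal", "overload", "fault"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Small feedforward network, comparable in size to what an embedded ANN could hold.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```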
653 Experimental Analysis of the Performance of a System for Freezing Fish Products Equipped with a Modulating Vapour Injection Scroll Compressor
Authors: Domenico Panno, Antonino D’amico, Hamed Jafargholi
Abstract:
This paper presents an experimental analysis of the performance of a system for freezing fish products equipped with a modulating vapour injection scroll compressor operating with R448A refrigerant. Freezing is a critical process for the preservation of seafood products, as it influences quality, food safety, and environmental sustainability. The use of a modulating scroll compressor with vapour injection, associated with the R448A refrigerant, is proposed as a solution to optimize the performance of the system, reducing energy consumption and mitigating the environmental impact. The vapour injection modulating scroll compressor represents an advanced technology that allows the compressor capacity to be adjusted based on the actual cooling needs of the system. Vapour injection allows the optimization of the refrigeration cycle, reducing the evaporation temperature and improving the overall efficiency of the system. The use of R448A refrigerant, with a low Global Warming Potential (GWP), is part of an environmental sustainability perspective, helping to reduce the climate impact of the system. The aim of this research was to evaluate the performance of the system through a series of experiments conducted on a pilot plant for the freezing of fish products. Several operational variables were monitored and recorded, including evaporation temperature, condensation temperature, energy consumption, and freezing time of seafood products. The results of the experimental analysis highlighted the benefits deriving from the use of the modulating vapour injection scroll compressor with the R448A refrigerant. In particular, a significant reduction in energy consumption was recorded compared to conventional systems. The modulating capacity of the compressor made it possible to adapt the cold production to variations in the thermal load, ensuring optimal operation of the system and reducing energy waste. Furthermore, the use of an electronic expansion valve provided greater precision in the control of the evaporation temperature, with minimal deviation from the desired set point. This helped ensure better quality of the final product, reducing the risk of damage due to temperature changes and ensuring uniform freezing of the fish products. The freezing time of seafood has been significantly reduced thanks to the configuration of the entire system, allowing for faster production and greater production capacity of the plant. In conclusion, the use of a modulating vapour injection scroll compressor operating with R448A has proven effective in improving the performance of a system for freezing fish products. This technology offers an optimal balance between energy efficiency, temperature control, and environmental sustainability, making it an advantageous choice for food industries.
Keywords: scroll compressor, vapor injection, refrigeration system, EER
652 Optimal Uses of Rainwater to Maintain Water Level in Gomti Nagar, Uttar Pradesh, India
Authors: Alok Saini, Rajkumar Ghosh
Abstract:
Water is nature's important resource for the survival of all living things, but freshwater scarcity exists in some parts of the world. This study predicts that the Gomti Nagar area (49.2 sq. km) will harvest about 91110 ML of rainwater by 2051 (assuming constant annual rainfall at the present level). However, only 17.71 ML of rainwater was harvested from 53 buildings in the Gomti Nagar area in the year 2021. The water level in Gomti Nagar will rise by 13 cm from such groundwater recharge. The total annual groundwater abstraction from the Gomti Nagar area was 35332 ML (in 2021). Due to hydrogeological constraints and lower annual rainfall, groundwater recharge is less than groundwater abstraction. In the current scenario, only 0.07% of rainwater recharges groundwater through RTRWHs in Gomti Nagar, but if RTRWHs were installed in all buildings, 12.39% of rainwater could recharge the groundwater table in the Gomti Nagar area. Gomti Nagar is situated in 'Zone–A' (water distribution area) and groundwater is the primary source of freshwater supply. In Gomti Nagar, the difference between groundwater abstraction and recharge will be 735570 ML over 30 years. Statistically, all buildings at Gomti Nagar (new and renovated) could harvest 3037 ML of rainwater through RTRWHs annually. The most recent monsoonal recharge in Gomti Nagar was 10813 ML/yr. Harvested rainwater collected from RTRWHs can be used for rooftop irrigation and residential kitchens and gardens (home-grown fruit and vegetables). According to bylaws, RTRWH installation is required in both newly constructed and existing buildings with plot areas of 300 sq. m or above. Harvested rainwater is of higher quality than contaminated groundwater, and households harvesting rainwater from RTRWHs can be considered water self-sufficient. Rooftop Rainwater Harvesting Systems (RTRWHs) are the least expensive, most eco-friendly, and most sustainable alternative water resource for artificial recharge. This study also predicts about 3.9 m of water level rise in the Gomti Nagar area by 2051, but only when all buildings install RTRWHs and harvest rainwater for groundwater recharging. As a result, this current study responds to an impact assessment study of RTRWHs implementation for the water scarcity problem in the Gomti Nagar area (1.36 sq. km). This study suggests that common storage tanks (recharge wells) should be built for groups of at least ten (10) households so that an optimal amount of harvested rainwater can be stored annually. Artificial recharge from alternative water sources will be required to improve the declining water level trend and balance the groundwater table in this area. This over-exploitation of groundwater may lead to land subsidence and the development of vertical cracks.
Keywords: aquifer, aquitard, artificial recharge, bylaws, groundwater, monsoon, rainfall, rooftop rainwater harvesting system (RTRWHs), water table, water level
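As a rough illustration of the type of water-balance arithmetic behind such figures (the rooftop fraction, rainfall depth, runoff coefficient, and specific yield below are assumed values, not the study's parameters), the annual rooftop harvest and the corresponding water-table rise can be estimated as follows.

```python
# Illustrative rooftop rainwater-harvesting estimate; all inputs are assumptions.
roof_area_m2 = 49.2e6 * 0.10      # assume ~10% of the 49.2 sq. km area is rooftop
annual_rainfall_m = 0.80          # assumed mean annual rainfall (m)
runoff_coefficient = 0.85         # typical value for hard roof surfaces

harvest_m3 = roof_area_m2 * annual_rainfall_m * runoff_coefficient
harvest_ml = harvest_m3 / 1000.0  # 1 ML = 1000 m^3
print(f"Annual rooftop harvest: {harvest_ml:,.0f} ML")

# Water-table rise if the harvest recharges the aquifer beneath the same area.
specific_yield = 0.12             # assumed aquifer specific yield
rise_m = harvest_m3 / (49.2e6 * specific_yield)
print(f"Approximate water-table rise: {rise_m:.2f} m per year")
```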
651 Development of Ecological Internal Insulation Composite Boards for Innovative Retrofitting of Heritage Buildings
Authors: J. N. Nackler, K. Saleh Pascha, W. Winter
Abstract:
WHISCERS™ (Whole House In-Situ Carbon and Energy Reduction Solution) is an innovative process for Internal Wall Insulation (IWI) for energy-efficient retrofitting of heritage buildings, which uses laser measuring to determine the dimensions of a room, off-site insulation board cutting and rapid installation to complete the process. As part of a multinational investigation consortium, the Austrian partner adapted the WHISCERS system to the local conditions of Vienna, where most historical buildings have valuable stucco facades, precluding the application of external insulation. The Austrian project contribution addresses the replacement of the commonly used extruded polystyrene foam (XPS) with renewable materials such as wood and wood products to develop a more sustainable IWI system. As the timber industry is a major industry in Austria, a new, innovative and more sustainable IWI solution could also open up new markets. The first step of the investigation was a Life Cycle Assessment (LCA) to define the performance of wood fibre board as insulation material in comparison to the normally used XPS boards. As one of the results, the global warming potential (GWP) of wood fibre board is 15 times less in carbon dioxide equivalents, while in the case of XPS it is 72 times more. The hygrothermal simulation program WUFI was used to evaluate and simulate heat and moisture transport in multi-layer building components of the developed IWI solution. The results of the simulations prove, under the examined boundary conditions of selected representative brickwork constructions, that the proposed IWI is functional and usable without risk regarding vapour diffusion and liquid transport. In a further stage, three different solutions were developed and tested (1 - glued/mortared, 2 - with soft board, connected to the wall, with gypsum board as top layer, 3 - with soft board and clay board as top layer). All three solutions present a flexible insulation layer of wood fibre towards the existing wall, thus compensating for irregularities of the wall surface. From first considerations at the beginning of the development phase, the three different systems were developed, optimized according to assembly technology, and tested as small specimens under real object conditions. The built prototypes are monitored to detect performance and building physics problems and to validate the results of the computer simulation model. This paper illustrates the development and application of the internal wall insulation system.
Keywords: internal insulation, wood fibre, hygrothermal simulations, monitoring, clay, condensate
650 Specific Earthquake Ground Motion Levels That Would Affect Medium-To-High Rise Buildings
Authors: Rhommel Grutas, Ishmael Narag, Harley Lacbawan
Abstract:
Construction of high-rise buildings is a means to address the increasing population in Metro Manila, Philippines. The existence of the Valley Fault System within the metropolis and other nearby active faults poses threats to a densely populated city. Distant, shallow and large-magnitude earthquakes have the potential to generate slow and long-period vibrations that would affect medium-to-high rise buildings. Heavy damage and building collapse are consequences of prolonged shaking of the structure. If the ground and the building have almost the same period, there would be a resonance effect which would cause prolonged shaking of the building. Microzoning the long-period ground response would aid in the seismic design of medium- to high-rise structures. The shear-wave velocity structure of the subsurface is an important parameter for evaluating ground response. Borehole drilling is one of the conventional methods of determining shear-wave velocity structure; however, it is an expensive approach. As an alternative geophysical exploration method, microtremor array measurements can be used to infer the structure of the subsurface. A microtremor array measurement system was used to survey fifty sites around Metro Manila, including some municipalities of Rizal and Cavite. Measurements were carried out during the day under good weather conditions. The team was composed of six persons for the deployment and simultaneous recording of the microtremor array sensors. The instruments were laid down on the ground away from sewage systems and leveled using the adjustment legs and bubble level. A total of four sensors were deployed for each site, three at the vertices of an equilateral triangle with one sensor at the centre. The circular arrays were set up with a maximum side length of approximately four kilometers, and the shortest side length for the smallest array was approximately 700 meters. Each recording lasted twenty to sixty minutes. From the recorded data, f-k analysis was applied to obtain phase velocity curves, and an inversion technique was applied to construct the shear-wave velocity structure. This project provided a microzonation map of the metropolis and a profile showing the long-period response of the deep sedimentary basin underlying Metro Manila, which would be suitable for local administrators in their land use planning and earthquake-resistant design of medium- to high-rise buildings.
Keywords: earthquake, ground motion, microtremor, seismic microzonation
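The resonance concern described above can be made concrete with standard first-order estimates: the site fundamental period T = 4H/Vs, obtained from the shear-wave velocity structure that the microtremor arrays resolve, and a building fundamental period of roughly 0.1 s per storey. The basin depth, average shear-wave velocity, and storey count in the sketch below are assumed example values, not survey results.

```python
# First-order resonance check; H, Vs and storey count are assumed examples.
sediment_thickness_m = 800.0       # assumed depth of the sedimentary basin (H)
avg_shear_wave_velocity = 1000.0   # assumed average Vs of the sediments (m/s)
storeys = 30                       # medium-to-high-rise building

site_period = 4.0 * sediment_thickness_m / avg_shear_wave_velocity   # T = 4H/Vs
building_period = 0.1 * storeys                                      # T ~ 0.1 N

print(f"Site fundamental period:     {site_period:.1f} s")
print(f"Building fundamental period: {building_period:.1f} s")
if abs(site_period - building_period) < 0.5:
    print("Periods are close: prolonged shaking from resonance is a concern.")
```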
649 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study
Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla
Abstract:
With the increasing severity and frequency of disasters worldwide, the performance of governance systems for disaster risk reduction and management in many countries is being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121 or RA 10121), as the framework for disaster risk reduction and management, was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered to be the strongest recorded typhoon in history to make landfall, with winds exceeding 252 km/hr. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans and programs, and key interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis argues that enhancements are needed in RA 10121 in order to meet the challenges of large-scale disasters. The current structure, where government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and rehabilitation and recovery, proved to be inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters has shown the Philippine government's tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We argue that this sectoral approach is more effective than the thematic approach to DRRM. The council-type arrangement for coordination has also been rendered inoperable by Typhoon Haiyan, because the agency responsible for coordination does not have decision-making authority to mobilize the action and resources of other agencies which are members of the council. Resources have been devolved to agencies responsible for each thematic area, and there is no clear command and direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We argue that this approach be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.
Keywords: risk reduction and management, recovery, governance, Typhoon Haiyan response and recovery
648 The Influence of Atmospheric Air on the Health of the Population Living in Oil and Gas Production Area in Aktobe Region, Kazakhstan
Authors: Perizat Aitmaganbet, Kerbez Kimatova, Gulmira Umarova
Abstract:
As a result of a medical check-up conducted in the framework of this research study, an evaluation of the health status of the population living in the oil-producing regions, namely Sarkul and Kenkiyak villages in Aktobe, was carried out. With the help of the Spearman correlation, the connection between the level of hazardous chemical substances in the atmosphere and the health of the population living in the regions of the oil and gas industry was estimated. Background & Objective: The oil and gas resource-extraction industries play an important role in improving the economic conditions of the Republic of Kazakhstan, especially for the oil-producing administrative regions. However, environmental problems may adversely affect the health of people living in those areas. Thus, the aim of the study is to evaluate the exposure to negative environmental factors of the adult population living in Sarkul and Kenkiyak villages, the oil and gas producing areas in the Aktobe region. Methods: A single cross-sectional study was conducted after a medical check-up among the population of Sarkul and Kenkiyak villages. The study population consisted of 372 randomly sampled adults (181 males and 191 females). Also, atmospheric air samples were taken to measure the level of hazardous chemical substances in the air. The nonparametric Spearman correlation analysis was performed between the mean concentrations of substances exceeding the Maximum Permissible Concentration (MPC) and the classes of newly diagnosed diseases. Selection and analysis of air samples were carried out according to the developed research protocol; the qualitative-quantitative analysis was carried out on the Gas analyzer HANK-4 apparatus. Findings: The medical examination of the population identified the following diseases: the two dominant classes were diseases of the circulatory and digestive systems, in third place were diseases of the genitourinary system, and diseases of the nervous system and of the ear and mastoid process were in fourth and fifth places. Moreover, significant pollution of atmospheric air by carbon monoxide (MPC 5.0 mg/m3), benzopyrene (MPC 1 mg/m3), dust (MPC 0.5 mg/m3) and phenol (MPC 0.035 mg/m3) was identified in places. Correlations between these air pollutants and the diseases of the population were established: with diseases of the circulatory system (r = 0.7), ear and mastoid process (r = 0.7), nervous system (r = 0.6) and digestive organs (r = 0.6); between the concentration of carbon monoxide and diseases of the circulatory system (r = 0.6), the digestive system (r = 0.6), the genitourinary system (r = 0.6) and the musculoskeletal system; between nitric oxide and diseases of the digestive system (r = 0.7) and the circulatory system (r = 0.6); and between benzopyrene and diseases of the digestive system (r = 0.6), the genitourinary system (r = 0.6) and the nervous system (r = 0.4). Conclusion: A positive correlation was found between air pollution and the health of the population living in Sarkul and Kenkiyak villages. To enhance the reliability of the results, we are going to continue this study further.
Keywords: atmospheric air, chemical substances, oil and gas, public health
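A minimal sketch of the Spearman analysis described in the Methods, correlating mean pollutant concentrations with counts of newly diagnosed diseases, is given below; the data values and variable names are hypothetical placeholders rather than the study's measurements.

```python
# Hypothetical illustration of the Spearman correlation step.
from scipy.stats import spearmanr

# Mean CO concentration (mg/m3) and newly diagnosed circulatory-system cases
# per sampling site; values are invented for illustration only.
co_concentration = [4.8, 6.1, 5.4, 7.2, 6.8, 5.0, 7.9, 6.3]
circulatory_cases = [12, 18, 14, 22, 20, 13, 25, 17]

rho, p_value = spearmanr(co_concentration, circulatory_cases)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```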
647 Reducing Falls in Memory Care through Implementation of the Stopping Elderly Accidents, Deaths, and Injuries Program
Authors: Cory B. Lord
Abstract:
Falls among the elderly population have become an area of concern in healthcare today. The negative impacts of falls lead to increased morbidity, mortality, and financial burdens for both patients and healthcare systems. Falls in the United States are reported at an annual rate of 36 million in those aged 65 and older. Each year, one out of four people in this age group will suffer a fall, with 20% of these falls causing injury. The setting for this Doctor of Nursing Practice (DNP) project was a memory care unit in an assisted living community, as these facilities house cognitively impaired older adults. These communities lack fall prevention programs; therefore, the need exists to add to the body of knowledge to positively impact this population. The objective of this project was to reduce fall rates through the implementation of the Centers for Disease Control and Prevention (CDC) STEADI (Stopping Elderly Accidents, Deaths, and Injuries) program. The DNP project performed was a quality improvement pilot study with a pre- and post-test design. This program was implemented in the memory care setting over 12 weeks. The project included an educational session for staff and a fall risk assessment with appropriate resident referrals. The three aims of the DNP project were to reduce fall rates among the elderly aged 65 and older who reside in the memory care unit, increase staff knowledge of STEADI fall prevention measures after an educational session, and assess the willingness of memory care unit staff to adopt an evidence-based fall prevention program. The Donabedian model was used as a guiding conceptual framework for this quality improvement pilot study. The fall rate data for the 12 months before the intervention were evaluated and compared to post-intervention fall rates. The educational session comprised a pre- and post-test to assess staff knowledge of the fall prevention program and the willingness of staff to adopt the fall prevention program. The overarching goal was to reduce falls in the elderly population who live in memory care units. The results of the study showed that, on average, the fall rate during the implementation period of STEADI (μ = 6.79) was significantly lower when compared to the prior 12 months (μ = 9.50) (p = 0.02, α = 0.05). The mean staff knowledge scores improved from pre-test (μ = 77.74%) to post-test (μ = 87.42%) (p = 0.00, α = 0.05) after the education session. The results of the willingness to adopt a fall prevention program were scored at 100%. In summation, implementing the STEADI fall prevention program can assist in reducing fall rates for residents aged 65 and older who reside in a memory care setting.
Keywords: dementia, elderly, falls, STEADI
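The pre/post comparison of fall rates reported above can be illustrated with a short sketch. The monthly fall counts below are invented placeholders chosen only to match the reported means of 9.50 and 6.79 approximately, and the two-sample t-test is shown as one plausible way to compare the periods, not necessarily the exact test used in the study.

```python
# Hypothetical monthly fall counts; only the approximate means reflect the abstract.
from scipy import stats

pre_intervention = [10, 9, 11, 8, 10, 9, 12, 8, 10, 9, 9, 9]   # 12 months, mean ~9.5
post_intervention = [7, 6, 8, 7, 6, 7]                          # intervention period, mean ~6.8

t_stat, p_value = stats.ttest_ind(pre_intervention, post_intervention)
print(f"pre mean  = {sum(pre_intervention) / len(pre_intervention):.2f}")
print(f"post mean = {sum(post_intervention) / len(post_intervention):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```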
646 Necessity for a Standardized Occupational Health and Safety Management System: An Exploratory Study from the Danish Offshore Wind Sector
Authors: Dewan Ahsan
Abstract:
Denmark is well ahead in generating electricity from renewable sources, and the offshore wind sector plays the pivotal role in achieving this target. Though there is rapid growth of the offshore wind sector in Denmark, there is still a dearth of synchronization in OHS (occupational health and safety) regulation and standards. Therefore, this paper attempts to ascertain: i) what are the major challenges of company-specific OHS standards? ii) why does the offshore wind industry need a standardized OHS management system? and iii) who can play the key role in this process? To achieve these objectives, this research applies interview and survey techniques. This study has identified several key challenges in the OHS management system, which are: gaps in coordination and communication among the stakeholders, gaps in incident reporting systems, absence of a harmonized OHS standard, and a blame culture. Furthermore, this research has identified eleven key stakeholders who are actively involved with the offshore wind business in Denmark. As noticed, the relationships among these stakeholders are very complex, especially between operators and sub-contractors. The responding technicians are concerned with compliance with various third-party OHS standards (e.g. ISO 31000, ISO 29400, good practice guidelines by G+) which are applied by various offshore companies. On top of these standards, operators also impose their own OHS standards. From the technicians' point of view, many of these standards are not even specific to the offshore wind sector. So, it is a big challenge for the technicians and sub-contractors to comply with different company-specific standards, which also elevates the price of the services they offer to the operators. For instance, when a sub-contractor is competing in a bidding process, it must fulfill a number of OHS requirements (which demand much extra documentation) set by the individual operator and/or the turbine supplier. From the sub-contractors' point of view, this extra documentation consumes too much time when preparing the bidding documents, and they also need to train their employees to pass specific OHS certification courses to meet the demands of individual clients and individual projects. The sub-contractors argued that in many cases this extra documentation and these OHS certificates are inessential to ensuring quality service. So, a standardized OHS management procedure (which could be applicable for all clients) can easily solve this problem. In conclusion, this study highlights that i) development of a harmonized OHS standard applicable for all operators and turbine suppliers, ii) encouragement of technicians' active participation in OHS management, iii) development of good safety leadership, and iv) sharing of experiences among the stakeholders (especially operators-operators-sub-contractors) are the most vital strategies to overcome the existing challenges and to achieve the goal of 'zero accident/harm' in the offshore wind industry.
Keywords: green energy, offshore, safety, Denmark
645 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System
Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal
Abstract:
The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload would be analyzed under a microgravity environment, and due to the nature of the payload itself, the speed of the UAS must be reduced in a smooth way. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model is made up of a study pod, control surfaces with fixed and mobile sections, landing gear and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the airfoil S1091 has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained employing CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The refinement of the mesh is made by defining an element size of 0.004 m on the wing and control surfaces in order to capture the fluid behavior in the most important zones, as well as accurate approximations of the Cd. The turbulent model k-epsilon is selected to solve the governing equations of the fluid, while a couple of monitors are placed on both the wing and the whole vehicle to visualize the variation of the coefficients during the simulation process. Employing a statistical response surface methodology, the case of study is parametrized considering the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA were determined. Using these values, the terminal speed at each position is calculated considering a specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage without compromising the payload can be determined. The Cd max of the vehicle is 1.18, so its maximum drag will be almost like the drag generated by a parachute. This guarantees that aerodynamically the vehicle can be braked, so it could be utilized for several missions, allowing repeatability of microgravity experiments.
Keywords: microgravity effect, response surface, terminal speed, unmanned system
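To show how the maximum drag coefficient translates into a landing speed, a short sketch of the terminal-velocity calculation, v_t = sqrt(2*m*g / (rho*A*Cd)), is given below; the vehicle mass and reference area are assumed example values, while Cd = 1.18 is the maximum reported in the abstract.

```python
# Terminal-velocity estimate from the drag coefficient; mass and area are assumed.
import math

mass_kg = 25.0        # assumed vehicle + payload mass
ref_area_m2 = 1.2     # assumed reference (wing) area at AoA = 80 deg
cd_max = 1.18         # maximum drag coefficient reported for the vehicle
rho_air = 1.225       # sea-level air density (kg/m^3)
g = 9.81

v_terminal = math.sqrt(2.0 * mass_kg * g / (rho_air * ref_area_m2 * cd_max))
print(f"Terminal speed with Cd = {cd_max}: {v_terminal:.1f} m/s")
```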
644 Co-Creation of Content with the Students in Entrepreneurship Education to Capture Entrepreneurship Phenomenon in an Innovative Way
Authors: Prema Basargekar
Abstract:
Facilitating the subject 'Entrepreneurship Education' in higher education, such as management studies, can be exhilarating as well as challenging. It is a multi-disciplinary and ever-evolving subject. Capturing entrepreneurship as a phenomenon in a holistic manner is a daunting task, as it requires covering various dimensions such as new idea generation, entrepreneurial traits, scanning of business opportunities, the role of policymakers, value creation, etc., to name a few. Implicit entrepreneurship theory and effectuation are two different theories that focus on engaging the participants to create content by using their own experiences, perceptions, and belief systems. This helps in understanding the phenomenon holistically. The assumption here is that all of us are part of the entrepreneurial ecosystem, and effective learning can come through active engagement and peer learning by all the participants together. The present study is an attempt to use these theories in a class assignment given to the students at the beginning of the course to build the course content and understand entrepreneurship as a phenomenon in a better way through peer learning. The assignment was given to three batches of post-graduate MBA students doing the program in one of the private business schools in India. The subject 'Entrepreneurship Management' is facilitated in the third trimester of the first year. At the beginning of the course, the students were asked to submit a brief write-up/collage/picture/poem or any other format about "What entrepreneurship means to you?" They were asked to give their candid opinions about entrepreneurship as a phenomenon as they perceive it. Nearly 156 post-graduate MBA students submitted the assignment. These assignments were further used to find answers to two research questions: 1) Are students able to use divergent and innovative forms to express their opinions, such as poetry, illustrations, videos, etc.? 2) What various dimensions of entrepreneurship emerge to help understand the phenomenon in a better way? The study uses the Braun and Clarke framework of reflexive thematic analysis for qualitative analysis. The study finds that students responded to this assignment enthusiastically and expressed their thoughts in multiple ways, such as poetry, illustration, personal narrative, videos, etc. The content analysis revealed that there could be seven dimensions to looking at entrepreneurship as a phenomenon. They are 1) entrepreneurial traits, 2) entrepreneurship as a journey, 3) value creation by entrepreneurs in terms of economic and social value, 4) entrepreneurial role models, 5) new business ideas and innovations, 6) personal entrepreneurial experiences and aspirations, and 7) the entrepreneurial ecosystem. The study concludes that an implicit approach to facilitating entrepreneurship education helps in understanding it as a live phenomenon. It also encourages students to apply divergent and convergent thinking, and it helps in triggering new business ideas or stimulating the entrepreneurial aspirations of the students. The significance of the study lies in the application of implicit theories in the classroom to make higher education more engaging and effective.
Keywords: co-creation of content, divergent thinking, entrepreneurship education, implicit theory
643 Analytical Tools for Multi-Residue Analysis of Some Oxygenated Metabolites of PAHs (Hydroxylated, Quinones) in Sediments
Authors: I. Berger, N. Machour, F. Portet-Koltalo
Abstract:
Polycyclic aromatic hydrocarbons (PAHs) are toxic and carcinogenic pollutants produced mainly by incomplete combustion processes in industrialized and urbanized areas. After being emitted into the atmosphere, these persistent contaminants are deposited to soils or sediments. Even if persistent, some can be partially degraded (photodegradation, biodegradation, chemical oxidation), and they lead to oxygenated metabolites (oxy-PAHs) which can be more toxic than their parent PAH. Oxy-PAHs are measured less often than PAHs in sediments, and this study aims to compare different analytical tools in order to extract and quantify a mixture of four hydroxylated PAHs (OH-PAHs) and four carbonyl PAHs (quinones) in sediments. Methodologies: Two analytical systems – HPLC with on-line UV and fluorescence detectors (HPLC-UV-FLD) and GC coupled to a mass spectrometer (GC-MS) – were compared to separate and quantify oxy-PAHs. Microwave assisted extraction (MAE) was optimized to extract oxy-PAHs from sediments. Results: First, OH-PAHs and quinones were analyzed by HPLC with on-line UV and fluorimetric detectors. OH-PAHs were detected with the sensitive FLD, but the non-fluorescent quinones were detected with UV. The limits of detection (LODs) obtained were in the range (2-3)×10⁻⁴ mg/L for OH-PAHs and (2-3)×10⁻³ mg/L for quinones. Second, even if GC-MS is not well adapted to the analysis of the thermodegradable OH-PAHs and quinones without any derivatization step, it was used because of the advantages of the detector in terms of identification and of GC in terms of efficiency. Without derivatization, only two of the four quinones were detected in the range 1-10 mg/L (LODs = 0.3-1.2 mg/L), and the LODs were not very satisfactory for the four OH-PAHs either (0.18-0.6 mg/L). So two derivatization processes were optimized, with reference to the literature: one for silylation of OH-PAHs, one for acetylation of quinones. Silylation using BSTFA/TMCS 99/1 was enhanced by using a mixture of catalyst solvents (pyridine/ethyl acetate) and finding the appropriate reaction duration (5-60 minutes). Acetylation was optimized at different steps of the process, including the initial volume of compounds to derivatize, the added amounts of Zn (0.1-0.25 g), the nature of the derivatization product (acetic anhydride, heptafluorobutyric acid…) and the liquid/liquid extraction at the end of the process. After derivatization, LODs were decreased by a factor of 3 for OH-PAHs and by a factor of 4 for quinones, all the quinones now being detected. Thereafter, quinones and OH-PAHs were extracted from spiked sediments using microwave assisted extraction (MAE) followed by GC-MS analysis. Several mixtures of solvents of different volumes (10-25 mL) and different extraction temperatures (80-120°C) were tested to obtain the best recovery yields. Satisfactory recoveries could be obtained for quinones (70-96%) and for OH-PAHs (70-104%). Temperature was a critical factor which had to be controlled to avoid oxy-PAH degradation during the MAE extraction process. Conclusion: Even if MAE-GC-MS was satisfactory for analyzing these oxy-PAHs, MAE optimization has to be continued to obtain a more appropriate extraction solvent mixture, allowing direct injection into the HPLC-UV-FLD system, which is more sensitive than GC-MS and does not necessitate a long prior derivatization step.
Keywords: derivatizations for GC-MS, microwave assisted extraction, on-line HPLC-UV-FLD, oxygenated PAHs, polluted sediments
642 Spatial Deictics in Face-to-Face Communication: Findings in Baltic Languages
Authors: Gintare Judzentyte
Abstract:
The present research aims to discuss the semantics and pragmatics of spatial deictics (deictic adverbs of place and demonstrative pronouns) in the Baltic languages: in spoken Lithuanian and in spoken Latvian. The following objectives have been identified to achieve this aim: 1) to determine the usage of adverbs of place in spoken Lithuanian and Latvian and to verify their meanings in face-to-face communication; 2) to determine the usage of demonstrative pronouns in spoken Lithuanian and Latvian and to verify their meanings in face-to-face communication; 3) to compare the systems of the two spoken languages and to identify the main tendencies. As the meanings of demonstratives (adverbs of place and demonstrative pronouns) are context-bound, it is necessary to verify their usage in spontaneous interaction. Besides, deictic gestures play a very important role in face-to-face communication. Therefore, an experimental method is necessary to collect the data. Video material representing spoken Lithuanian and spoken Latvian was recorded by means of the qualitative interview method (a semi-structured interview: empirical research is all about asking the right questions). The collected material was transcribed and evaluated taking into account several approaches: 1) physical distance (location of the referent, visual accessibility of the referent); 2) deictic gestures (the combination of language and gesture is especially characteristic of the exophoric use); 3) representation of mental spaces in physical space (a speaker sometimes wishes to mark something that is physically close as psychologically distant and vice versa). The analysis of the collected data revealed that in face-to-face communication the participants choose deictic adverbs of place instead of demonstrative pronouns to locate/identify entities in situations where demonstrative pronouns would be expected, in spoken Lithuanian and in spoken Latvian. The analysis showed that visual accessibility of the referent is very important in face-to-face communication, but the main criterion when localizing objects and entities is the need for contrast: Lith. čia 'here', šis 'this', Latv. šeit 'here', šis 'this' usually identify distant entities and are used instead of distal demonstratives (Lith. ten 'there', tas 'that', Latv. tur 'there', tas 'that'), because the referred objects/subjects contrast with further entities. Furthermore, the interlocutors in examples from spontaneously situated interaction usually extend their space and can refer to a 'distal' object/subject with a 'proximal' demonstrative based on psychological choice. As the research on the spoken Baltic languages confirmed, the choice of spatial deictics in face-to-face communication is strongly affected by a complex of criteria. Although there are some main tendencies, the exact meaning of spatial deictics in the spoken Baltic languages is revealed and is relevant only in a certain context.
Keywords: Baltic languages, face-to-face communication, pragmatics, semantics, spatial deictics
641 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database, two of which use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE) to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known gas geothermometers (previously developed), was statistically evaluated by using an external database to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
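A compact sketch of the regression task described above, mapping the natural logarithms of the gas-phase variables to measured bottomhole temperatures and scoring the fit with RMSE, is shown below. It uses scikit-learn's MLPRegressor with the L-BFGS solver as a stand-in for the Levenberg-Marquardt training used in the study, and the data file and column names are assumptions.

```python
# Illustrative stand-in for the ANN geothermometer calibration (not the study's code).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

df = pd.read_csv("geothermal_gas_chemistry.csv")    # assumed file and column layout
X = np.log(df[["CO2", "H2S", "H2S_over_H2", "CO2_over_H2"]].values)  # mmol/mol and ratios
y = df["BHT_measured"].values                        # bottomhole temperature (deg C)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

ann = MLPRegressor(hidden_layer_sizes=(6,), solver="lbfgs", max_iter=5000, random_state=1)
ann.fit(X_train, y_train)

rmse = mean_squared_error(y_test, ann.predict(X_test)) ** 0.5
print(f"RMSE between BHT_ANN and BHT_measured: {rmse:.1f} deg C")
```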
Procedia PDF Downloads 351640 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning
Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih
Abstract:
Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields of study, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and the disease types cured by aromatic medicinal plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and disease types cured by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process. To alleviate these challenges, a few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts with their corresponding disease type. The objective of the proposed study is to classify aromatic medicinal plant parts and their disease types using computer vision technology. Therefore, this research initiated a model for the classification of aromatic medicinal plant parts and their disease type by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants besides roots, flowers, fruits, and latex. For this study, the researcher used RGB leaf images with a size of 128x128x3. In this study, the researchers trained five cutting-edge models: convolutional neural network, Inception V3, Residual Neural Network, Mobile Network, and Visual Geometry Group. Those models were chosen after a comprehensive review of the best-performing models. The 80/20 percentage split is used to evaluate the model, and classification metrics are used to compare models. The pre-trained Inception V3 model performs best, with training and validation accuracy of 99.8% and 98.7%, respectively.Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network
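A minimal sketch of the kind of transfer-learning setup described above: a pre-trained Inception V3 base on 128x128x3 leaf images with an 80/20 train/validation split. The directory name, number of classes, and training schedule are assumptions for illustration, not details taken from the study.

    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import InceptionV3

    NUM_CLASSES = 5  # assumed number of plant-part/disease classes

    # Hypothetical directory of 128x128 RGB leaf images, split 80/20
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "leaf_images", validation_split=0.2, subset="training",
        seed=42, image_size=(128, 128), batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "leaf_images", validation_split=0.2, subset="validation",
        seed=42, image_size=(128, 128), batch_size=32)

    base = InceptionV3(include_top=False, weights="imagenet", input_shape=(128, 128, 3))
    base.trainable = False  # use the pre-trained convolutional base as a feature extractor

    model = models.Sequential([
        layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1] as InceptionV3 expects
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=10)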
Procedia PDF Downloads 186639 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid
Authors: Benjamin Blat Belmonte, Stephan Rinderknecht
Abstract:
The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market
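A minimal sketch of the kind of mixed-integer linear program described above: scheduling depot charging of a bus fleet against hourly electricity prices while meeting the energy needed for service. The prices, fleet parameters, and service hours are invented for illustration and do not come from the Darmstadt case study.

    import pulp

    hours = range(24)
    price = [80, 75, 70, 65, 60, 62, 90, 110, 120, 100, 95, 90,
             85, 80, 78, 82, 95, 130, 140, 120, 100, 90, 85, 80]  # EUR/MWh, assumed
    max_charge_mw = 3.0        # assumed depot charging capacity
    energy_needed_mwh = 18.0   # assumed daily energy demand of the bus fleet
    service_hours = set(range(6, 22))  # buses on the road, cannot charge

    prob = pulp.LpProblem("fleet_charging", pulp.LpMinimize)
    charge = {t: pulp.LpVariable(f"charge_{t}", 0, max_charge_mw) for t in hours}
    on = {t: pulp.LpVariable(f"on_{t}", cat="Binary") for t in hours}  # charger on/off

    prob += pulp.lpSum(price[t] * charge[t] for t in hours)            # minimize energy cost
    prob += pulp.lpSum(charge[t] for t in hours) == energy_needed_mwh  # meet the fleet's demand
    for t in hours:
        prob += charge[t] <= max_charge_mw * on[t]   # link charging power to the on/off decision
        if t in service_hours:
            prob += on[t] == 0                        # no charging while buses are in service

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print("charging cost, EUR:", pulp.value(prob.objective))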
Procedia PDF Downloads 74638 Upward Spread Forced Smoldering Phenomenon: Effects and Applications
Authors: Akshita Swaminathan, Vinayak Malhotra
Abstract:
Smoldering is one of the most persistent types of combustion and can take place for very long periods (hours, days, months) if there is an abundance of fuel. It causes a notable number of accidents and is one of the prime suspects in fire and safety hazards. It can be ignited by weaker ignition sources and is more difficult to suppress than flaming combustion. Upward spread smoldering is the case in which the air flow is parallel to the direction of the smoldering front. This type of smoldering is quite uncontrollable, and hence there is a need to study this phenomenon. Compared to flaming combustion, the smoldering phenomenon often goes unrecognised and hence is a cause of various fire accidents. A simplified experimental setup was built to study upward spread smoldering, its effects due to varying forced flow, and its effects when it takes place in the presence of external heat sources and alternative energy sources such as acoustic energy. Linear configurations were studied depending on varying forced flow effects on upward spread smoldering. The effect of varying forced flow on upward spread smoldering was observed and studied: (i) in the presence of an external heat source and (ii) in the presence of external alternative energy sources (acoustic energy). The role of ash removal was also observed and studied. Results indicate that upward spread forced smoldering was affected by various key controlling parameters such as the speed of the forced flow, surface orientation, and interspace distance (distance between the forced flow and the pilot fuel). When an external heat source was placed on either side of the pilot fuel, it was observed that the smoldering phenomenon was affected. The surface orientation and interspace distance between the external heat sources and the pilot fuel were found to play a huge role in altering the regression rate. Lastly, by impinging an alternative energy source in the form of acoustic energy on the smoldering front, it was observed that varying frequencies affected the smoldering phenomenon in different ways. The surface orientation also played an important role. This project highlights the importance of fire and safety hazards and of means of better combustion for all kinds of scientific research and practical applications. The knowledge acquired from this work can be applied to various engineering systems ranging from aircraft and spacecraft to building fires and wildfires, and helps us better understand and hence avoid such widespread fires. Various fire disasters have been recorded in aircraft due to small electrical short circuits that led to smoldering fires; these eventually caused engines to catch fire, with damage to life and property. Studying this phenomenon can help us to control, if not prevent, such disasters.Keywords: alternative energy sources, flaming combustion, ignition, regression rate, smoldering
Procedia PDF Downloads 144637 Examining Smallholder Farmers’ Perceptions of Climate Change and Barriers to Strategic Adaptation in Todee District, Liberia
Authors: Joe Dorbor Wuokolo
Abstract:
Thousands of smallholder farmers in Todee District, Montserrado County, are currently vulnerable to the negative impact of climate change. The district, which is the agricultural hot spot for the county, is faced with unfavorable changes in the daily temperature due to climate change. Farmers in the district have observed a dramatic change in the ratio of rainfall to sunshine, which has had a detrimental effect on their crop yields. However, there is a lack of documentation regarding how farmers perceive and respond to these changes and challenges. A study was conducted in the region to examine the perceptions of smallholder farmers regarding the negative impact of climate change, the adaptation strategies practised, and the barriers that hinder the advancement of adaptation strategies. A purposive sample of 41 respondents from five towns was selected, including five town chiefs, five youth leaders, five women leaders, and sixteen community members. Women and youth leaders were specifically chosen to provide gender balance and enhance the quality of the investigation. Additionally, to validate the barriers farmers face during adaptation to climate change, this study interviewed eight experts from local and international organizations and government ministries and agencies involved in climate change and agricultural programs on what they perceived as the major barriers, at both local and national levels, that impede farmers' adaptation to climate change impacts. SPSS was used to code the data, and descriptive statistics were used to analyze the data. The weighted average index (WAI) was used to rank adaptation strategies and the perceived importance of adaptation practices among farmers. On a scale from 0 to 3, 0 indicates the least important technique, and 3 indicates the most effective technique. In addition, the Problem Confrontation Index (PCI) was used to rank the barriers that prevented farmers from implementing adaptation measures. According to the findings, approximately 60% of all respondents considered the use of irrigation systems to be the most effective adaptation strategy, with drought-resistant varieties making up 30% of the total. Additionally, 80% of respondents placed a high value on drought-resistant varieties, while 63% placed it on irrigation practices. In addition, 78% of farmers indicated that unpredictability of the weather is the most significant barrier to their adaptation strategies, followed by the high cost of farm inputs and lack of access to financing facilities. 80% of respondents believe that the long-term changes in precipitation (rainfall) and temperature (hotness) are accelerating. This suggests that decision-makers should adopt policies and increase the capacity of smallholder farmers to adapt to the negative impact of climate change in order to ensure sustainable food production.Keywords: adaptation strategies, climate change, farmers’ perception, smallholder farmers
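A minimal sketch of how a weighted average index on the 0-3 scale described above can be computed. The abstract does not give the exact index definitions used in the study, so a common formulation, WAI = sum(f_i * w_i) / N, is assumed here, and the response counts are hypothetical.

    def weighted_average_index(counts, weights=(0, 1, 2, 3)):
        """counts[i] = number of respondents choosing scale point i (0 = least important, 3 = most effective)."""
        n = sum(counts)
        return sum(f * w for f, w in zip(counts, weights)) / n

    # Hypothetical response counts for two adaptation strategies (41 respondents)
    strategies = {
        "irrigation": (2, 5, 9, 25),
        "drought-resistant varieties": (3, 6, 12, 20),
    }

    for name, counts in sorted(strategies.items(),
                               key=lambda kv: weighted_average_index(kv[1]), reverse=True):
        print(name, round(weighted_average_index(counts), 2))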
Procedia PDF Downloads 82636 A Study on the Acquisition of Chinese Classifiers by Vietnamese Learners
Authors: Quoc Hung Le Pham
Abstract:
In the field of language study, the classifier is an interesting research feature. Among the world's languages, some have a classifier system and some do not. Mandarin Chinese and Vietnamese both have rich classifier systems; however, because of differences in language structure, cognition, and culture, the syntactic structure of their classifiers is also dissimilar. When used, Mandarin Chinese classifiers must collocate with nouns or verbs; as a lexical category, they do not, like nouns or verbs, belong to the open class. Some scholars, however, believe that Mandarin Chinese measure words are similar to those of English and other Indo-European languages: attached to structure and word formation (as suffixes), they form a closed class. Chinese, Vietnamese, Thai, and other Asian languages belong to the second type of classifier language, the so-called numeral classifier languages, in which the classifier must in most cases appear together with a numeral, deictic, or anaphoric element and is not separated from the noun it modifies. The main syntactic structures of Chinese classifiers are as follows: ‘quantity+measure+noun’, ‘pronoun+measure+noun’, ‘pronoun+quantity+measure+noun’, ‘prefix+quantity+measure+noun’, ‘quantity+adjective+measure+noun’, ‘quantity (above 10, whole number)+duo (多)+measure+noun’, ‘quantity (around 10)+measure+duo (多)+noun’. The main syntactic structures of Vietnamese classifiers are: ‘quantity+measure+noun’, ‘measure+noun+pronoun’, ‘quantity+measure+noun+pronoun’, ‘measure+noun+prefix+quantity’, ‘quantity+measure+noun+adjective', ‘duo (多)+quantity+measure+noun’, ‘quantity+measure+adjective+pronoun (the quantity word cannot be 1)’, ‘measure+adjective+pronoun’, ‘measure+pronoun’. Classifiers are commonly used in daily life, and if Chinese learners fail to use this category correctly, negative impacts may occur in their verbal communication. The richness of the Chinese classifier system contributes to the complexity of its study by foreign learners, especially in the interlanguage of Vietnamese learners. As mentioned above, the Vietnamese language also has a rich system of classifiers; the basic structural orders of the two languages are similar, but differences remain. These similarities and dissimilarities between the Chinese and Vietnamese classifier systems contribute significantly to the common errors made by Vietnamese students while they acquire Chinese, which are distinct from the errors made by students from other language backgrounds. This article takes a comparative linguistic perspective, examining commonly used Chinese and Vietnamese classifiers in two respects: semantics and structural form. This comparative study aims to identify the negative transfer from the mother tongue that Vietnamese students may face while learning Chinese classifiers and, through the analysis of a classifier questionnaire, to find out the causes and patterns of the errors they make. As the preliminary analysis shows, Vietnamese students learning Chinese classifiers made errors such as: overuse of the classifier ‘ge’ (个); misuse of other classifiers, e.g., ‘*yi zhang ri ji’ (yi pian ri ji), ‘*yi zuo fang zi’ (yi jian fang zi), ‘*si zhang jin pai’ (si mei jin pai); and confusion among the words ‘dui, shuang, fu, tao’ (对、双、副、套) and ‘ke, li’ (颗、粒).Keywords: acquisition, classifiers, negative transfer, Vietnamese learners
Procedia PDF Downloads 452635 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era will be interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. ‘The Author as Producer’ is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this article, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing is production and directly related to the base. Through it, he discusses what it could mean to see the author as a producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. So Benjamin concludes that the author must write in ways that relate to the conditions of production; he must do so in order to prepare his readers to become writers and even make this possible for them by engineering an ‘improved apparatus’, and must work toward turning consumers into producers and collaborators. In today’s world, it has become a leading business model within Web 2.0 services of multinational Internet technologies and culture industries like Amazon, Apple and Google to transform readers, spectators, consumers or users into collaborators and co-producers through platforms such as Facebook, YouTube and Amazon’s CreateSpace Kindle Direct Publishing print-on-demand, e-book and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. In these global-market monopolies, it has become increasingly difficult to get insight into how one’s writing and collaboration is used, captured, and capitalized as a user of Facebook or Google. Through the lens of this study, it could be argued that this criticism could very well be considered by digital producers or even by the mass of collaborators in contemporary social networking software. How do software and design incorporate users and their collaboration? Are they truly empowered, and are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, iPhone and Kindle without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the ‘authors’ and the ‘prodUsers’ in ways that secure their monopolistic business models by limiting the potential of the technology.Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 142634 Planning a European Policy for Increasing Graduate Population: The Conditions That Count
Authors: Alice Civera, Mattia Cattaneo, Michele Meoli, Stefano Paleari
Abstract:
Despite the fact that more equal access to higher education has been an objective of public policy for several decades, little is known about the effectiveness of alternative means for achieving such a goal. Indeed, nowadays, high levels of graduate population can be observed both in countries with high and low levels of fees, and with high and low levels of public expenditure in higher education. This paper surveys the extant literature, providing some background on the economic concepts of the higher education market, and reviews key determinants of demand and supply. A theoretical model of aggregate demand and supply of higher education is derived, with the aim of facilitating the understanding of the challenges in today’s higher education systems, as well as the opportunities for development. The model is validated on some exemplary case studies describing the different relationships between the level of public investment and the level of graduate population, and helps to derive general implications. In addition, using a two-stage least squares model, we build a macroeconomic model of supply and demand for European higher education. The model allows interpreting policies shifting either the supply or the demand for higher education, and allows taking into consideration contextual conditions with the aim of comparing divergent policies under a common framework. Results show that the same policy objective (i.e., increasing graduate population) can be obtained by shifting either the demand function (i.e., by strengthening student aid) or the supply function (i.e., by directly supporting higher education institutions). Under this theoretical perspective, the level of tuition fees is irrelevant, and empirically we can observe high levels of graduate population in countries with both high (i.e., the UK) and low (i.e., Germany) levels of tuition fees. In practice, this model provides a conceptual framework to help better understand which external conditions need to be considered when planning a policy for increasing graduate population. Extrapolating a policy from results in different countries, under this perspective, is a poor solution when contingent factors are not addressed. The second implication of this conceptual framework is that policies addressing the supply or the demand function need to address different contingencies. In other words, a government aiming at increasing graduate population needs to implement complementary policies, designing them according to the side of the market that is targeted. For example, a ‘supply-driven’ intervention, through the direct financial support of higher education institutions, needs to address the issue of institutions’ moral hazard, by creating incentives to supply higher education services in efficient conditions. By contrast, a ‘demand-driven’ policy, providing student aid, needs to tackle the students’ moral hazard, by creating incentives for responsible behavior.Keywords: graduates, higher education, higher education policies, tuition fees
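A minimal sketch, on synthetic data, of the two-stage least squares idea used in the paper: instrument an endogenous regressor (here, the price of higher education) and then estimate the demand relation. Variable names, the instrument, and all numbers are illustrative assumptions, not elements of the authors' model.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    instrument = rng.normal(size=n)            # e.g., a policy shifter of supply
    shock = rng.normal(size=n)                 # unobserved demand shock
    price = 1.0 * instrument + 0.8 * shock + rng.normal(size=n)   # endogenous fee level
    enrolment = -0.5 * price + 1.0 * shock + rng.normal(size=n)   # demand for higher education

    def ols(y, X):
        X = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Stage 1: project the endogenous price on the instrument
    price_hat = np.column_stack([np.ones(n), instrument]) @ ols(price, instrument)
    # Stage 2: regress enrolment on the fitted price
    beta_2sls = ols(enrolment, price_hat)
    beta_ols = ols(enrolment, price)

    print("OLS price coefficient:  ", round(beta_ols[1], 2))   # biased by the demand shock
    print("2SLS price coefficient: ", round(beta_2sls[1], 2))  # close to the true -0.5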
Procedia PDF Downloads 166633 Evaluating the Effect of 'Terroir' on Volatile Composition of Red Wines
Authors: María Luisa Gonzalez-SanJose, Mihaela Mihnea, Vicente Gomez-Miguel
Abstract:
The zoning methodology currently recommended by the OIVV as the official methodology to carry out viticulture zoning studies and to define and delimit the ‘terroirs’ has been applied in this study. This methodology has been successfully applied to the most significant and important Spanish oenological D.O. regions, such as Ribera de Duero, Rioja, Rueda and Toro, and it has also been applied around the world, in Portugal, in different countries of South America, and so on. This is a complex methodology that uses edaphoclimatic data as well as data corresponding to vineyards and other soil uses. The methodology is useful to determine Homogeneous Soil Units (HSU) at different scales depending on the interest of each study, and has been applied from viticulture regions down to particular vineyards. It seems that this methodology is an appropriate method to delimit the medium correctly in order to enhance its uses and to obtain the best viticulture and oenological products. The present work is focused on the comparison of the volatile composition of wines made from grapes grown in different HSU that coexist in a particular viticulture region of Castile and Leon, located near Burgos. Three different HSU were selected for this study. They represented around 50% of the global area of vineyards of the studied region. Five different vineyards on each HSU under study were chosen. To reduce variability factors, other criteria were also considered, such as grape variety, clone, rootstocks, vineyard’s age, training systems and cultural practices. This study was carried out during three consecutive years, and wines from three different vintages were made and analysed. Different red wines were made from grapes harvested in the different vineyards under study. Grapes were harvested at ‘technological maturity’, which is correlated with adequate levels of sugar, acidity, phenolic content (nowadays named phenolic maturity), a good sanitary state and adequate levels of aroma precursors. Results of the volatile profile of the wines produced from grapes of each HSU showed significant differences among them, pointing out a direct effect of the edaphoclimatic characteristics of each HSU on the composition of the grapes and then on the volatile composition of the wines. Variability induced by the HSU co-existed with the well-known inter-annual variability correlated mainly with the specific climatic conditions of each vintage; however, the former was more intense, so the wines of each HSU were clearly differentiated. A discriminant analysis identified the volatiles with discriminant capacity: 21 of the 74 volatiles analysed. The detected discriminant volatiles were chemically diverse, although most of them were esters, followed by higher alcohols and short-chain fatty acids. Only one lactone and two aldehydes were selected as discriminant variables, and no varietal aroma compounds were selected, which agrees with the fact that all the wines were made from the same grape variety.Keywords: viticulture zoning, terroir, wine, volatile profile
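A minimal sketch of a discriminant analysis like the one described above: classifying wines by HSU from their volatile profiles and ranking candidate discriminant volatiles. The data file and column names are hypothetical, and ranking by coefficient magnitude is a simplification of the variable-selection procedure a full study would use.

    import pandas as pd
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("wine_volatiles.csv")      # assumed: 74 volatile columns plus an "HSU" label
    X = df.drop(columns=["HSU"])
    y = df["HSU"]

    lda = LinearDiscriminantAnalysis()
    print("cross-validated classification rate:",
          cross_val_score(lda, X, y, cv=5).mean())

    # Rank volatiles by the magnitude of their discriminant coefficients
    lda.fit(X, y)
    weights = pd.DataFrame(abs(lda.coef_).sum(axis=0), index=X.columns,
                           columns=["weight"]).sort_values("weight", ascending=False)
    print(weights.head(21))                      # candidate discriminant volatiles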
Procedia PDF Downloads 221632 Poly(Acrylamide-Co-Itaconic Acid) Nanocomposite Hydrogels and Its Use in the Removal of Lead in Aqueous Solution
Authors: Majid Farsadrouh Rashti, Alireza Mohammadinejad, Amir Shafiee Kisomi
Abstract:
Lead (Pb²⁺), a cation, is a prime constituent of the majority of industrial effluents, such as those from mining, smelting and coal combustion, Pb-based paints and Pb-containing pipes in water supply systems, paper and pulp refineries, printing, paints and pigments, explosive manufacturing, storage batteries, and the alloy and steel industries. The maximum permissible limit of lead in water used for drinking and domestic purposes is 0.01 mg/L, as advised by the Bureau of Indian Standards (BIS). This is the acceptable ‘safe’ level of lead(II) ions in water, beyond which the water becomes unfit for human use and consumption and can lead to health problems and epidemics, including kidney failure, neuronal disorders, and reproductive infertility. Superabsorbent hydrogels are loosely crosslinked hydrophilic polymers that, in contact with aqueous solution, can easily absorb water and swell to several times their initial volume without dissolving in the aqueous medium. Superabsorbents are a kind of hydrogel capable of swelling and absorbing a large amount of water in their three-dimensional networks. While the shapes of ordinary hydrogels do not change extensively during swelling, the shape of superabsorbents changes broadly because of their tremendous swelling capacity. Because of their superb response to changing environmental conditions, including temperature, pH, and solvent composition, superabsorbents have attracted interest in numerous industrial applications, for instance for their water retention properties. Natural-based superabsorbent hydrogels have attracted much attention in medical and pharmaceutical products, baby diapers, agriculture, and horticulture because of their non-toxicity, biocompatibility, and biodegradability. Novel superabsorbent hydrogel nanocomposites were prepared by graft copolymerization of acrylamide and itaconic acid in the presence of nanoclay (laponite), using methylene bisacrylamide (MBA) and potassium persulfate, the former as a crosslinking agent and the latter as an initiator. The structure of the superabsorbent hydrogel nanocomposites was characterized by FTIR spectroscopy, SEM, and TGA, and the adsorption of metal ions on poly(AAm-co-IA) was studied. The equilibrium swelling values of the copolymer were determined by the gravimetric method. During the adsorption of metal ions on the polymer, the residual metal ion concentration in the solution and the solution pH were measured. The effects of the clay content of the hydrogel on its metal ion uptake behavior were studied. The NC hydrogels may be considered good candidates for environmental applications to retain more water and to remove heavy metals.Keywords: adsorption, hydrogel, nanocomposite, super adsorbent
Procedia PDF Downloads 187631 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties
Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier
Abstract:
The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA studies that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied. For low strain rate regions, where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only at the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault or several faults could rupture during a single fault-to-fault rupture. It is then essential to apply a consistent modelling procedure to allow for a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system where the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected, which is in an area of moderate to high seismicity (southeast of France) and where the fault is assumed to have a low strain rate.Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA
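A minimal sketch of one common way to convert a fault slip rate into earthquake activity rates, as discussed above: balance the fault's seismic moment rate against a truncated Gutenberg-Richter distribution using the Hanks-Kanamori moment scale. All fault parameters are assumed for illustration; this is not the SHERIFS algorithm itself, which distributes the slip-rate budget over single-fault and fault-to-fault ruptures.

    import numpy as np

    mu = 3.0e10            # shear modulus, Pa (assumed)
    area = 30e3 * 12e3     # fault area: 30 km long x 12 km deep, in m^2 (assumed)
    slip_rate = 0.1e-3     # 0.1 mm/yr, a slow fault (assumed)
    moment_rate = mu * area * slip_rate            # N.m per year available to earthquakes

    b = 1.0                                         # Gutenberg-Richter b-value (assumed)
    mags = np.arange(5.0, 7.05, 0.1)                # magnitude bins between Mmin and Mmax
    rel_rate = 10.0 ** (-b * mags)                  # relative (unscaled) rate per magnitude bin
    m0 = 10.0 ** (1.5 * mags + 9.05)                # seismic moment of each magnitude, N.m

    # Scale the relative rates so the released moment matches the fault's moment budget
    scale = moment_rate / np.sum(rel_rate * m0)
    annual_rate = scale * rel_rate

    for m, r in zip(mags[::5], annual_rate[::5]):
        print(f"Mw {m:.1f}: {r:.2e} events/yr (return period ~{1/r:,.0f} yr)")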
Procedia PDF Downloads 66630 Sustainable Development Goal (SDG)-Driven Intercultural Citizenship Education through Dance-Fitness Development: A Classroom Research Project Based on History Research into Japanese Traditional Performing Art (Menburyu)
Authors: Stephanie Ann Houghton
Abstract:
SDG-driven intercultural citizenship education through performing arts and history research, combined with dance-fitness development inspired by performing arts, can provide a third space in which performing arts, local history, and contemporary society drive educational and social development, supporting the performing arts in student-generated ways, reflecting their sense, priorities, and goals. Within a string of rugged volcanic peninsulas along the north-western coastline of the Ariake Sea, Kyushu, southern Japan, are found a range of traditional performing arts endangered in Japan’s ageing society, including Menburyu mask dance. From 2017, Menburyu culture and history were explored with Menburyu veterans and students within Houghton’s FURYU Educational Program (FEP) at Saga University. Through collaboration with professional fitness instructor Kazuki Miyata, basic Menburyu movements and concepts were blended into aerobics routines to generate Menburyu-Inspired Dance-Fitness (MIDF). Drawing on history, legends, and myths, three important storylines for understanding Menburyu, captured in students’ bilingual (English/Japanese) exhibition panels, emerged: harvest, demons and gods, and the Battle of Tadenawate 1530. Houghton and Miyata performed the first MIDF routine at the 22nd Traditional Performing Arts Festival at Yutoku Inari Shrine, Kashima, in September 2019. FEP exhibitions, dance-fitness events, and MIDF performance have been reported in the media locally and nationally. In an action research case study, a classroom research project was conducted with four female Japanese students over fifteen three-hour online lessons (April-July 2020). Part 1 of each lesson focused on Menburyu history. This included a guest lecture by Kensuke Ryuzoji. The three Menburyu storylines served as keys for exploring Menburyu history from international standpoints. Part 2 focused on the development of MIDF basic steps and an online MIDF event with outside guests. Through post-lesson reflective diaries and reports/videos documenting their experience, students engaged in heritage management, intercultural dialogue, health/fitness, technology and art generation activities within the FEP, centring on UN Sustainable Development Goals (SDGs) including health and wellness (SDG3) and quality education (SDG4), taking a glocal approach. In this presentation, qualitative analysis of student-generated reflective diaries and reports will be presented to reveal educational processes, learning outcomes, and apparent areas of (potential) social impact of this classroom research project. Data will be presented in two main parts: (1) The mutually beneficial relationship between local traditional performing arts research and local history research will be addressed. One has the power both to inform and to illuminate the other given their deep connections. This can drive the development of students’ intercultural history competence related to and through the performing arts. (2) The development of dance-fitness inspired by traditional performing arts provides a third space in which performing arts, local history and contemporary society can be connected through SDG-driven education inside the classroom in ways that can also drive social innovation outside the classroom, potentially supporting the performing arts itself in student-generated ways, reflecting their own sense, priorities and social goals.
Links will be drawn with intercultural citizenship, strengths and weaknesses of this teaching approach will be highlighted, and avenues for future research in this exciting new area will be suggested.Keywords: cultural traditions, dance-fitness performance and participation, intercultural communication approach, mask dance origins
Procedia PDF Downloads 140629 Single-parent Families and the Criminal Ramifications on Children in the United Kingdom; A Systematic Review
Authors: Naveed Ali
Abstract:
Under the construct of the ‘traditional family’ set-up (male and female parent) in the United Kingdom, the absence of a male parental figure remains a critical factor associated with an elevated risk of criminal behavior among youths. Empirical evidence suggests that father absence significantly correlates with increased rates of juvenile delinquency and criminality. For instance, data reveal that approximately 63% of young offenders in the United Kingdom originate from single-parent households, predominantly those without a father. Moreover, research shows that boys from father-absent homes are three times more likely to exhibit antisocial behavior compared to their peers from two-parent families. This absence can negatively impact educational attainment, with children from fatherless homes being twice as likely to leave school prematurely, thereby increasing their vulnerability to peer influence and gang affiliation, key pathways into criminal activities. Both legal frameworks and social policies in the United Kingdom acknowledge the pivotal role of family stability in crime prevention. Initiatives including parenting support programs, community-based interventions, and targeted youth services seek to address the challenges faced by single-parent families and mitigate the criminogenic effects of father absence. Despite these efforts, persistent challenges remain, including the need to address the broader socioeconomic determinants of family instability and to refine legal strategies that effectively address the root causes of youth offending linked to the absence of a male parental figure. A nuanced understanding of these dynamics is essential for developing more effective legal and social interventions aimed at reducing juvenile delinquency and supporting at-risk populations within the United Kingdom. This paper will highlight the significant impact of the absence of a male parental figure on youth crime rates in the United Kingdom, underlining the need for enhanced legal and social responses. By examining the interplay between family structure and juvenile offending, the paper will underline the importance of developing more comprehensive interventions that address both familial factors and the wider socioeconomic context. The findings aim to guide policymakers and practitioners in creating more effective strategies to reduce youth crime, ultimately strengthening support systems for vulnerable families and mitigating the adverse effects of father absence on young individuals.Keywords: criminality, family law, legal framework, the United Kingdom perspective
Procedia PDF Downloads 28