Search results for: automatic weather station
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2352

1752 Induced Emotional Empathy and Contextual Factors like Presence of Others Reduce the Negative Stereotypes Towards Persons with Disabilities through Stronger Prosociality

Authors: Shailendra Kumar Mishra

Abstract:

In this paper, we focus on how contextual factors, such as the physical presence of other perceivers, and induced emotional empathy towards a person with disabilities may reduce automatic negative stereotypes and the responses towards that person. In Study 1, we demonstrated that negative attitudes based on negative stereotypes, assessed with the ATDP questionnaire on a five-point Likert scale, are significantly less negative when participants were tested within a group of perceivers than when tested alone, using a 3 (positive, indifferent, and negative attitude levels) X 2 (presence of others vs. alone) factorial ANOVA design. In Study 2, using regression analysis, we demonstrate that in the presence of other perceivers, even in a small group, participants showed more induced emotional empathy through stronger prosociality towards a high-distress target, such as a person with disabilities, than towards other stigmatized persons, such as targets of racial or gender bias. The results thus show that an automatic affective response, in the form of induced emotional empathy in the perceiver, together with contextual factors such as the presence of other perceivers, automatically activates stronger prosocial norms and egalitarian goals towards physically challenged persons in comparison to other stigmatized persons. This leads to less negative attitudes and behaviour towards persons with disabilities.

Keywords: contextual factors, high distress target, induced emotional empathy, stronger prosociality

Procedia PDF Downloads 138
1751 Empowering Transformers for Evidence-Based Medicine

Authors: Jinan Fiaidhi, Hashmath Shaik

Abstract:

Breaking the barrier to practicing evidence-based medicine relies on effective methods for rapidly identifying relevant evidence in the body of biomedical literature. An important challenge confronting medical practitioners is the long time needed to browse, filter, summarize and compile information from different medical resources. Deep learning can help solve this through automatic question answering (Q&A) and transformers. However, Q&A and transformer technologies are not trained to answer clinical queries that can be used for evidence-based practice, nor can they respond to structured clinical questioning protocols like PICO (Patient/Problem, Intervention, Comparison and Outcome). This article describes the use of deep learning techniques for Q&A, based on transformer models like BERT and GPT, to answer PICO clinical questions for evidence-based practice, extracted from sound medical research resources like PubMed. We report acceptable clinical answers that are supported by findings from PubMed. Our transformer methods reach acceptable state-of-the-art performance based on a two-stage bootstrapping process: filtering relevant articles, followed by identifying articles that support the requested outcome expressed by the PICO question. Moreover, we also report experiments that empower our bootstrapping techniques with patched attention to the most important keywords in the clinical case and the PICO question. Our bootstrapping technique patched with attention demonstrates the relevance of the collected evidence based on entropy metrics.
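The entropy metric mentioned in the abstract is not specified; as a purely illustrative sketch, one way to score the term-level focus of a retrieved abstract is Shannon entropy over its token distribution (all tokens here are hypothetical and not from the paper's pipeline):

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits) of the token distribution in a retrieved text."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A focused abstract that repeats the PICO terms has lower entropy
# than one whose mass is spread over many unrelated terms.
focused = "aspirin stroke aspirin outcome aspirin stroke".split()
diffuse = "aspirin stroke cohort weather station protocol".split()
print(shannon_entropy(focused) < shannon_entropy(diffuse))  # True
```

Lower entropy would then indicate evidence that stays on the question's vocabulary; how the authors actually aggregate the metric is not stated in the abstract.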

Keywords: automatic question answering, PICO questions, evidence-based medicine, generative models, LLM transformers

Procedia PDF Downloads 43
1750 Effects of a Simulated Power Cut in Automatic Milking Systems on Dairy Cows Heart Activity

Authors: Anja Gräff, Stefan Holzer, Manfred Höld, Jörn Stumpenhausen, Heinz Bernhardt

Abstract:

In view of the increasing quantity of 'green energy' from renewable raw materials and photovoltaic facilities, it is quite conceivable that power supply variations may occur, so that constantly working machines like automatic milking systems (AMS) may break down temporarily. The usage of farm-made energy is steadily increasing in order to keep energy costs as low as possible. As a result, power cuts are likely to happen more frequently. Current work in the framework of the project 'stable 4.0' focuses on possible stress reactions by simulating power cuts of up to four hours on dairy farms. Based on heart activity, it should be determined whether stress on dairy cows increases under these circumstances. In order to simulate a power cut, 12 randomly selected cows from 2 herds were not admitted to the AMS for at least two hours on three consecutive days. The heart rates of the cows were measured and the collected data evaluated with the HRV program Kubios Version 2.1 on the basis of eight parameters (HR, RMSSD, pNN50, SD1, SD2, LF, HF and LF/HF). Furthermore, stress reactions were examined closely via video analysis, milk yield, ruminant activity, pedometers and measurements of cortisol metabolites. In conclusion, it turned out that during the test only some animals showed minor stress symptoms when they tried to enter the AMS at their regular milking time but could not be milked because the system was manipulated. However, the stress level during a regular 'time-dependent milking rejection' was just as high. The study therefore concludes that the low psychological stress in the case of a 2-4 hour failure of an AMS does not have any impact on animal welfare and health.

Keywords: dairy cow, heart activity, power cut, stable 4.0

Procedia PDF Downloads 311
1749 Media Coverage of Cervical Cancer in Malawi: A National Sample of Newspapers and a Radio Station

Authors: Elida Tafupenji Kamanga

Abstract:

Cancer of the cervix remains one of the leading causes of death among Malawian women. Despite the government's introduction of free screening services throughout the country, patronage remains low and lack of knowledge high. Given the critical role mass media plays in relaying information, including health information, to the public, and its influence on health behaviours, the study sought to analyse Malawian media coverage of the disease and its effectiveness. The findings of the study will help inform media advocacy directed at changing any coverage impeding the effective dissemination of the cervical cancer message, which consequently will help increase awareness and uptake of screening among women. A content analysis of 29 newspapers and of promotional messages on cervical cancer from a local radio station was conducted for the period from 2012 to 2015. Overall, the results showed that media coverage, in terms of content and frequency, increased over the four-year period. Of concern, however, was the quality of the information both media presented to the public. The lapses in the information provided mean that little education is taking place through the media, which could be contributing to the knowledge gap among women and thereby affecting their decision to screen. A lack of adequate funding for media institutions and a lack of collaboration between media institutions and the stakeholders involved in the fight against the disease were also noted as contributing factors to the low coverage of the disease. Designing messages that are not only informative and educative but also innovative may help increase awareness, close the knowledge gap and promote the adoption of preventive screening behaviour by Malawian women. In addition, good communication between media institutions and the researchers involved in the fight against the disease, through channelling new findings back to the public, as well as increased funding towards this cause, should be considered.

Keywords: cervical cancer, effectiveness, media coverage, screening

Procedia PDF Downloads 197
1748 Cost Overrun in Construction Projects

Authors: Hailu Kebede Bekele

Abstract:

Construction delays occur when project events do not take place at the expected time, due to causes related to the client, the consultant, and the contractor. Delay is the major cause of cost overrun and leads to poor project efficiency. The difference between the cost at completion and the originally estimated cost is known as cost overrun. Cost overruns are not simple issues that can be neglected; more attention should be given to preventing organizations from being devastated by failure and extended financial expenses. The reasons raised in different studies show that the problem may arise in construction projects due to errors in budgeting, unfavorable weather conditions, inefficient machinery, and extravagance. The study focuses on mega projects, whose pace can significantly change the cost overrun calculation. 15 mega projects were identified to study the problem of cost overrun on site. The contractor, consultant, and client are the principal stakeholders in the mega projects. 20 people from each sector were selected to participate in the investigation of the current mega construction projects. The main objective of the study on construction cost overrun is to prioritize the major causes of the cost overrun problem. The methodology employed was qualitative, mostly rating the causes of construction project cost overrun. Interviews, open-ended and closed-ended questions, group discussions, and qualitative rating methods are suitable methodologies for studying construction project overruns.
The results show that design mistakes, labor shortages, payment delays, old equipment and poor scheduling, weather conditions, lack of skilled labor, transportation, inflation, order variations, market price fluctuations, and people's attitudes and philosophies are the principal causes of cost overrun that degrade project performance. Organizations should follow the scheduled activities to move the project forward.

Keywords: cost overrun, delay, mega projects, design

Procedia PDF Downloads 62
1747 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which mediate between the person and the environment, and must provide improved thermal comfort conditions and low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for projects, and these should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. On this basis, the study of thermal comfort in educational buildings is particularly relevant, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. Along these lines, to evaluate university classroom conditions, a detailed case study of the thermal comfort situation at the Federal University of Parana (UFPR) was carried out. The main goal of the study is to perform a thermal analysis in three classrooms at UFPR, in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate the users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as of meteorological variables, such as wind speed and direction, solar radiation and rainfall, collected from a weather station.
Then, a computer simulation was conducted using the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach of the study allowed a distinct perspective on an educational building, providing a better understanding of the classroom thermal performance, as well as of the reasons for such behavior. Finally, the study prompts a reflection on the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion about the significant impact of using computer simulation in engineering solutions, in order to improve the thermal performance of UFPR's buildings.
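The comparison of EnergyPlus outputs against on-site measurements described above is typically summarized with error statistics; a minimal sketch using hypothetical hourly temperatures (the metrics are standard, but the values below are not the study's data):

```python
import math

def rmse(simulated, measured):
    """Root-mean-square error between simulated and measured series."""
    assert len(simulated) == len(measured)
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured)) / len(simulated))

def mean_bias(simulated, measured):
    """Mean bias error: positive means the simulation runs warm."""
    return sum(s - m for s, m in zip(simulated, measured)) / len(simulated)

# Hypothetical hourly air temperatures (deg C) for one classroom.
sim = [22.1, 23.0, 24.2, 25.1]
obs = [21.8, 22.9, 24.6, 24.9]
print(round(rmse(sim, obs), 2), round(mean_bias(sim, obs), 2))
```

A low RMSE with near-zero bias would support using the calibrated model to test the thermal alternatives proposed for future projects.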

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 386
1746 Municipal Solid Waste Management in an Unplanned Hill Station in India

Authors: Moanaro Ao, Nzanthung Ngullie

Abstract:

Municipal solid waste management (MSWM) has unique challenges in hilly urban settlements. Efforts have been taken by municipalities, private players, non-governmental organizations, etc. for managing solid waste by preventing its generation, reusing, and recovering them into useful products to the extent possible, thereby minimizing its impact on the environment and human health. However, there are many constraints that lead to inadequate management of solid waste. Kohima is an unplanned hill station city in the North Eastern Region of India. The city is facing numerous issues due to the mismanagement of the MSW generated. Kohima Municipal Council (KMC) is the Urban Local Body (ULB) responsible for providing municipal services. The present MSWM system in Kohima comprises of collection, transportation, and disposal of waste without any treatment. Several efforts and experimental projects on waste management have been implemented without any success. Waste management in Kohima city is challenging due to its remote location, difficult topography, dispersed settlements within the city, sensitive ecosystem, etc. Furthermore, the narrow road network in Kohima with limited scope for expansion, inadequate infrastructure facilities, and financial constraints of the ULB add up to the problems faced in managing solid waste. This hill station also has a unique system of traditional local self-governance. Thus, shifting from a traditional system to a modern system in implementing systematic and scientific waste management is also a challenge in itself. This study aims to analyse the existing situation of waste generation, evaluate the effectiveness of the existing management system of MSW, and evolve a strategic approach to achieve a sustainable and resilient MSWM system. The results from the study show that a holistic approach, including social aspects, technical aspects, environmental aspects, and financial aspects, is needed to reform the MSWM system. 
Stringent adherence to source segregation is required by encouraging public participation through awareness programs. Active involvement of community-based organizations (CBOs) has brought a positive change in sensitizing the public. A waste management model was designed to be adopted at a micro-level such as composting household biodegradable waste and incinerator plants at the community level for non-biodegradable waste. Suitable locations for small waste stations were identified using geographical information system (GIS) tools for waste recovery and recycling. Inculcating the sense of responsibility in every waste generator towards waste management by implementing incentive-based strategies at the Ward level was explored. Initiatives based on the ‘polluters pay principle’ were also explored to make the solid waste management model “self-sustaining”.

Keywords: municipal solid waste management, public participation, source segregation, sustainable

Procedia PDF Downloads 68
1745 Glacier Dynamics and Mass Fluctuations in Western Himalayas: A Comparative Analysis of Pir-Panjal and Greater Himalayan Ranges in Jhelum Basin, India

Authors: Syed Towseef Ahmad, Fatima Amin, Pritha Acharya, Anil K. Gupta, Pervez Ahmad

Abstract:

Glaciers, being sentinels of climate change, are the most visible evidence of global warming. Given the unavailability of observed field-based data, this study focuses on the use of geospatial techniques to obtain information about the glaciers of the Pir-Panjal (PPJ) and Great Himalayan (GHR) regions of the Jhelum Basin. These glaciers need to be monitored in line with variations in climatic conditions because they contribute significantly to various sectors in the region. The main aim of this study is to map the glaciers in the two adjacent regions (PPJ and GHR) in the north-western Himalayas, which have different topographies, and to compare the changes in various glacial attributes over the period 1990-2020. During the last three decades, the PPJ and GHR regions have experienced deglaciation of around 36 and 26 percent, respectively. The mean elevation of GHR glaciers has increased from 4312 to 4390 masl, while that of PPJ glaciers has increased from 4085 to 4124 masl during the observation period. Using the accumulation area ratio (AAR) method, mean mass balances of -34.52 and -37.6 cm w.e. were recorded for the glaciers of GHR and PPJ, respectively. The difference in areal and mass loss of glaciers in these regions may be due to (i) the smaller size of PPJ glaciers, which are all smaller than 1 km² and thus more responsive to climate change, (ii) the higher mean elevation of GHR glaciers, and (iii) local variations in climatic variables in these glaciated regions. Time series analysis of climate variables indicates that both the mean maximum and minimum temperatures of the Qazigund station (Tmax = 19.2, Tmin = 6.4) are comparatively higher than those of the Pahalgam station (Tmax = 18.8, Tmin = 3.2).
Except for precipitation in Qazigund (slope = -0.3 mm a⁻¹), each climatic parameter has shown an increasing trend over these three decades, and with slopes of 0.04 and 0.03 °C a⁻¹, the positive trends in Tmin (Pahalgam) and Tmax (Qazigund) are statistically significant (p ≤ 0.05).
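Trend slopes like the reported 0.03 °C a⁻¹ for Tmax are typically obtained with an ordinary least-squares fit against the year; a minimal sketch on a synthetic series (the data below are illustrative, not the station records):

```python
def trend_slope(years, values):
    """Ordinary least-squares slope of values regressed against years."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical annual mean Tmax series rising 0.03 deg C per year.
years = list(range(1990, 2021))
tmax = [18.0 + 0.03 * (y - 1990) for y in years]
print(round(trend_slope(years, tmax), 3))  # 0.03
```

In practice the significance of such a slope (the p ≤ 0.05 above) is checked with a t-test on the slope estimate or a nonparametric Mann-Kendall test.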

Keywords: glaciers, climate change, Pir-Panjal, greater Himalayas, mass balance

Procedia PDF Downloads 88
1744 A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs

Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare

Abstract:

The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of CTR. A CTR of 0.55 was used as a cut-off to categorize the condition as cardiomegaly present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR. The model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], a precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of CTR therefore has significant potential for use in clinical workflows.
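The CTR cut-off and the reported precision/sensitivity combine straightforwardly; a small sketch (the heart and thorax diameters below are hypothetical, while the 0.99/0.80 figures are the ones reported in the abstract):

```python
def cardiothoracic_ratio(heart_diameter, thorax_diameter):
    """CTR: maximal horizontal heart diameter over maximal thorax diameter."""
    return heart_diameter / thorax_diameter

def f1_score(precision, recall):
    """F1: harmonic mean of precision and recall (here, sensitivity)."""
    return 2 * precision * recall / (precision + recall)

# A CTR above 0.55 is flagged as cardiomegaly, per the cut-off in the study.
ctr = cardiothoracic_ratio(14.2, 24.0)  # hypothetical diameters in cm
print(ctr > 0.55)  # True
# The reported precision (0.99) and sensitivity (0.80) reproduce the reported F1.
print(round(f1_score(0.99, 0.80), 2))  # 0.88
```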

Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio

Procedia PDF Downloads 98
1743 Geographic Information System and Dynamic Segmentation of Very High Resolution Images for the Semi-Automatic Extraction of Sandy Accumulation

Authors: A. Bensaid, T. Mostephaoui, R. Nedjai

Abstract:

A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasing irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mecheria department has generated a new situation characterized by the reduction of vegetation cover and of land productivity, as well as by sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cords, based on the numerical processing of LANDSAT images (5, 7, and 8) of three scenes, 197/37, 198/36 and 198/37, for the year 2020. As a second step, we explore the use of geospatial techniques to monitor the progression of sand dunes onto developed (urban) lands, as well as the formation of sandy accumulations (dunes, dune fields, nebkhas, barchans, etc.). For this purpose, the study made use of a semi-automatic processing method for the dynamic segmentation of images with very high spatial resolution (SENTINEL-2 and Google Earth). This study was able to demonstrate that, under current conditions, urban lands are located in sand transit zones mobilized by winds from the northwest and southwest directions.

Keywords: land development, GIS, segmentation, remote sensing

Procedia PDF Downloads 155
1742 The Influence of Travel Experience within Perceived Public Transport Quality

Authors: Armando Cartenì, Ilaria Henke

Abstract:

The perceived quality of public transport is an important driver that influences both customer satisfaction and mobility choices. Competition among transport operators makes it necessary to improve service quality and to identify which attributes passengers perceive as relevant. Among the “traditional” public transport quality attributes are, for example: travel and waiting time, regularity of the services, and ticket price. By contrast, there are some “non-conventional” attributes that could significantly influence customer satisfaction jointly with the “traditional” ones. Among these, the beauty/aesthetics of transport terminals (e.g. rail stations and bus terminals) is probably one of the most influential on user perception. Starting from these considerations, the point stressed in this paper is whether (and how much) the experience of the overall journey (e.g. how long the trip is, how many transport modes must be used) influences the perception of public transport quality. The aim of this paper is to investigate the weight of terminal quality (e.g. aesthetics, comfort and services offered) within the overall travel experience. The case study is the Italian extra-urban bus network. Passengers at the major Italian bus terminal were interviewed, and the analysis of the results shows that about 75% of travellers are willing to pay up to 30% more for the ticket in exchange for a high-quality terminal. A travel experience effect was observed: the average perceived transport quality varies with the characteristics of the overall trip. Passengers on a “long trip” (travel time greater than 2 hours) perceived the overall quality of the trip as “low” even if they passed through a high-quality terminal. The opposite occurs for “short trip” passengers. This means that if travellers pass through a high-quality station, their overall perception of that terminal can be significantly reduced if they are tired from a long trip.
This result is important and, if confirmed through other case studies, will allow us to conclude that the “travel experience impact” must be considered as an explicit design variable for public transport services and planning.

Keywords: transportation planning, sustainable mobility, decision support system, discrete choice model, design problem

Procedia PDF Downloads 298
1741 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone and tibial bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of the modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Score Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, the reference being the manual segmentation.
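The volumetric Dice Score Coefficient used as the evaluation metric above can be sketched for binary masks as follows (the masks here are toy flat lists, not actual MRI segmentations):

```python
def dice(mask_a, mask_b):
    """Dice Score Coefficient between two binary masks given as flat 0/1 lists:
    twice the overlap divided by the total foreground in both masks."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0

pred = [0, 1, 1, 1, 0, 1]  # hypothetical predicted cartilage voxels
ref  = [0, 1, 1, 0, 0, 1]  # hypothetical manual reference
print(round(dice(pred, ref), 3))  # 0.857
```

For a multi-label output like the 5-label segmentation above, the DSC is computed per label (femoral cartilage, tibial cartilage, etc.) by binarizing each label in turn.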

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 261
1740 Evaluation of Simulated Noise Levels through the Analysis of Temperature and Rainfall: A Case Study of Nairobi Central Business District

Authors: Emmanuel Yussuf, John Muthama, John Ng'ang'A

Abstract:

Noise levels have been increasing all over the world in the last decade. Many factors contribute to this increase, which is causing health-related effects in humans. Developing countries are not left out of the picture, as they are still growing and advancing their development. Motor vehicles are increasing on urban roads, infrastructure is expanding due to the rising population, the number of industries providing goods is increasing, and so are many other activities. All these activities lead to high noise levels in cities. This study was conducted in Nairobi's Central Business District (CBD) with the main objective of simulating noise levels in order to understand the noise to which people within the urban area are exposed, in relation to the weather parameters of temperature, rainfall and wind field. The study used the Neighbourhood Proximity Model and time series analysis, with data obtained from proxies/remote sensing from satellites, in order to establish the noise levels to which the people of the Nairobi CBD are exposed. The findings showed an increase in temperature (0.1°C per year) and a decrease in precipitation (40 mm per year), while noise levels in the area are increasing. The study also found that the noise levels to which people in the Nairobi CBD are exposed were roughly between 61 and 63 decibels and have been increasing, a level which is high and likely to cause adverse physical and psychological effects on the human body, with air temperature, precipitation and wind contributing greatly to the spread of noise. As noise reduction measures, the use of soundproof materials in buildings close to busy roads, the enforcement of strict laws on the most emitting sources, and further research on the subject were recommended.
The data used for this study ranged from the year 2000 to 2015, with rainfall in millimeters (mm), temperature in degrees Celsius (°C) and urban form characteristics in meters (m).

Keywords: simulation, noise exposure, weather, proxy

Procedia PDF Downloads 379
1739 Analysis and Identification of Trends in Electric Vehicle Crash Data

Authors: Cody Stolle, Mojdeh Asadollahipajouh, Khaleb Pafford, Jada Iwuoha, Samantha White, Becky Mueller

Abstract:

Battery-electric vehicles (BEVs) are growing in sales and popularity in the United States as an alternative to traditional internal combustion engine vehicles (ICEVs). BEVs are generally heavier than corresponding models of ICEVs, with large battery packs located beneath the vehicle floorpan on a “skateboard” chassis, and have front and rear crush space available in the trunk and “frunk”, or front trunk. The geometrical and frame differences between the vehicles may lead to incompatibilities with gasoline vehicles during vehicle-to-vehicle crashes, as well as during run-off-road crashes with roadside barriers, which were designed for lighter ICEVs with higher centers-of-mass and dedicated structural chassis. Crash data were collected from 10 states spanning the five-year period between 2017 and 2021. Vehicle Identification Number (VIN) codes were processed with the National Highway Traffic Safety Administration (NHTSA) VIN decoder to distinguish BEV models from ICEV models. Crashes were filtered to isolate only vehicles produced between 2010 and 2021, and the crash circumstances (weather, time of day, maximum injury) were compared between BEVs and ICEVs. In Washington, 436,613 crashes were identified that satisfied the selection criteria, and 3,371 of these crashes (0.77%) involved a BEV. The numbers of crashes that noted a fire were comparable between BEVs and ICEVs of similar model years (0.3% and 0.33%, respectively), and no differences were discernible for time of day, weather conditions, road geometry, or other prevailing factors (e.g., run-off-road). However, crashes involving BEVs rose rapidly; 31% of all BEV crashes occurred in 2021 alone. Results indicate that BEVs are performing comparably to ICEVs, and events surrounding BEV crashes are statistically indistinguishable from ICEV crashes.
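The selection step described above, VIN-decoded fuel type plus a 2010-2021 model-year filter, can be sketched on toy records (all records and counts below are hypothetical, not the Washington data):

```python
from collections import Counter

# Hypothetical crash records: (fuel_type_from_VIN_decoder, model_year, fire_noted)
crashes = [
    ("BEV", 2019, False), ("ICEV", 2015, False), ("ICEV", 2012, True),
    ("BEV", 2021, False), ("ICEV", 2008, False), ("ICEV", 2018, False),
]

# Keep only model years 2010-2021, mirroring the study's selection criteria.
selected = [c for c in crashes if 2010 <= c[1] <= 2021]
by_fuel = Counter(fuel for fuel, _, _ in selected)
bev_share = by_fuel["BEV"] / len(selected)
print(by_fuel, round(bev_share, 3))
```

Circumstance comparisons (fire rates, time of day, weather) then reduce to computing the same proportions per fuel-type group.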

Keywords: battery-electric vehicles, transportation safety, infrastructure crashworthiness, run-off-road crashes, EV crash data analysis

Procedia PDF Downloads 88
1738 A Study on Net Profit Associated with Queueing System Subject to Catastrophical Events

Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan

Abstract:

In this paper, we study a queueing system in which catastrophic events arrive independently at the service facility according to a Poisson process with rate λ. The nature of a catastrophic event is that, upon its arrival at a service station, it destroys all the customers waiting there and in service. We derive the net profit associated with the queueing system and obtain the probability of its busy period.
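The dynamics described above can be sketched with a small discrete-event simulation: an M/M/1-type queue where, on top of ordinary arrivals and services, catastrophes arrive as a Poisson process and flush the system. This is a generic simulation sketch, not the paper's analytical derivation; all rates below are assumed for illustration.

```python
import random

def simulate_queue(t_end, lam, mu, gamma, seed=0):
    """Simulate an M/M/1 queue with Poisson catastrophes (rate gamma)
    that destroy every waiting/in-service customer on arrival.
    Returns the fraction of time the server is busy."""
    rng = random.Random(seed)
    t, n, busy_time = 0.0, 0, 0.0
    while t < t_end:
        rate = lam + gamma + (mu if n > 0 else 0.0)
        dt = min(rng.expovariate(rate), t_end - t)
        if n > 0:
            busy_time += dt
        t += dt
        if t >= t_end:
            break
        u = rng.random() * rate
        if u < lam:
            n += 1            # ordinary customer arrival
        elif u < lam + gamma:
            n = 0             # catastrophe destroys all customers
        elif n > 0:
            n -= 1            # service completion
    return busy_time / t_end

print(simulate_queue(10_000.0, lam=1.0, mu=2.0, gamma=0.5))
```

As expected, increasing the catastrophe rate shortens busy periods and lowers the busy fraction.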

Keywords: queueing system, net-profit, busy period, catastrophical events

Procedia PDF Downloads 363
1737 Climate Change Results in Increased Accessibility of Offshore Wind Farms for Installation and Maintenance

Authors: Victoria Bessonova, Robert Dorrell, Nina Dethlefs, Evdokia Tapoglou, Katharine York

Abstract:

As the global pursuit of renewable energy intensifies, offshore wind farms have emerged as a promising solution to combat climate change. The global offshore wind installed capacity is projected to increase 56-fold by 2055. However, the impacts of climate change, particularly changes in wave climate, are not widely understood. Offshore wind installation and maintenance activities often require specific weather windows, characterized by calm seas and low wave heights, to ensure safe and efficient operations. Climate change-induced alterations in wave characteristics can reduce the availability of suitable weather windows, leading to delays and disruptions in project timelines. We applied the operational limits of installation and maintenance vessels to past and future wave climate projections. This revealed changes in the annual and monthly accessibility of offshore wind farms at key global development locations. When accessibility is defined by significant wave height alone, spatial patterns in annual accessibility roughly follow changes in significant wave height, with increased availability where significant wave height is decreasing. This amounted to a 1-6% increase in Europe and North America and a similar decrease in South America, Australia, and Asia. Monthly changes suggest unchanged or slightly decreased (1-2%) accessibility in summer months and increased (2-6%) accessibility in winter. Further work assesses the sensitivity of accessibility to operational limits defined by wave height combined with wave period, and by wave height combined with wind speed; results of this assessment will be included in the presentation. These findings will help stakeholders adapt installation and maintenance planning practices to climate change.
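The single-criterion accessibility rule described above (significant wave height below a vessel's operational limit) reduces to counting workable sea states. A minimal sketch, with an assumed 2.0 m limit and made-up wave heights:

```python
def accessibility(hs_series, hs_limit):
    """Fraction of sea states accessible: significant wave height (m)
    at or below the vessel's operational limit. A simplified
    single-criterion rule; real assessments add wave period and wind."""
    ok = sum(1 for hs in hs_series if hs <= hs_limit)
    return ok / len(hs_series)

# Hypothetical time series of significant wave heights (m):
hs = [1.2, 2.8, 1.9, 0.8, 3.4, 1.5, 2.1, 1.0]
print(accessibility(hs, hs_limit=2.0))  # 0.625 -> 5 of 8 sea states workable
```

Computing this fraction per month over past and projected wave climates yields the 1-6% shifts reported in the abstract.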

Keywords: climate change, offshore wind, offshore wind installation, operations and maintenance, wave climate, wind farm accessibility

Procedia PDF Downloads 83
1736 Self-Image of Police Officers

Authors: Leo Carlo B. Rondina

Abstract:

Self-image is an important factor in improving the self-esteem of personnel. The purpose of the study is to determine the self-image of the police. The respondents were 503 policemen assigned to different police stations in Davao City, chosen through random sampling. Using Exploratory Factor Analysis (EFA), latent construct variables of police image were identified as follows: professionalism, obedience, morality, and justice and fairness. Further, ordinal regression indicates a statistically significant effect for ages 21-40, meaning that within this range the age of the respondent statistically improves self-image.

Keywords: police image, exploratory factor analysis, ordinal regression, Galatea effect

Procedia PDF Downloads 287
1735 Thermal Performance of the Extensive Wetland Green Roofs in Winter in Humid Subtropical Climate

Authors: Yi-Yu Huang, Chien-Kuo Wang, Sreerag Chota Veettil, Hang Zhang, Hu Yike

Abstract:

Regarding the pressing issue of reducing the energy consumption and carbon footprint of buildings, past research has focused more on analyzing the thermal performance of extensive terrestrial green roofs with sedum plants in summer. However, the disadvantages of this type of green roof are relatively limited thermal performance, low adaptability to extreme weather, relatively high maintenance demands, and low added value as a healing landscape. In view of this, this research aims to develop extensive wetland green roofs with higher thermal performance, high adaptability to extreme weather, low maintenance demands, and high added value as a healing landscape, and to measure their thermal performance for buildings in winter. The following factors are considered: the type and mixing formula of the growth medium (lightweight soil, akadama, creek gravel, pure water) and the type of aquatic plant. The research adopts a four-stage field experiment conducted on the rooftop of a building in a humid subtropical climate. The results found that emergent (roundleaf rotala), submerged (ribbon weed), and floating-leaved (water lily) wetland green roofs had similar thermal performance, and were superior to the wetland green roof without plants, the traditional terrestrial green roof (without plants), and the pure-water green roof (without plants, nighttime only) in terms of overall passive cooling (8.0°C) and thermal insulation (4.5°C) effects, as well as reduction in heat amplitude (77-85%), in winter in a humid subtropical climate. The thermal performance of the free-floating (water hyacinth) wetland green roof is inferior to that of the other three types of wetland green roofs, whether in daytime or nighttime.

Keywords: thermal performance, extensive wetland green roof, aquatic plant, winter, humid subtropical climate

Procedia PDF Downloads 179
1734 The Community Structure of Fish and its Correlation with Mangrove Forest Litter Production in Panjang Island, Banten Bay, Indonesia

Authors: Meilisha Putri Pertiwi, Mufti Petala Patria

Abstract:

Mangrove forests are often categorized as a productive ecosystem in trophic waters and hold the highest carbon storage among all forest types. Mangrove-derived organic matter shapes the food web of fish and invertebrates. In Indonesian trophic waters, 80% of commercial fish caught in coastal areas are closely tied to the food web of the mangrove forest ecosystem. Based on previous research on Panjang Island, Bojonegara, Banten, Indonesia, mangrove litterfall removed to the sea water amounted to 9.023 g/m³/s across two stations (west station: 5.169 g/m³/s; north station: 3.854 g/m³/s). The vegetation was dominated by Rhizophora apiculata and Rhizophora stylosa. Carbon is the most abundant element (27.303% and 30.373%), exceeding nitrogen (0.427% and 0.35%) and phosphorus (0.19% and 0.143%). The research also aims to determine the diversity of fish inhabiting the mangrove forest. Fish are sampled by push net, collected into plastic bags, measured for total length and weight, and counted individually and in total. Meanwhile, three modified pipes (1 m long, 5 inches in diameter, with one hole closed by nylon cloth facing the river) are set in the water channel connecting the mangrove forest and the sea at each station. They are placed for 1 hour at low tide; the speed of the water flow and the volume of the modified pipes are then calculated. The fish and mangrove litter will be weighed for wet and dry weight, and their C, N, and P contents analyzed. Sampling will be conducted three times per month at full moon. The salinity, temperature, turbidity, pH, DO, and sediment of the mangrove forest will be measured as well. This research will provide information on fish diversity in the mangrove forest, the mangrove litterfall removed to the sea water, the composition of the sediment, the elemental content (C, N, P) of fish and mangrove litter, and the correlation of element uptake between fish and mangrove litter. The data will be used for fish and mangrove ecosystem conservation.

Keywords: fish diversity, mangrove forest, mangrove litter, carbon element, nitrogen element, phosphorus element, conservation

Procedia PDF Downloads 485
1733 CFD Study for Normal and Rifled Tube with a Convergence Check

Authors: Sharfi Dirar, Shihab Elhaj, Ahmed El Fatih

Abstract:

Computational fluid dynamics (CFD) was used to simulate and study a heated water boiler tube, both normal and rifled, with mesh refinement to check convergence. The operating conditions were taken from the GARRI power station and applied as boundary conditions accordingly. The results indicate that the rifled tube has higher heat-transfer efficiency than the normal tube.
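A common way to quantify the mesh-refinement convergence check mentioned above is Roache's Grid Convergence Index (GCI), comparing a quantity of interest on two mesh levels. The sketch below uses hypothetical heat-flux values and an assumed observed order of accuracy; it is not the study's actual data.

```python
def gci(f_fine, f_coarse, r, p=2.0, fs=1.25):
    """Grid Convergence Index for two mesh levels.
    r: refinement ratio, p: assumed observed order of accuracy,
    fs: safety factor (Roache's convention for 3+ grids is 1.25)."""
    eps = abs((f_coarse - f_fine) / f_fine)   # relative difference
    return fs * eps / (r**p - 1.0)

# Hypothetical wall heat-flux results on a coarse and a refined mesh:
print(gci(f_fine=105.2, f_coarse=103.8, r=2.0))  # ~0.0055, i.e. ~0.55%
```

A GCI well below a few percent is typically taken as evidence that the solution is mesh-independent.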

Keywords: boiler tube, convergence check, normal tube, rifled tube

Procedia PDF Downloads 334
1732 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring

Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover

Abstract:

Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate, and identify any confined nuclear test under the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During atmospheric transport, radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical activity concentrations measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70x70x40 mm³ crystals). The expected Minimal Detectable Concentration (MDC) for each radioxenon isotope is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network using the IEEE 1588 Precision Time Protocol. We present this system from its simulation to laboratory tests.
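The beta/gamma coincidence technique ultimately reduces to pairing beta and gamma events whose timestamps fall within a coincidence window. A minimal offline-matching sketch (not the Pixie-NET firmware; timestamps and window are assumed):

```python
def coincidences(beta_ts, gamma_ts, window):
    """Pair sorted beta and gamma event timestamps that fall within a
    coincidence window. A simple two-pointer sweep over sorted lists."""
    pairs, j = [], 0
    for tb in beta_ts:
        # skip gamma events too old to pair with this beta event
        while j < len(gamma_ts) and gamma_ts[j] < tb - window:
            j += 1
        if j < len(gamma_ts) and abs(gamma_ts[j] - tb) <= window:
            pairs.append((tb, gamma_ts[j]))
    return pairs

beta = [1.0, 5.0, 9.0]    # hypothetical beta timestamps (µs)
gamma = [1.2, 4.0, 9.1]   # hypothetical gamma timestamps (µs)
print(coincidences(beta, gamma, window=0.5))  # [(1.0, 1.2), (9.0, 9.1)]
```

Events outside the window (here the 5.0/4.0 pair) are rejected as uncorrelated background, which is what suppresses the environmental background in practice.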

Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels

Procedia PDF Downloads 125
1731 Designing an App to Solve Surveying Challenges

Authors: Ali Mohammadi

Abstract:

Forming and equipping a surveying team for construction projects such as dams, roads, and tunnels is always one of the first challenges: hiring surveyors who are proficient in reading maps and controlling structures, purchasing appropriate surveying equipment that the employer can afford, and adopting methods that save time. The bigger the project, the more these challenges show themselves: finding a surveyor engineer who can lead the teams and train the surveyors, and buying total stations according to the company's budget, the surveyors' ability to use them, and the time available to each team. In the following, we introduce a surveying app and examine how to use it, showing how useful it can be for surveyors on projects.

Keywords: DTM cut/fill, data transfer, section, tunnel, traverse

Procedia PDF Downloads 81
1730 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents

Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty

Abstract:

A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial uses. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are scripted in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have an efficient access to this knowledge via concise and transparent summaries. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to show a remarkable performance when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand without any redundant formatting and difficult jargon.
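The abstract contrasts the proposed abstractive system with extraction-oriented baselines, which score and select existing sentences rather than generating new ones. A naive frequency-based extractive baseline, of the kind being argued against (not the paper's proposed deep-learning system), can be sketched as:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Naive frequency-based extractive baseline: score each sentence by
    the summed corpus frequency of its words and keep the top-scoring
    sentences in original order. On long, repetitive patent sentences
    this tends to copy unwieldy claims verbatim, motivating abstraction."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    scored = sorted(range(len(sentences)),
                    key=lambda i: -sum(freq[w] for w in
                                       re.findall(r'[a-z]+', sentences[i].lower())))
    keep = sorted(scored[:n_sentences])
    return ' '.join(sentences[i] for i in keep)

doc = ("Patents protect inventions. Patent claims use long sentences. "
       "Claims define the scope of patent protection.")
print(extractive_summary(doc, 1))  # -> "Claims define the scope of patent protection."
```

Abstractive systems instead generate new wording, typically with encoder-decoder neural architectures, which is why they handle the idiosyncratic linguistic structures of patents better.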

Keywords: abstractive summarization, deep learning, natural language processing, patent document

Procedia PDF Downloads 123
1729 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer

Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek

Abstract:

Crop drying, which aims to reduce the moisture content to a certain level, is a method used to extend shelf life and prevent spoilage. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, it has drawbacks such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. Solar dryers that provide a hygienic and controllable environment to preserve food and extend its shelf life have therefore been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to investigating the modelling of the drying kinetics of cantaloupe in a forced-convection solar dryer. Mathematical models of the drying process must be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Drying experiments were conducted and replicated five times, and data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were continuously monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into moisture ratio and fitted against drying time to construct drying curves. Then, 10 quasi-theoretical and empirical drying models were applied to find the best drying curve equation using the Levenberg-Marquardt nonlinear optimization method. The best-fitted mathematical drying model was selected according to the highest coefficient of determination (R²) and the lowest mean square of deviations (χ²) and root mean square error (RMSE) criteria. The best-fitted model was then used to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
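The fitting procedure described above can be sketched with one of the commonly used empirical thin-layer models, the Page model MR = exp(-k·tⁿ), fitted by Levenberg-Marquardt (SciPy's default solver for unbounded `curve_fit`). The moisture-ratio observations below are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Hypothetical moisture-ratio observations over drying time (hours):
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
mr = np.array([0.92, 0.80, 0.58, 0.40, 0.27, 0.18, 0.12])

# curve_fit defaults to Levenberg-Marquardt for unbounded problems
(k, n), _ = curve_fit(page_model, t, mr, p0=(0.1, 1.0))

residuals = mr - page_model(t, k, n)
rmse = float(np.sqrt(np.mean(residuals**2)))
r2 = 1.0 - float(np.sum(residuals**2) / np.sum((mr - mr.mean())**2))
print(k, n, rmse, r2)
```

Repeating this fit for each of the 10 candidate models and ranking by R², χ², and RMSE selects the best drying-curve equation.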

Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying

Procedia PDF Downloads 126
1728 A Proposed Algorithm for Obtaining the Map of Subscribers’ Density Distribution for a Mobile Wireless Communication Network

Authors: C. Temaneh-Nyah, F. A. Phiri, D. Karegeya

Abstract:

This paper presents an algorithm for obtaining the map of subscriber density distribution for a mobile wireless communication network, based on actual subscriber traffic data obtained from the base stations. This is useful for the statistical characterization of the mobile wireless network.
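At its core, a density map is a 2-D histogram of subscriber positions over the service area. A minimal sketch, assuming plain coordinates rather than the base-station traffic records the paper derives them from:

```python
def density_map(points, x_bins, y_bins, x_range, y_range):
    """Bin subscriber positions onto a 2-D grid of counts.
    points: iterable of (x, y); ranges are (min, max) per axis."""
    (x0, x1), (y0, y1) = x_range, y_range
    grid = [[0] * x_bins for _ in range(y_bins)]
    for x, y in points:
        i = min(int((y - y0) / (y1 - y0) * y_bins), y_bins - 1)
        j = min(int((x - x0) / (x1 - x0) * x_bins), x_bins - 1)
        grid[i][j] += 1
    return grid

pts = [(0.1, 0.2), (0.9, 0.8), (0.15, 0.25), (0.5, 0.5)]
print(density_map(pts, 2, 2, (0.0, 1.0), (0.0, 1.0)))  # [[2, 0], [0, 2]]
```

Normalizing each cell by its area yields subscribers per unit area, the quantity needed for statistical characterization of the network.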

Keywords: electromagnetic compatibility, statistical analysis, simulation of communication network, subscriber density

Procedia PDF Downloads 309
1727 Predictability of Kiremt Rainfall Variability over the Northern Highlands of Ethiopia on Dekadal and Monthly Time Scales Using Global Sea Surface Temperature

Authors: Kibrom Hadush

Abstract:

Countries like Ethiopia, whose economy depends mainly on rain-fed agriculture, are highly vulnerable to climate variability and weather extremes. Sub-seasonal (monthly) and dekadal forecasts are hence critical for crop production and water resource management. This study therefore examines the predictability and variability of Kiremt rainfall over the northern half of Ethiopia on monthly and dekadal time scales in association with global Sea Surface Temperature (SST) at different lag times. Trends in rainfall were analyzed on annual, seasonal (Kiremt), monthly, and dekadal (June-September) time scales based on rainfall records of 36 meteorological stations distributed across four homogeneous zones of the northern half of Ethiopia for the period 1992-2017. The results from the progressive Mann-Kendall trend test and Sen's slope method show no significant trend in annual, Kiremt, monthly, or dekadal rainfall totals at most of the stations studied. Moreover, rainfall in the study area varies spatially and temporally, and rainfall totals increase from the northeast rift valley to the northwest highlands. Graphical correlation and a multiple linear regression model are employed to investigate the association between global SSTs and Kiremt rainfall over the homogeneous rainfall zones and to predict monthly and dekadal (June-September) rainfall using SST predictors. The results show that, in general, SST in the equatorial Pacific Ocean is the main source of predictive skill for Kiremt rainfall variability over the northern half of Ethiopia; regional SSTs in the Atlantic and Indian Oceans contribute as well.
Moreover, the correlation analysis showed that the decline of monthly and dekadal Kiremt rainfall over most of the homogeneous zones of the study area is associated with the corresponding persistent warming of the SST in the eastern and central equatorial Pacific Ocean during the period 1992-2017. It is also found that monthly and dekadal Kiremt rainfall over the northern and northwestern highlands and northeastern lowlands of Ethiopia is positively correlated with SST in the western equatorial Pacific and the eastern and tropical northern Atlantic Ocean. Furthermore, SSTs in the western equatorial Pacific and Indian Oceans are positively correlated with Kiremt-season rainfall in the northeastern highlands. Overall, prediction models using combined SSTs from various ocean regions (equatorial and tropical) performed reasonably well in predicting monthly and dekadal rainfall (with R² ranging from 30% to 65%), and are recommended for efficient prediction of Kiremt rainfall over the study area to aid systematic and informed decision-making in the agricultural sector.
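The multiple-linear-regression setup described above (rainfall regressed on several SST indices, skill judged by R²) can be sketched with synthetic data; the indices and coefficients below are fabricated for illustration and bear no relation to the study's observations.

```python
import numpy as np

def fit_rainfall(sst_predictors, rainfall):
    """Ordinary least squares fit rainfall = b0 + b . SST indices.
    Returns (coefficients, R^2)."""
    X = np.column_stack([np.ones(len(rainfall)), sst_predictors])
    beta, *_ = np.linalg.lstsq(X, rainfall, rcond=None)
    pred = X @ beta
    ss_res = np.sum((rainfall - pred) ** 2)
    ss_tot = np.sum((rainfall - rainfall.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
sst = rng.normal(size=(60, 3))  # three synthetic SST indices, 60 dekads
# synthetic rainfall: cooler Pacific index -> wetter, plus noise
rain = 50 - 8 * sst[:, 0] + 5 * sst[:, 1] + rng.normal(scale=4, size=60)
beta, r2 = fit_rainfall(sst, rain)
print(np.round(beta, 1), round(r2, 2))
```

With real lagged SST predictors, the same machinery produces the 30-65% R² skill range reported in the abstract.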

Keywords: dekadal, Kiremt rainfall, monthly, Northern Ethiopia, sea surface temperature

Procedia PDF Downloads 140
1726 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay

Authors: Anderson Braga Mendes

Abstract:

Itaipu Hydroelectric Power Plant sits on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world, providing clean and renewable electrical energy for 17% of Brazil and 76% of Paraguay. The plant started generating in 1984. It comprises 20 Francis turbines and has an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant generated a total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of Itaipu Power Plant, from its upstream stretch (Porto Primavera and Rosana dams) downstream to the Itaipu dam itself, were taken into account in order to best estimate the increase or decrease in sediment yield, using data from 2001 to 2016. These data are collected through a network of 14 automatic sedimentometric stations managed by the company itself, operating on an hourly basis and covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the following 44 years, aiming to confer more precision upon the estimates based on updated data sets. The analysis of each monitoring station clearly shows strong increasing tendencies in sediment yield over the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso, and Carapá Rivers, the last of which is situated in Paraguay, whereas the others lie entirely in Brazilian territory.
Five lifespan scenarios considering different sediment-yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. These packages closely follow the Borland and Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan forecast of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, lending credibility to this most recent lifespan estimate.
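For intuition, a back-of-the-envelope lifespan estimate divides reservoir capacity by the volume of sediment trapped per year. This is a gross simplification of the Borland and Miller area-reduction method (which additionally redistributes deposits over elevation); every figure below is hypothetical, not Itaipu's.

```python
def years_to_fill(capacity_km3, trap_eff, annual_inflow_t, density_t_per_m3=1.2):
    """Rough reservoir lifespan: years until storage fills, assuming a
    constant sediment inflow (tonnes/yr), a trap efficiency (0-1), and a
    bulk density converting trapped tonnes to deposited volume."""
    capacity_m3 = capacity_km3 * 1e9
    trapped_m3_per_yr = trap_eff * annual_inflow_t / density_t_per_m3
    return capacity_m3 / trapped_m3_per_yr

# E.g. a hypothetical 29 km^3 reservoir trapping 90% of 200 Mt/yr:
print(round(years_to_fill(29.0, 0.9, 200e6)))  # 193 years
```

Real estimates such as the 168-year forecast above instead track how deposition reshapes the stage-storage curve year by year.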

Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance

Procedia PDF Downloads 274
1725 Ensuring Safe Operation by Providing an End-To-End Field Monitoring and Incident Management Approach for Autonomous Vehicles Based on ML/DL SW Stack

Authors: Lucas Bublitz, Michael Herdrich

Abstract:

By achieving the first commercialization approval in San Francisco, the autonomous driving (AD) industry has proven the technological maturity of SAE L4 AD systems and the corresponding software and hardware stack. This milestone marks the upcoming phase in the industry, where the focus is now on scaling and supervising larger autonomous vehicle (AV) fleets in different operation areas. This requires an operation framework that organizes and assigns responsibilities to the relevant AV technology and operation stakeholders: the AV system provider, the remote intervention operator, the MaaS provider, and the regulatory and approval authorities. This holistic operation framework comprises technological, processual, and organizational activities to ensure safe operation for fully automated vehicles. In supervising large autonomous vehicle fleets, a major focus is continuous field monitoring. The field monitoring approach must reflect the safety and security criticality of incidents in the field during driving operation. This includes an automatic containment approach, with the overall goal of avoiding safety-critical incidents and reducing downtime caused by malfunctions of the AD software stack. An end-to-end (E2E) field monitoring approach detects critical faults in the field, uses a knowledge-based approach to evaluate safety criticality, and supports the automatic containment of these E/E faults. Applying such an approach will ensure the scalability of AV fleets, which is determined by the handling of incidents in the field and by continuous regulatory compliance of the technology after enhancing the Operational Design Domain (ODD) or the function scope through Functions on Demand (FoD) over the entire digital product lifecycle.

Keywords: field monitoring, incident management, multicompliance management for AI in AD, root cause analysis, database approach

Procedia PDF Downloads 75
1724 Automatic Near-Infrared Image Colorization Using Synthetic Images

Authors: Yoganathan Karthik, Guhanathan Poravi

Abstract:

Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.
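The synthetic paired dataset described above requires deriving a NIR-like channel from visible-light images. One hypothetical recipe (the exact channel weights are illustrative assumptions, not the authors' method) exploits the fact that vegetation and warm materials reflect strongly in NIR, so red and green are weighted heavily:

```python
import numpy as np

def pseudo_nir(rgb):
    """Derive a pseudo-NIR channel from an RGB image in [0, 1].
    Weights are an illustrative assumption for synthetic-pair generation,
    not a physical sensor model."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    nir = 0.55 * r + 0.35 * g + 0.10 * b
    return np.clip(nir, 0.0, 1.0)

rgb = np.random.default_rng(1).random((4, 4, 3))  # toy 4x4 RGB image
nir = pseudo_nir(rgb)
print(nir.shape, float(nir.min()) >= 0.0, float(nir.max()) <= 1.0)
```

Pairing each visible image with its derived pseudo-NIR channel yields the (NIR input, color target) tuples on which a GAN generator can then be trained.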

Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data

Procedia PDF Downloads 43
1723 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limit on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts the morphology of cellular components for virtual-cell generation based on fluorescence cell-membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained on this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell-membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training, using one cell-membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images.
Similar training sessions with improved membrane image quality, including clear lining and shape of the membrane that clearly showed the boundaries of each cell, proportionally improved the nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
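The normalization step described in the Methods (rescaling each z-stack image to a mean pixel intensity of 0.5) can be sketched as follows; the stack below is random toy data, and the clipping convention is an assumption rather than the pipeline's documented behavior.

```python
import numpy as np

def normalize_mean(stack, target=0.5):
    """Rescale each image in a z-stack so its mean pixel intensity is
    (approximately) the target, clipping the result to [0, 1]."""
    out = np.empty_like(stack, dtype=float)
    for i, img in enumerate(stack):
        out[i] = np.clip(img * (target / img.mean()), 0.0, 1.0)
    return out

# Toy 20-slice stack of dim 8x8 images with intensities in [0, 0.3):
stack = np.random.default_rng(2).random((20, 8, 8)) * 0.3
norm = normalize_mean(stack)
print(np.round(norm.mean(axis=(1, 2)), 2))  # each slice mean ~= 0.5
```

Bringing every slice to a common mean keeps the transfer-learned model from having to compensate for acquisition-to-acquisition intensity drift.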

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 205