Search results for: acceleration performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13287

1167 Simulation of Optimum Sculling Angle for Adaptive Rowing

Authors: Pornthep Rachnavy

Abstract:

The purpose of this paper is twofold. First, we believe there is a significant relationship between sculling angle and sculling style in adaptive rowing. Second, we introduce a methodology for adaptive rowing, namely simulation, to identify the effectiveness of adaptive rowing. For our study, we simulate the arms-only single scull of adaptive rowing. The method for rowing the 1000 m course fastest was investigated by studying the sculling angle using simulation modeling. A simulation model of the rowing system was developed in the Matlab software package, based on equations of motion that incorporate the many variables governing boat movement, such as oar length, blade velocity and sculling style. The boat speed, power and energy consumption of the system were computed. The simulation model can predict the force acting on the boat. The optimum sculling angle was found by computer simulation. Inputs to the model are the sculling style of each rower and the sculling angle; the output of the model is the boat velocity over 1000 m. The present study suggests that an optimum sculling angle exists and depends on the sculling style. The optimum angles for blade entry and release with respect to the perpendicular through the pin are -57.00 and 22.0 degrees for the first style, -57.00 and 22.0 degrees for the second style, -51.57 and 28.65 degrees for the third style, and -45.84 and 34.38 degrees for the fourth style. A theoretical simulation for rowing has been developed and presented. The results suggest that it may be advantageous for rowers to select sculling angles appropriate to their sculling styles.
The optimum sculling angle of a rower depends on the sculling style of that rower. The findings of this paper can be summarised in three points: 1. There is an optimum sculling angle in the arms-only single scull of adaptive rowing. 2. The optimum sculling angle depends on the sculling style. 3. Computer simulation of rowing can identify opportunities for improving rowing performance by utilizing the kinematic description of rowing. The freedom to explore alternatives in speed, thrust and timing with the computer simulation will provide the coach with a tool for systematic assessment of rowing technique. In addition, the ability to use the computer to examine the very complex movements during rowing will help both the rower and the coach to conceptualize components of movement that may previously have been unclear or even undefined.
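The abstract does not give the underlying equations of motion, but the sweep-and-simulate idea can be sketched as below. Every constant here (boat-plus-rower mass, drag coefficient, drive force, stroke rate, drive fraction) is a hypothetical placeholder, not a value from the paper:

```python
import math

def race_time(entry_deg, release_deg, stroke_rate=0.5, dt=0.01):
    """Crude 1000 m race-time estimate for one entry/release angle pair.

    All constants (mass, drag coefficient, drive force, stroke rate)
    are hypothetical placeholders, not values from the paper.
    """
    mass, drag, max_force = 100.0, 3.5, 250.0        # kg, kg/m, N
    entry, release = math.radians(entry_deg), math.radians(release_deg)
    v, x, t = 0.01, 0.0, 0.0
    while x < 1000.0:
        phase = (t * stroke_rate) % 1.0              # position in stroke cycle
        if phase < 0.4:                              # drive: oar sweeps entry -> release
            oar = entry + (release - entry) * phase / 0.4
            force = max_force * math.cos(oar)        # propulsive component only
        else:                                        # recovery: no propulsion
            force = 0.0
        v = max(v + (force - drag * v * v) / mass * dt, 0.01)
        x += v * dt
        t += dt
    return t

# Sweep candidate entry angles (release fixed) to locate an optimum
# for this made-up parameter set, mimicking the paper's angle search.
best = min((race_time(e, 25.0), e) for e in range(-70, -30, 5))
print(best)
```

The same sweep, run per sculling style (i.e. per force profile), is what lets the optimum angle differ between rowers.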

Keywords: simulation, sculling, adaptive, rowing

Procedia PDF Downloads 465
1166 Analyses of Defects in Flexible Silicon Photovoltaic Modules via Thermal Imaging and Electroluminescence

Authors: S. Maleczek, K. Drabczyk, L. Bogdan, A. Iwan

Abstract:

It is known that industrial applications using solar panels constructed from silicon solar cells require high-efficiency performance. One of the main problems in solar panels is mechanical and structural defects of various kinds, which decrease the generated power. Various techniques are used to analyse defects in solar cells; thermal imaging, however, is a fast and simple method for locating defects. The main goal of this work was to analyse defects in constructed flexible silicon photovoltaic modules via thermal imaging and the electroluminescence method. This work was carried out within the GEKON project (No. GEKON2/O4/268473/23/2016) sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management. Thermal behavior was observed using a thermographic camera (VIGOcam v50, VIGO System S.A., Poland) with a conventional DC source. Electroluminescence was observed using a Steinbeis Center Photovoltaics (Stuttgart, Germany) setup equipped with a camera containing a Si-CCD, 16 Mpix Kodak KAF-16803-type detector. The camera has a typical spectral response in the range 350-1100 nm with a maximum QE of 60% at 550 nm. In our work, commercial silicon solar cells with a size of 156 × 156 mm were cut into nine parts (called single solar cells) and used to create photovoltaic modules with a size of 160 × 70 cm (containing about 80 single solar cells). Flexible silicon photovoltaic modules on polyamide or polyester fabric were constructed and investigated, taking into consideration anomalies on the surface of the modules. Thermal imaging provided evidence of visible voltage-activated conduction. In electroluminescence images, two regions are noticeable: darker regions, where the solar cell is inactive, and brighter regions corresponding to correctly working photovoltaic cells.
The electroluminescence method is non-destructive and gives higher image resolution, thereby allowing a more precise evaluation of microcracks in solar cells after the lamination process. Our study showed good correlations between the defects observed by thermal imaging and by electroluminescence. Finally, we can conclude that the thermographic examination of large-scale photovoltaic modules allows fast, simple and inexpensive localization of defects in single solar cells and modules. Moreover, the thermographic camera was also useful for detecting the electrical interconnections between single solar cells.
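The dark/bright segmentation described above amounts to thresholding the EL intensity map; a minimal sketch of the idea, where the tiny image array and the threshold value are illustrative inventions, not data from the study:

```python
# Flag inactive (dark) regions in a toy electroluminescence intensity map.
# Real EL frames would come from the Si-CCD camera; this 2D list is made up.
el_image = [
    [0.92, 0.90, 0.15, 0.88],
    [0.89, 0.12, 0.10, 0.91],
    [0.93, 0.88, 0.87, 0.90],
]

THRESHOLD = 0.5  # illustrative cutoff between inactive and working cells

def inactive_pixels(image, threshold=THRESHOLD):
    """Return (row, col) coordinates whose EL intensity marks an inactive area."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, value in enumerate(row)
            if value < threshold]

defects = inactive_pixels(el_image)
print(defects)  # dark pixels cluster where a cell is cracked or disconnected
```

Clusters of flagged pixels correspond to the darker, inactive regions the abstract describes; correlating them with hot spots in the thermal image is then a matter of comparing coordinates.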

Keywords: electro-luminescence, flexible devices, silicon solar cells, thermal imaging

Procedia PDF Downloads 316
1165 The School Governing Council as the Impetus for Collaborative Education Governance: A Case Study of Two Benguet Municipalities in the Highlands of Northern Philippines

Authors: Maria Consuelo Doble

Abstract:

For decades, basic public education in the Philippines has been beleaguered by a governance scenario of multi-layered decision-making and a lack of collaboration between sectors in addressing issues of poor access to schools, high dropout rates, low survival rates, and poor student performance. These chronic problems persisted despite multiple efforts, making it appear that the education system is incapable of reforming itself. In the mountainous rural towns of La Trinidad and Tuba, in the province of Benguet in Northern Philippines, collaborative education governance was catalyzed by the intervention of Synergeia Foundation, a coalition of individuals, institutions and organizations that aims to improve the quality of education in the Philippines. Its major thrust is to empower the major stakeholders at the community level to make education work by building the capacities of School Governing Councils (SGCs). Although mandated by the Department of Education in 2006, the SGCs in Philippine public elementary schools remained dysfunctional. After one year of capacity-building by Synergeia Foundation, some SGCs are already exhibiting active community-based multi-sectoral collaboration, while many are not. With the myriad factors hindering collaboration, Synergeia Foundation is now confronted with the pressing question: What are the factors that promote collaborative governance in the SGCs so that they can address the education-related issues they are facing? Using Emerson's (2011) framework on collaborative governance, this study analyzes the application of collaborative governance by high-functioning SGCs in the public elementary schools of Tuba and La Trinidad.
Findings of this action research indicate how the dynamics of collaboration, composed of three interactive and iterative components (principled engagement, shared motivation and capacity for joint action), have resulted in meaningful short-term impact such as stakeholder engagement and a decreased number of dropouts. The change in the behavior of stakeholders is indicative of adaptation to a more collaborative approach to governing education in Benguet highland settings such as Tuba and La Trinidad.

Keywords: basic public education, Benguet highlands, collaborative governance, School Governing Council

Procedia PDF Downloads 292
1164 Enhanced CNN for Rice Leaf Disease Classification in Mobile Applications

Authors: Kayne Uriel K. Rodrigo, Jerriane Hillary Heart S. Marcial, Samuel C. Brillo

Abstract:

Rice leaf diseases significantly impact yield in rice-dependent countries, affecting their agricultural sectors. As part of precision agriculture, early and accurate detection of these diseases is crucial for effective mitigation practices and for minimizing crop losses. Hence, this study proposes an enhancement to the Convolutional Neural Network (CNN), a widely used method for rice leaf disease image classification, by incorporating MobileViTV2, a recent architecture that combines CNN and Vision Transformer models while maintaining fewer parameters, making it suitable for broader deployment on edge devices. Our methodology utilizes a publicly available rice disease image dataset from Kaggle, which was validated by a university structural biologist following the guidelines provided by the Philippine Rice Research Institute (PhilRice). Modifications to the dataset include renaming certain disease categories and augmenting the rice leaf image data through rotation, scaling, and flipping. The enhanced dataset was then used to train the MobileViTV2 model using the Timm library. The results of our approach are as follows: the model achieved notable performance, with 98% accuracy in both training and validation, a training and validation loss of 6%, and a Receiver Operating Characteristic (ROC) curve ranging from 95% to 100% for each label. Additionally, the F1 score was 97%. These metrics demonstrate a significant improvement over a conventional CNN-based approach, which, in a previous 2022 study, achieved only 78% accuracy using 5 convolutional layers and 2 dense layers. Thus, it can be concluded that MobileViTV2, with its fewer parameters, outperforms traditional CNN models, particularly when applied to rice leaf disease image identification.
For future work, we recommend extending this model to include datasets validated by international rice experts and broadening the scope to accommodate biotic factors such as rice pest classification, as well as abiotic stressors such as climate, soil quality, and geographic information, which could improve the accuracy of disease prediction.
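The accuracy and F1 figures reported above are standard classification metrics; a small stdlib-only sketch of how they are computed from per-image predictions (the class names and label lists below are made up for illustration, not the study's data):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive):
    """Harmonic mean of precision and recall for one disease label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy labels for two hypothetical rice-leaf classes (illustrative only).
y_true = ['blast', 'blast', 'blight', 'blight', 'blast']
y_pred = ['blast', 'blight', 'blight', 'blight', 'blast']
print(accuracy(y_true, y_pred), f1_score(y_true, y_pred, 'blast'))
```

In a multi-class setting like the rice dataset, the per-label F1 values are typically averaged (macro or weighted) to give the single 97% figure.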

Keywords: convolutional neural network, MobileViTV2, rice leaf disease, precision agriculture, image classification, vision transformer

Procedia PDF Downloads 29
1163 Production, Characterisation and Assessment of Biomixture Fuels for Compression Ignition Engine Application

Authors: K. Masera, A. K. Hossain

Abstract:

Hardly any neat biodiesel satisfies the European EN 14214 standard for compression ignition engine application. To satisfy the EN 14214 standard, various additives are doped into biodiesel; however, biodiesel additives can cause other problems, such as an increase in particulate emissions and in specific fuel consumption. In addition, the additives can be expensive. Considering the increasing level of greenhouse gas (GHG) emissions and fossil fuel depletion, the use of biodiesel is forecast to grow in the near future. Hence, the negative aspects of biodiesel additives are likely to gain much more importance and will need to be replaced with better solutions. This study aims to satisfy the European standard EN 14214 by blending biodiesels derived from sustainable feedstocks. Waste Cooking Oil (WCO) and Animal Fat Oil (AFO) are two sustainable feedstocks in the EU (including the UK) for producing biodiesels. In the first stage of the study, these oils were transesterified separately and neat biodiesels (W100 & A100) were produced. Secondly, the biodiesels were blended together in various ratios: 80% WCO biodiesel and 20% AFO biodiesel (W80A20), 60% WCO and 40% AFO (W60A40), 50% WCO and 50% AFO (W50A50), 30% WCO and 70% AFO (W30A70), and 10% WCO and 90% AFO (W10A90). The prepared samples were analysed using a Thermo Scientific Trace 1300 Gas Chromatograph and an ISQ LT Mass Spectrometer (GC-MS). The GC-MS analysis gave the Fatty Acid Methyl Ester (FAME) breakdown of each fuel sample. It was found that the total saturation degree of the samples increased linearly (from 15% for W100 to 54% for A100) as the percentage of AFO biodiesel was increased. Furthermore, it was found that WCO biodiesel was mainly (82%) composed of polyunsaturated FAMEs.
Cetane numbers, iodine numbers, calorific values, lower heating values and densities (at 15 °C) of the samples were estimated using the mass percentage data of the FAMEs. In addition, kinematic viscosities (at 40 °C and 20 °C), densities (at 15 °C), heating values and flash point temperatures of the biomixture samples were measured in the lab. The estimated and measured characterisation results were found to be comparable. The current study concluded that the biomixture fuel samples W60A40 and W50A50 fully satisfied the European EN 14214 norms without any need for additives. Investigations of engine performance, exhaust emissions and combustion characteristics will be conducted to assess the full feasibility of the proposed biomixture fuels.
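Estimating a blend property from component mass fractions, as described above, typically reduces to a mass-weighted average of the neat-fuel properties. A sketch with illustrative numbers (the cetane values and blend fractions below are placeholders, not the paper's measurements):

```python
def blend_property(fractions, values):
    """Mass-weighted average of a fuel property over blend components."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "mass fractions must sum to 1"
    return sum(f * v for f, v in zip(fractions, values))

# Hypothetical neat-biodiesel cetane numbers for WCO and AFO methyl esters.
cetane_wco, cetane_afo = 52.0, 60.0

# W60A40: 60% WCO biodiesel + 40% AFO biodiesel by mass.
cn_w60a40 = blend_property([0.60, 0.40], [cetane_wco, cetane_afo])
print(cn_w60a40)
```

The paper goes one level deeper, weighting over individual FAME species rather than over the two neat biodiesels, but the arithmetic is the same.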

Keywords: biodiesel, blending, characterisation, CI engine

Procedia PDF Downloads 166
1162 Effects of Lower and Upper Body Plyometric Training on Electrocardiogram Parameters of University Athletes

Authors: T. N. Uzor, C. O. Akosile, G. O. Emeahara

Abstract:

Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and it is used by coaches and athletes in sports conditioning. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower and upper body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteers were apparently healthy, physically active, free of any lower and upper extremity bone injuries for the past year, and had no medical or orthopaedic injuries that might affect their participation. Ten subjects were purposively assigned to each of the three groups: lower body plyometric training (LBPT), upper body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises at moderate intensity: lower body (ankle hops, squat jumps, tuck jumps) and upper body (push-ups, medicine-ball chest throws and side throws). The data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while paired-samples t-tests were used to test the hypotheses. The results revealed that athletes trained using LBPT showed greater reductions in ECG parameters than those in the control group.
The results also revealed that athletes trained using either LBPT or UBPT showed no significant differences from the control group in the ECG parameters following ten weeks of plyometric training, except in the Q-wave, R-wave and S-wave (QRS) complex. Based on the findings of the study, it was recommended, among other things, that coaches include both LBPT and UBPT as part of athletes' overall training programmes from primary to tertiary institutions to optimise performance, reduce the risk of cardiovascular disease and promote a healthy lifestyle.
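The paired-samples t-test used here compares each athlete's pre- and post-training values on the same ECG parameter; a stdlib-only sketch (the QRS durations below are invented for illustration, not the study's measurements):

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic for pre/post measurements on the same subjects."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical QRS durations (ms) before and after ten weeks of LBPT.
pre = [96.0, 102.0, 98.0, 105.0, 99.0, 101.0]
post = [92.0, 97.0, 95.0, 100.0, 96.0, 98.0]
t_stat = paired_t(pre, post)
print(round(t_stat, 2))  # compare against the t distribution with n-1 df
```

The resulting statistic is compared against the t distribution with n-1 degrees of freedom to decide whether the pre/post change is significant, which is what SPSS reports.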

Keywords: concentric, eccentric, electrocardiogram, plyometric

Procedia PDF Downloads 143
1161 Usability Testing on Information Design through Single-Lens Wearable Device

Authors: Jae-Hyun Choi, Sung-Soo Bae, Sangyoung Yoon, Hong-Ku Yun, Jiyoung Kwahk

Abstract:

This study was conducted to investigate the effect of ocular dominance on recognition performance using a single-lens smart display designed for cycling. A total of 36 bicycle riders who had been cycling consistently were recruited and participated in the experiment. For safety reasons, the participants performed the tasks riding a bicycle on a stationary stand. Independent variables of interest included ocular dominance, bike usage, age group, and information layout. Recognition time (i.e., the time required to identify specific information, measured with an eye-tracker), error rate (i.e., a false answer or failure to identify the information within 5 seconds), and user preference scores were measured, and statistical tests were conducted to identify significant results. Recognition time and error ratio showed significant differences by ocular dominance, while the preference score did not. Recognition time was faster when the single-lens see-through display was worn on the dominant eye (average 1.12 s) than on the non-dominant eye (average 1.38 s). The error ratio of the information recognition task was significantly lower when the see-through display was worn on the dominant eye (average 4.86%) than on the non-dominant eye (average 14.04%). The interaction effect of ocular dominance and age group was significant with respect to recognition time and error ratio. The recognition time of users in their 40s was significantly longer than that of the other age groups when the display was placed on the non-dominant eye, while no difference was observed on the dominant eye. The error ratio showed the same pattern. Although no difference was observed for the main effects of ocular dominance and bike usage, the interaction effect between the two variables was significant with respect to preference score.
The preference score of daily bike users was higher when the display was placed on the dominant eye, whereas participants who use bikes for leisure showed the opposite preference pattern. It was found to be more effective and efficient to wear a see-through display on the dominant eye than on the non-dominant eye, although user preference was not affected by ocular dominance. Wearing a see-through display on the dominant eye is recommended since it is safer, helping the user recognize the presented information faster and more accurately, even if the user may not notice the difference.
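Error ratios like the 4.86% vs 14.04% reported above can be compared with a two-proportion z-test; a stdlib-only sketch, where the trial counts are invented to roughly reproduce those ratios (the study's actual counts and choice of test may differ):

```python
import math

def two_proportion_z(errors1, n1, errors2, n2):
    """z statistic for comparing two error proportions (pooled standard error)."""
    p1, p2 = errors1 / n1, errors2 / n2
    pooled = (errors1 + errors2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Invented trial counts chosen to roughly match the reported error ratios
# (about 4.86% on the dominant eye vs 14.04% on the non-dominant eye).
z = two_proportion_z(7, 144, 20, 142)
print(round(z, 2))  # negative z: fewer errors on the dominant eye
```

A |z| above 1.96 corresponds to significance at the 5% level, consistent with the significant ocular-dominance effect the abstract reports.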

Keywords: eye tracking, information recognition, ocular dominance, smart headwear, wearable device

Procedia PDF Downloads 273
1160 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment

Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu

Abstract:

Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then accelerate rapidly to a high rate after a certain period of operation (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing alone. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied to DM surfaces to delay their initial dissolution. Owing to conveying parts, the harsh downhole conditions, and the high dissolution rate of the base material, the current delay coatings relying on pure polymers have been found to perform well only at low temperature (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin-film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxide (PEO) coatings with advanced thin-film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the porous hard PEO coating and the chemically inert elastic-polymer sealing lead to the improved dissolution delay, and strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is also proposed to explain the delaying performance.
This study could not only benefit the oil and gas industry in unplugging High Temperature High Pressure (HTHP) unconventional resources that were previously inaccessible, but also potentially provides a technical route for other industries (e.g., biomedical, automobile, aerospace) where primer anti-corrosive protection on light Mg alloys is in high demand.

Keywords: dissolvable magnesium, coating, plasma electrolytic oxide, sealer

Procedia PDF Downloads 111
1159 Characterisation of Human Attitudes in Software Requirements Elicitation

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana

Abstract:

It is evident that there has been progress in the development and innovation of tools, techniques and methods for software development. Even so, there are few methodologies that include the human factor from the point of view of motivation, emotions and impact on the work environment; aspects that, when mishandled or not taken into consideration, increase the iterations in the requirements elicitation phase. This generates a broad number of changes in the characteristics of the system during its development and an over-investment of resources to obtain a final product that often does not live up to the expectations and needs of the client. Human factors such as emotions and personality traits are naturally associated with the process of developing software. However, most existing work is oriented towards the analysis of the final users of the software and does not take into consideration the emotions and motivations of the members of the development team. Given that, in industry, the strategies for selecting requirements engineers and/or analysts do not take said factors into account, it is important to identify and describe the relevant characteristics or personality traits in order to elicit requirements effectively. This research describes the main personality traits associated with requirements elicitation tasks through an analysis of the existing literature on the topic and a compilation of our experiences as software development project managers in the academic and productive sectors, allowing for the characterisation of a suitable profile for this job. Moreover, a psychometric test is used as an information-gathering technique and is applied to the personnel of some local companies in the software development sector. This information has become an important asset for a comparative analysis between the degree of effectiveness with which their software development teams are formed and the proposed profile.
The results show that, of the software development companies studied, 53.58% have selected the personnel for the task of requirements elicitation adequately, 37.71% possess some of the characteristics needed to perform the task, and 10.71% are inadequate. From this information, it is possible to conclude that 46.42% of the requirements engineers selected by the companies could perform other roles more adequately; a change that could improve the performance and competitiveness of the work team and, indirectly, the quality of the product developed. Likewise, the research allowed for the validation of the pertinence and usefulness of the psychometric instrument, as well as the accuracy of the characteristics of the proposed requirements engineer profile.

Keywords: emotions, human attitudes, personality traits, psychometric tests, requirements engineering

Procedia PDF Downloads 264
1157 Creating an Enabling Learning Environment for Learners with Visual Impairments in Lesotho Rural Schools by Using Asset-Based Approaches

Authors: Mamochana, A. Ramatea, Fumane, P. Khanare

Abstract:

An enabling learning environment is a significant and adaptive means of navigating learners' educational challenges. However, research has indicated that the provision of quality education in enabling environments, especially for learners with visual impairments (LVIs, hereafter) in rural schools, remains an ongoing challenge globally. Hence, LVIs often have a lower level of academic performance than their peers. To close this gap and fulfil learners' fundamental human right to receive an equal, quality education, the measures and structures that make an enabling learning environment a better place to learn must be better understood. This paper, therefore, intends to find possible means that rural schools in Lesotho can employ to make the learning environment for LVIs enabling. The present study aims to determine suitable assets that can be drawn upon to make the learning environment for LVIs enabling. The study is informed by the transformative paradigm and situated within a qualitative research approach. Data were generated through focus group discussions with twelve teachers who were purposefully selected from two rural primary schools in Lesotho. The generated data were then analysed thematically using Braun and Clarke's six-phase framework. The findings of the study indicated that the participating teachers do understand that rural schools boast assets (existing and hidden) that have a positive influence in responding to the special educational needs of LVIs. However, the participants also admitted that, although their schools boast such assets, they still have limited knowledge about the use of the existing assets and thus realized a need for improved collaboration, involvement of the existing assets, and enhancement of academic resources to make LVIs' learning environment enabling. The findings of this study highlight the significance of the effective use of assets.
This coincides with literature showing that recognizing and tapping into existing assets enables learning for LVIs. In conclusion, the participants in the current study indicated that for LVIs' learning environment to be enabling, there has to be sufficient use of the existing assets. The researchers, therefore, note that appropriate use of assets is good but may not be sufficient if the existing assets are not adequately managed; in that case, LVIs experience a vicious cycle of vulnerability. It was thus recommended that adequate use of assets and teachers' engagement as active assets should always be considered to make the learning environment a better place for LVIs to learn in the future.

Keywords: assets, enabling learning environment, rural schools, learners with visual impairments

Procedia PDF Downloads 108
1157 Determination of the Effective Economic and/or Demographic Indicators in Classification of European Union Member and Candidate Countries Using Partial Least Squares Discriminant Analysis

Authors: Esra Polat

Abstract:

Partial Least Squares Discriminant Analysis (PLSDA) is a statistical method for classification and consists of a classical Partial Least Squares Regression (PLSR) in which the dependent variable is a categorical one expressing the class membership of each observation. PLSDA can be applied in many cases where classical discriminant analysis cannot, for example, when the number of observations is low and the number of independent variables is high. When there are missing values, PLSDA can be applied to the data that are available. Finally, it is well suited to cases where multicollinearity between the independent variables is high. The aim of this study is to determine the economic and/or demographic indicators that are effective in grouping the 28 European Union (EU) member countries and 7 candidate countries (including the potential candidates Bosnia and Herzegovina (BiH) and Kosova), using the data set obtained from the World Bank database for 2014. Leaving political issues aside, the analysis is concerned only with the economic and demographic variables that potentially influence a country's eligibility for EU entrance. Hence, this study analyses both the performance of the PLSDA method in classifying the countries correctly into their pre-defined groups (candidate or member) and the differences between the EU countries and the candidate countries in terms of these indicators. As a result of the PLSDA, a percentage correctness of 100% indicates that all 35 countries are classified correctly. Moreover, the most important variables determining the status of member and candidate countries in terms of economic indicators are identified as 'external balance on goods and services (% GDP)', 'gross domestic savings (% GDP)' and 'gross national expenditure (% GDP)', meaning that, for 2014, the economic structure of a country is the most important determinant of EU membership.
Subsequently, the model was validated to prove its predictive ability by using the data set for 2015. For the prediction sample, 97.14% of the countries are correctly classified. An interesting result is obtained for BiH alone, which is still a potential candidate for the EU but is predicted as an EU member when the 2015 indicator data set is used as a prediction sample. Although BiH has made a significant transformation from a war-torn country to a semi-functional state, ethnic tensions, nationalistic rhetoric and political disagreements are still evident, which inhibit Bosnian progress towards the EU.
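The mechanics of PLSDA can be illustrated with a toy one-latent-variable version in plain Python: class membership is coded numerically, a weight vector is built from the indicator/class covariance, and countries are scored along that single component. All values below are fabricated, and a real analysis would centre and scale the World Bank indicators and use several components:

```python
def plsda_one_component(X, y):
    """Fit a one-latent-variable PLS-DA: y coded as +1 (member) / -1 (candidate).

    Toy re-implementation for illustration; real analyses use more
    components and proper preprocessing of the indicator matrix.
    """
    n, p = len(X), len(X[0])
    # Weight vector w proportional to X^T y (covariance of each indicator
    # with the class code), then normalised to unit length.
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores t = Xw, then regress y on t to get the prediction rule.
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return w, b

def predict(x, w, b):
    score = b * sum(xj * wj for xj, wj in zip(x, w))
    return 1 if score >= 0 else -1

# Made-up, mean-centred values of two indicators (e.g. external balance and
# gross domestic savings, % GDP) for six countries; +1 = member, -1 = candidate.
X = [[1.0, 0.8], [0.9, 1.1], [1.2, 0.7], [-1.0, -0.9], [-0.8, -1.2], [-1.1, -0.6]]
y = [1, 1, 1, -1, -1, -1]
w, b = plsda_one_component(X, y)
correct = sum(predict(x, w, b) == yi for x, yi in zip(X, y))
print(correct / len(y))  # fraction classified correctly
```

The "percentage correctness" reported in the abstract is exactly this fraction, computed on the training set for 2014 and on the 2015 prediction sample.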

Keywords: classification, demographic indicators, economic indicators, European Union, partial least squares discriminant analysis

Procedia PDF Downloads 281
1156 Determinants of Probability Weighting and Probability Neglect: An Experimental Study of the Role of Emotions, Risk Perception, and Personality in Flood Insurance Demand

Authors: Peter J. Robinson, W. J. Wouter Botzen

Abstract:

Individuals often over-weight low probabilities and under-weight moderate to high probabilities, however very low probabilities are either significantly over-weighted or neglected. Little is known about factors affecting probability weighting in Prospect Theory related to emotions specific to risk (anticipatory and anticipated emotions), the threshold of concern, as well as personality traits like locus of control. This study provides these insights by examining factors that influence probability weighting in the context of flood insurance demand in an economic experiment. In particular, we focus on determinants of flood probability neglect to provide recommendations for improved risk management. In addition, results obtained using real incentives and no performance-based payments are compared in the experiment with high experimental outcomes. Based on data collected from 1’041 Dutch homeowners, we find that: flood probability neglect is related to anticipated regret, worry and the threshold of concern. Moreover, locus of control and regret affect probabilistic pessimism. Nevertheless, we do not observe strong evidence that incentives influence flood probability neglect nor probability weighting. The results show that low, moderate and high flood probabilities are under-weighted, which is related to framing in the flooding context and the degree of realism respondents attach to high probability property damages. We suggest several policies to overcome psychological factors related to under-weighting flood probabilities to improve flood preparations. These include policies that promote better risk communication to enhance insurance decisions for individuals with a high threshold of concern, and education and information provision to change the behaviour of internal locus of control types as well as people who see insurance as an investment. 
Multi-year flood insurance may also prevent short-sighted behaviour of people who have a tendency to regret paying for insurance. Moreover, bundling low-probability/high-impact risks with more immediate risks may achieve an overall covered risk which is less likely to be judged as falling below thresholds of concern. These measures could aid the development of a flood insurance market in the Netherlands, for which we find there to be demand.
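
The over-weighting of low probabilities and under-weighting of moderate-to-high probabilities described above is commonly modelled with the Tversky-Kahneman one-parameter weighting function; the sketch below uses the often-cited parameter γ = 0.61 purely for illustration, not a value estimated in this study:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function.

    Produces the inverse-S shape: low probabilities are over-weighted
    and moderate-to-high probabilities are under-weighted.
    """
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

# a 1% flood probability is over-weighted, a 90% probability under-weighted
w_low, w_high = tk_weight(0.01), tk_weight(0.90)
```

Note that probability *neglect*, as studied here, corresponds instead to treating w(p) as zero once the probability falls below a threshold of concern.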

Keywords: flood insurance demand, prospect theory, risk perceptions, risk preferences

Procedia PDF Downloads 276
1155 Airborne CO₂ Lidar Measurements for Atmospheric Carbon and Transport: America (ACT-America) Project and Active Sensing of CO₂ Emissions over Nights, Days, and Seasons 2017-2018 Field Campaigns

Authors: Joel F. Campbell, Bing Lin, Michael Obland, Susan Kooi, Tai-Fang Fan, Byron Meadows, Edward Browell, Wayne Erxleben, Doug McGregor, Jeremy Dobler, Sandip Pal, Christopher O'Dell, Ken Davis

Abstract:

The Active Sensing of CO₂ Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center instrument funded by NASA’s Science Mission Directorate that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO₂) mixing ratios in support of the NASA ASCENDS mission. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. The ACES design demonstrates advanced technologies critical for developing an airborne simulator and a spaceborne instrument with lower platform size, mass, and power consumption, and with improved performance. The Atmospheric Carbon and Transport – America (ACT-America) is an Earth Venture Suborbital-2 (EVS-2) mission sponsored by the Earth Science Division of NASA’s Science Mission Directorate. A major objective is to enhance knowledge of the sources/sinks and transport of atmospheric CO₂ through the application of remote and in situ airborne measurements of CO₂ and other atmospheric properties on a range of spatial and temporal scales. ACT-America consists of five campaigns to measure regional carbon and evaluate transport under various meteorological conditions in three regional areas of the Continental United States. Regional CO₂ distributions of the lower atmosphere were observed from the C-130 aircraft by the Harris Corp. Multi-Frequency Fiber Laser Lidar (MFLL) and the ACES lidar. The airborne lidars provide unique data that complement the more traditional in situ sensors. This presentation shows the applications of CO₂ lidars in support of these science needs.
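
In an IM-CW lidar, range is typically recovered by cross-correlating the received intensity waveform with the transmitted modulation. A minimal numpy sketch of that idea, with illustrative parameters (modulation band, sample rate, noise level and delay are all hypothetical, not instrument values):

```python
import numpy as np

fs = 1.0e6                      # sample rate, Hz (illustrative)
T = 1.0e-3                      # sweep duration, s
t = np.arange(int(fs * T)) / fs
# linear intensity-modulation sweep, 50 kHz -> 150 kHz (hypothetical band)
f0, bandwidth = 50e3, 100e3
ref = np.cos(2 * np.pi * (f0 * t + 0.5 * (bandwidth / T) * t**2))

true_delay = 120                # simulated round-trip delay, in samples
rng = np.random.default_rng(0)
echo = 0.01 * np.roll(ref, true_delay) + 0.005 * rng.standard_normal(ref.size)

# circular cross-correlation via FFT; the peak lag gives the round-trip delay
xcorr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(ref))).real
est_delay = int(np.argmax(xcorr))
range_m = est_delay * 3.0e8 / (2 * fs)   # one-way range from round-trip delay
```

The correlation gain over the full sweep is what lets a weak, noisy echo yield a sharp delay estimate.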

Keywords: CO₂ measurement, IM-CW, CW lidar, laser spectroscopy

Procedia PDF Downloads 164
1154 Impact of Electric Vehicles on Energy Consumption and Environment

Authors: Amela Ajanovic, Reinhard Haas

Abstract:

Electric vehicles (EVs) are considered an important means to cope with current environmental problems in transport. However, their high capital costs and limited driving ranges pose major barriers to a broader market penetration. The core objective of this paper is to investigate the future market prospects of various types of EVs from an economic and ecological point of view. Our method of approach is based on the calculation of the total cost of ownership of EVs in comparison to conventional cars, and on a life-cycle approach to assess environmental benignity. The most crucial parameters in this context are km driven per year, the depreciation time of the car and the interest rate. The analysis of future prospects is based on technological learning regarding the investment costs of batteries. The major results are as follows. The major disadvantages of battery electric vehicles (BEVs) are the high capital costs, mainly due to the battery, and a low driving range in comparison to conventional vehicles. These problems could be reduced with plug-in hybrids (PHEVs) and range extenders (REXs). These technologies have lower CO₂ emissions over the whole energy supply chain than conventional vehicles, but unlike BEVs they are not zero-emission vehicles at the point of use. The number of km driven has a higher impact on total mobility costs than the learning rate. Hence, the use of EVs as taxis and in car-sharing leads to the best economic performance. The most popular EVs are currently full hybrid EVs. They have only slightly higher costs and similar operating ranges to conventional vehicles. But since they are dependent on fossil fuels, they can only be seen as an energy efficiency measure. However, they can serve as a bridging technology as long as BEVs and fuel cell vehicles have not gained high popularity, and together with PHEVs and REXs contribute to faster technological learning and reductions in battery costs. 
Regarding the promotion of EVs, the best results could be reached with a combination of monetary and non-monetary incentives, as in Norway, for example. The major conclusion is that to harvest the full environmental benefits of EVs, a very important aspect is the introduction of CO₂-based fuel taxes. These should help ensure that the electricity for EVs is generated from renewable energy sources; otherwise, total CO₂ emissions are likely to be higher than those of conventional cars.
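
The sensitivity of total cost of ownership to km driven per year follows from annualizing the capital cost with the interest rate and depreciation time and spreading it over annual mileage. A simplified sketch (all cost figures are hypothetical, not the paper's inputs):

```python
def tco_per_km(capital, years, interest, km_per_year, running_cost_per_km):
    """Total cost of ownership per km: annualized capital plus running costs.

    The capital recovery factor (CRF) converts the purchase price into an
    equivalent constant annual payment over the depreciation time.
    """
    crf = interest * (1 + interest) ** years / ((1 + interest) ** years - 1)
    return capital * crf / km_per_year + running_cost_per_km

# hypothetical BEV (high capital, cheap energy) vs conventional car
bev_low_use = tco_per_km(35000, 10, 0.05, 10000, 0.04)
bev_high_use = tco_per_km(35000, 10, 0.05, 30000, 0.04)   # taxi / car-sharing
icev_low_use = tco_per_km(22000, 10, 0.05, 10000, 0.09)
```

With these illustrative numbers, high-utilization use (taxis, car-sharing) shrinks the capital share per km, which is exactly why the abstract finds such uses give EVs the best economic performance.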

Keywords: costs, mobility, policy, sustainability

Procedia PDF Downloads 226
1153 Fabrication and Characteristics of Ni Doped Titania Nanotubes by Electrochemical Anodization

Authors: J. Tirano, H. Zea, C. Luhrs

Abstract:

It is well known that titanium dioxide is a semiconductor with several applications in photocatalytic processes. Its band gap makes it very interesting for the manufacture of photoelectrodes used in photoelectrochemical cells for hydrogen production, a clean and environmentally friendly fuel. The synthesis of 1D titanium dioxide nanostructures, such as nanotubes, makes it possible to produce more efficient photoelectrodes for solar-energy-to-hydrogen conversion. In essence, this is because it increases the charge transport rate, decreasing recombination. However, its principal constraint is that it is mainly sensitive to the UV range, which represents a very low percentage of the solar radiation that reaches the earth's surface. One of the alternatives for modifying TiO2’s band gap and improving its photoactivity under visible light irradiation is to dope the nanotubes with transition metals. This option requires fabricating efficient nanostructured photoelectrodes with controlled morphology and specific properties able to offer a suitable surface area for metallic doping. Hence, one of the central challenges in photoelectrochemical cells is currently the construction of nanomaterials with a proper band position for driving the reaction while absorbing energy over the visible spectrum. This research focuses on the synthesis and characterization of Ni-doped TiO2 nanotubes for improving their photocatalytic activity in solar energy conversion applications. Initially, titanium dioxide nanotubes (TNTs) with controlled morphology were synthesized by two-step potentiostatic anodization of titanium foil. The anodization was carried out at room temperature in an electrolyte composed of ammonium fluoride, deionized water and ethylene glycol. Subsequent thermal annealing of the as-prepared TNTs was conducted in air between 450 °C and 550 °C. Afterwards, the nanotubes were superficially modified by nickel deposition. 
The morphology and crystalline phase of the samples were characterized by SEM, EDS and XRD analyses before and after nickel deposition. The photoelectrochemical performance of the photoelectrodes was determined using typical electrochemical characterization techniques. The morphological characterization and the associated electrochemical behavior analysis are discussed to establish the effect of nickel nanoparticle modification on the TiO2 nanotubes. The methodology proposed in this research allows the use of other transition metals for nanotube surface modification.

Keywords: dimensionally stable electrode, nickel nanoparticles, photo-electrode, TiO₂ nanotubes

Procedia PDF Downloads 178
1152 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering

Authors: R. Nandhini, Gaurab Mudbhari

Abstract:

Ranging from the field of health care to self-driving cars, machine learning and deep learning algorithms have revolutionized fields where images and visual-oriented data are properly utilized. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the tasks that have helped machine learning and deep learning models become state-of-the-art for fields where images are key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, motivating models that utilize more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. The CNN, well known for its ability to detect spatial hierarchies, serves as one core model in our study. The ViT serves as the other core model, reflecting a modern classification method whose self-attention mechanism makes it more robust by allowing it to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. 
In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques like k-means applied to embeddings derived from CNN models. DEC, a prominent model in the field of clustering, has gained the attention of many ML engineers because of its ability to combine feature learning and clustering into a single framework; its main goal is to improve clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, via a probabilistic clustering approach.
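
As a concrete illustration of the embeddings-then-k-means pipeline mentioned above, the following numpy sketch clusters toy feature vectors standing in for CNN embeddings; in the paper's setting the embeddings would come from a trained CNN, and DEC would refine them jointly with the cluster assignments:

```python
import numpy as np

def kmeans(X, k, iters=50):
    # deterministic init from spread-out data points (k-means++ in practice)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign each embedding to its nearest center (squared Euclidean)
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        # move each center to the mean of its assigned embeddings
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# toy "embeddings": two well-separated blobs standing in for two image classes
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0.0, 0.3, (50, 8)), rng.normal(3.0, 0.3, (50, 8))])
labels = kmeans(emb, 2)
```

The key point of DEC relative to this baseline is that the embedding itself is updated to make the clusters tighter, rather than being fixed before clustering begins.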

Keywords: machine learning, deep learning, image classification, image clustering

Procedia PDF Downloads 17
1151 The Levels of Neurosteroid 7β-Hydroxy-Epiandrosterone in Men and Pregnant Women

Authors: J. Vitku, L. Kolatorova, T. Chlupacova, J. Heracek, M. Hill, M. Duskova, L. Starka

Abstract:

Background: 7β-hydroxy-epiandrosterone (7β–OH-EpiA) is an endogenous steroid that has been shown to exert neuroprotective and anti-inflammatory effects in vitro as well as in animal models. However, to the best of our knowledge, no information is available about the concentration of this androgen metabolite in the human population. The aim of the study was to measure and compare levels of 7β–OH-EpiA in men and pregnant women in different biological fluids and to evaluate the relationship between 7β–OH-EpiA in men and their sperm quality. Methods: First, a sensitive isotope-dilution high-performance liquid chromatography-mass spectrometry method for the measurement of 7β–OH-EpiA in different biological fluids was developed. Validation of the method met the requirements of FDA guidelines. Afterwards, 7β–OH-EpiA was analysed in the plasma and seminal plasma of 191 men with different degrees of infertility (healthy men, lightly infertile men, moderately infertile men, severely infertile men). Furthermore, the levels of 7β–OH-EpiA were measured in the plasma of 34 pregnant women in the 37th week of gestation and in corresponding cord plasma, which reflects steroid levels in the fetus. Results: Concentrations of 7β–OH-EpiA in seminal plasma were significantly higher in severely infertile men in comparison with healthy men and lightly infertile men. The same trend was observed when blood plasma was evaluated. Furthermore, plasmatic 7β–OH-EpiA negatively correlated with sperm concentration (-0.215; p < 0.01) and total sperm count (-0.15; p < 0.05). Seminal 7β–OH-EpiA was negatively associated with motility (-0.26; p < 0.01), progressively motile sperms (-0.233; p < 0.01) and nonprogressively motile sperms (-0.188; p < 0.05). Plasmatic 7β–OH-EpiA levels in men were generally higher in comparison with pregnant women. Levels of 7β–OH-EpiA were under the lower limit of quantification (LLOQ) in the majority of samples from pregnant women and cord plasma. 
Only 4 plasma samples of pregnant women and 7 cord blood plasma samples were above the LLOQ, and these were in the range of units of pg/ml. Conclusion: Based on available information, this is the first study measuring 7β–OH-EpiA in human samples. 7β–OH-EpiA is associated with lower sperm quality, and its role in this field is certainly worth exploring thoroughly. Interestingly, levels of 7β–OH-EpiA in pregnant women were extremely low despite the fact that steroid levels, including androgens, are generally higher during pregnancy. Acknowledgements: This work was supported by the project MH CR 17-30528 A from the Czech Health Research Council, MH CZ - DRO (Institute of Endocrinology - EU, 00023761) and by the MEYS CR (OP RDE, Excellent research - ENDO.CZ).

Keywords: 7β-hydroxy-epiandrosterone, steroid, sperm quality, pregnancy

Procedia PDF Downloads 256
1150 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies and low-power electronic devices has led to a considerable amount of electromagnetic energy being available in the environment and to the expansion of smart application technologies. These applications have been used in Internet of Things devices and 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio-frequency-based energy harvesting technology is especially suitable for wirelessly powering sensors using a rectenna, since it can be completely integrated into the structure hosting the distributed sensors, reducing its cost, maintenance and environmental impact. The rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into a direct current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used for the inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized using the Computer Simulation Technology (CST) software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). 
In order to evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very low consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which increased from 0.2 mW to 0.6 mW.
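
The DC power delivered to the resistive load follows directly from P = V²/R; a sketch of the arithmetic behind the reported figures (the ~4 V load voltage is back-calculated here, not a value reported in the abstract):

```python
load_ohms = 27.0e3        # resistive load used to evaluate the EW
area_m2 = 0.12            # EW surface area

p_harvested = 0.6e-3      # reported harvested power, W
# DC voltage across the load implied by P = V^2 / R (back-calculated)
v_dc = (p_harvested * load_ohms) ** 0.5
# areal power density delivered by the wall
density_w_per_m2 = p_harvested / area_m2
```

The resulting density of about 5 mW/m² puts the reported 0.6 mW harvest into perspective for sizing very low consumption sensor loads.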

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 169
1149 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow

Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite

Abstract:

The area of the electricity sector that deals with meeting energy needs by coordinating hydroelectric generation is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy that provides electrical power to the system in a given period, with reliability and minimal cost. Therefore, it is necessary to determine an optimal generation schedule for each hydroelectric plant in each interval, so that the system meets the demand reliably, avoiding rationing in years of severe drought, and minimizes the expected cost of operation over the planning horizon, defining an appropriate strategy for thermal complementation. Several optimization algorithms specifically applied to this problem have been developed and are in use. Although providing solutions to various problems encountered, these algorithms have some weaknesses: difficulties in convergence, simplification of the original formulation of the problem, or limitations owing to the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist operation planning. Thus, this paper presents the development of a computational tool, namely Hydro-IA, for solving the identified optimization problem while offering the user easy handling. The adopted intelligent optimization technique is the Genetic Algorithm (GA), and the programming language is Java. First, the chromosomes were modeled; then the fitness function and the genetic operators involved were implemented; and finally the graphical interfaces for user access were drafted. The results obtained with the Genetic Algorithm were compared with the nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants using historical streamflow from 1953 to 1955. 
The comparison between the GA and NLP techniques shows that the operating cost obtained by the GA becomes increasingly smaller than that of the NLP as the number of interconnected hydroelectric plants increases. The program achieved coherent performance in problem resolution without the need to simplify the calculations, together with ease of manipulating the simulation parameters and visualizing the output results.
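
The GA machinery described above (chromosome modeling, fitness evaluation, genetic operators) can be sketched generically; in the sketch below the chromosome is a normalized generation schedule and the fitness is a hypothetical surrogate cost, not the actual hydrothermal objective used in Hydro-IA (which is implemented in Java):

```python
import numpy as np

def ga_minimize(cost, dim, pop=40, gens=60, seed=0):
    """Minimize `cost` over [0, 1]^dim with a simple real-coded GA."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(0.0, 1.0, (pop, dim))       # chromosomes: normalized schedules
    for _ in range(gens):
        f = np.array([cost(x) for x in P])
        elite = P[np.argsort(f)][: pop // 2]    # truncation selection
        a = elite[rng.integers(0, len(elite), pop)]
        b = elite[rng.integers(0, len(elite), pop)]
        mask = rng.random((pop, dim)) < 0.5     # uniform crossover
        P = np.where(mask, a, b)
        mut = rng.random(P.shape) < 0.1         # per-gene mutation
        P = np.clip(P + mut * rng.normal(0.0, 0.05, P.shape), 0.0, 1.0)
        P[0] = elite[0]                         # elitism: never lose the best
    f = np.array([cost(x) for x in P])
    return P[np.argmin(f)], float(f.min())

# hypothetical surrogate cost: squared deviation from a target release profile
target = np.linspace(0.2, 0.8, 12)
best, val = ga_minimize(lambda x: float(((x - target) ** 2).sum()), 12)
```

In the real problem the fitness would encode generation costs, hydraulic coupling between the seven plants and the demand/rationing constraints.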

Keywords: energy, optimization, hydrothermal power systems, artificial intelligence and genetic algorithms

Procedia PDF Downloads 420
1148 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers

Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang

Abstract:

Strawberry powdery mildew (PM) is a serious disease that has a significant impact on strawberry production. Field scouting is still the major way to find PM disease, which is not only labor intensive but also makes it almost impossible to monitor disease severity. To reduce the loss caused by PM disease and achieve faster automatic detection, this paper proposes an approach for detection of the disease based on image texture, classified with support vector machines (SVMs) and k-nearest neighbors (kNNs). The methodology of the proposed study is based on image processing and is composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction and classification. Two strawberry fields were used in this study. Images were acquired of healthy leaves and of leaves infected with PM (Sphaerotheca macularis) under artificial cloud lighting conditions. Colour thresholding was utilized to segment all images before textural analysis. A colour co-occurrence matrix (CCM) was used for the extraction of textural features. Forty textural features, related to physiological parameters of the leaves, were extracted from CCMs of National Television System Committee (NTSC) luminance and hue, saturation and intensity (HSI) images. The normalized feature data were utilized for training and validation of the developed classifiers. The classifiers were evaluated with internal, external and cross-validation. The best classifier was selected based on performance and accuracy. Experimental results suggested that the SVM classifier showed 98.33%, 85.33%, 87.33%, 93.33% and 95.0% accuracy on internal, external-I, external-II, 4-fold cross and 5-fold cross-validation, respectively, whereas the kNN classifier achieved 90.0%, 72.00%, 74.66%, 89.33% and 90.3% classification accuracy, respectively. 
The outcome of this study demonstrated that the SVM classified PM disease with the highest overall accuracy of 91.86% and a processing time of 1.1211 seconds. Therefore, the overall results led to the conclusion that the proposed approach can significantly support accurate and automatic identification and recognition of strawberry PM disease with the SVM classifier.
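
A minimal sketch of the kNN stage on toy two-class texture features (illustrative stand-ins for the forty CCM features, not the study's data):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    # squared Euclidean distances: each test sample vs. all training samples
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]           # indices of the k nearest
    # majority vote over the k nearest training labels
    return np.array([np.bincount(y_train[i]).argmax() for i in nn])

rng = np.random.default_rng(0)
# toy stand-ins for texture features: healthy (class 0) vs infected (class 1)
X = np.vstack([rng.normal(0.0, 0.5, (60, 4)), rng.normal(2.0, 0.5, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
idx = rng.permutation(120)
train, test = idx[:90], idx[90:]
acc = float((knn_predict(X[train], y[train], X[test]) == y[test]).mean())
```

The SVM stage differs in that it fits a maximum-margin decision boundary at training time instead of deferring all work to prediction, which is one reason it can generalize better on the external validation sets reported above.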

Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors

Procedia PDF Downloads 122
1147 Factors Determining the Vulnerability to Occupational Health Risk and Safety of Call Center Agents in the Philippines

Authors: Lito M. Amit, Venecio U. Ultra, Young-Woong Song

Abstract:

The business process outsourcing (BPO) industry in the Philippines is expanding rapidly, accounting for more than 2% of total employment. Currently, the BPO industry is confronted with several issues pertaining to sustainable productivity, such as meeting the staffing gap, a high rate of employee turnover and workforce retention, and the occupational health and safety (OHS) of call center agents. We conducted a survey of OHS programs and health concerns among call center agents in the Philippines and determined the sociocultural factors that affect the vulnerability of call center agents to occupational health risks and hazards. The majority of the agents affirmed that OHS programs are implemented and that OHS orientation and emergency procedures were conducted at employment initiation, and they perceived the working environment as favorable and convenient except for occasional noise disturbances and acoustic shock, and visual and voice fatigue. Male agents can adjust more easily than female agents to the demands and changes in their work environment and to flexible work schedules. Female agents have a higher tendency to be pressured and humiliated by low work performance, and experience a higher incidence of emotional abuse, psychological abuse, and physical stress than male agents. The majority of the call center agents had a night-shift schedule and, regardless of other factors, night-shift work brings higher stress to agents. While working in a call center, a higher incidence of headaches and insomnia, burnout, suppressed anger, anxiety, and depression was experienced by female agents, younger agents (21-25 years old) and those on the night shift than by their counterparts. The most common musculoskeletal disorders include body pain in the neck, shoulders and back, and hand and wrist disorders; these are commonly experienced by female and younger workers. About 30% experienced symptoms of cardiovascular and gastrointestinal disorders and weakened immune systems. 
Overall, these findings show the variable vulnerability of different subpopulations of call center agents and are important for occupational health risk prevention and management toward a sustainable human resource for the BPO industry in the Philippines.

Keywords: business process outsourcing industry, health risk of call center agents, socio-cultural determinants, Philippines

Procedia PDF Downloads 494
1146 Spatial Suitability Assessment of Onshore Wind Systems Using the Analytic Hierarchy Process

Authors: Ayat-Allah Bouramdane

Abstract:

Since 2010, there have been sustained decreases in the unit costs of onshore wind energy and large increases in its deployment, varying widely across regions. Onshore wind production is affected by air density (because cold air is denser and therefore more effective at producing wind power) and by wind speed, as wind turbines cannot operate in very low or extreme stormy winds. The wind speed is essentially affected by the surface friction, or roughness, and other topographic features of the land, which slow down winds significantly over the continent. Hence, the identification of the most appropriate locations for onshore wind systems is crucial to maximize their energy output and therefore minimize their Levelized Cost of Electricity (LCOE). This study focuses on the preliminary assessment of onshore wind energy potential in several areas in Morocco, with a particular focus on the Dakhla city, by analyzing the diurnal and seasonal variability of wind speed for different hub heights, the frequency distribution of wind speed, the wind rose and wind performance indicators such as wind power density, capacity factor, and LCOE. In addition to the climate criterion, other criteria (i.e., topography, location, environment) were selected from Geographic Referenced Information (GRI), reflecting different considerations. The impact of each criterion on the suitability map of onshore wind farms was identified using the Analytic Hierarchy Process (AHP). We find that the majority of suitable zones are located along the Atlantic Ocean and the Mediterranean Sea. 
We discuss the sensitivity of the onshore wind site suitability to different aspects, such as the methodology (comparing the Multi-Criteria Decision-Making (MCDM)-AHP results to the Mean-Variance Portfolio optimization framework) and the potential impact of climate change on this suitability map, and provide final recommendations for the Moroccan energy strategy by analyzing whether Morocco's actual onshore wind installations are located within areas deemed suitable. This analysis may serve as a decision-making framework for cost-effective investment in onshore wind power in Morocco and help shape the future sustainable development of the Dakhla city.
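
The AHP weighting step works on a pairwise comparison matrix whose principal eigenvector gives the criterion weights, checked by a consistency ratio. A sketch with a hypothetical 3-criterion matrix (the study's actual criteria hierarchy and judgments are not reproduced here):

```python
import numpy as np

# hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three
# criteria, e.g. climate vs. topography vs. environment
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, i].real)
weights /= weights.sum()                 # normalized criterion weights

n = A.shape[0]
lam_max = eigvals.real[i]
ci = (lam_max - n) / (n - 1)             # consistency index
ri = 0.58                                # Saaty's random index for n = 3
cr = ci / ri                             # consistency ratio; < 0.1 acceptable
```

The weight vector is then combined with the normalized GRI criterion layers to score each candidate cell on the suitability map.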

Keywords: analytic hierarchy process (AHP), Dakhla, geographic referenced information, Morocco, multi-criteria decision-making, onshore wind, site suitability

Procedia PDF Downloads 172
1145 On the Other Side of Shining Mercury: In Silico Prediction of Cold Stabilizing Mutations in Serine Endopeptidase from Bacillus lentus

Authors: Debamitra Chakravorty, Pratap K. Parida

Abstract:

Cold-adapted proteases enhance wash performance in low-temperature laundry, resulting in a reduction in energy consumption and wear of textiles, and are also used in the dehairing process in leather industries. Unfortunately, a possible drawback of using cold-adapted proteases is their instability at higher temperatures: wild-type cold-adapted proteases exhibit instability at higher temperatures and thus have short shelf lives. Therefore, proteases with broad temperature stability are required. Attempts to engineer cold-adapted proteases were made previously by directed evolution and random mutagenesis, but the time, capital, and labour involved in obtaining these variants are very demanding and challenging. Therefore, rational engineering for cold stability without compromising an enzyme's optimum pH and temperature for activity is the current requirement. In this work, mutations were rationally designed with the aid of a high-throughput computational methodology of network analysis, evolutionary conservation scores, and molecular dynamics simulations for Savinase from Bacillus lentus, with the intention of rendering the mutants cold stable without affecting their temperature and pH optima for activity. Further, an attempt was made to rationally incorporate a mutation in the most stable mutant obtained by this method to introduce oxidative stability; such enzymes are desired in detergents with bleaching agents. In silico analysis, performing 300 ns molecular dynamics simulations at 5 different temperatures, revealed that three mutants have better cold stability than the wild-type Savinase from Bacillus lentus. Conclusively, this work shows that cold adaptation without losing optimum temperature and pH stability, and additionally stability against oxidative damage, can be rationally designed by in silico enzyme engineering. 
The key findings of this work were: first, the in silico data for H5 (cold-stable Savinase), used as a control in this work, corroborated its reported wet-lab temperature stability data. Secondly, three cold-stable mutants of Savinase from Bacillus lentus were rationally identified. Lastly, a mutation that should stabilize Savinase against oxidative damage was additionally identified.

Keywords: cold stability, molecular dynamics simulations, protein engineering, rational design

Procedia PDF Downloads 141
1144 Process Modeling in an Aeronautics Context

Authors: Sophie Lemoussu, Jean-Charles Chaudemar, Robertus A. Vingerhoeds

Abstract:

Many innovative projects exist in the field of aeronautics, each addressing specific areas so as to reduce weight, increase autonomy, reduce CO2 emissions, etc. In many cases, such innovative developments are carried out by very small enterprises (VSE’s) or small and medium-sized enterprises (SME’s). A good example concerns airships, which are being studied as a real alternative for passenger and cargo transportation. Today, no international regulations propose a precise and sufficiently detailed framework for the development and certification of airships. The absence of such a regulatory framework requires very close contact with regulatory instances. However, VSE’s/SME’s do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses an additional challenge for those VSE’s/SME’s, in particular those that have system integration responsibilities and that must provide all the necessary evidence to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The main objective of this research is to provide a methodological framework enabling VSE’s/SME’s with limited resources to organize the development of airships while taking into account the constraints of safety, cost, time and performance. This paper contributes to this problem by proposing a Model-Based Systems Engineering approach. Through a comprehensive process modeling approach applied to the development processes, the regulatory constraints, existing best practices, etc., a good picture can be obtained of the process landscape that may influence the development of airships. To this effect, not only is the necessary regulatory information taken on board, but other international standards and norms on systems engineering and project management are also modeled and taken into account. 
In a next step, the model can be used to analyze the specific situation for given developments, derive critical paths for the development, identify potential conflicts between the norms, standards, and regulatory expectations, or identify those areas where not enough information is available. Once critical paths are known, optimization approaches can be used and decision support techniques can be applied so as to better support VSE’s/SME’s in their innovative developments. This paper reports on the adopted modeling approach, the retained modeling languages, and how they all fit together.

Keywords: aeronautics, certification, process modeling, project management, regulation, SME, systems engineering, VSE

Procedia PDF Downloads 163
1143 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate their feasibility for the manufacturing industry in Taiwan. We reviewed and analyzed six semi-quantitative risk management tools: UK – Control of Substances Hazardous to Health (COSHH), Europe – Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands – Dose-Related Effect Assessment Model (DREAM), Netherlands – Stoffenmanager (STOFFEN), Nicaragua – Dermal Exposure Ranking Method (DERM), and USA/Canada – Pesticide Handlers Exposure Database (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in that model was analyzed to identify the important evaluation indicators of each dermal exposure assessment model. To assess the effectiveness of the semi-quantitative assessment models, this study also generated quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to reveal the strength of its decision factors because the results for all industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated with the assessment result. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency are positively correlated. In the DREAM model, skin exposure, relative working time, and the working environment are positively correlated. In the RISKOFDERM model, the actual exposure situation and the exposure time are positively correlated.
We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models correlated poorly, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
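The verification step, correlating semi-quantitative model scores against quantitative exposure predictions, can be illustrated with a plain Pearson correlation. The sketch below uses only the Python standard library; the scores and predictions are hypothetical placeholders for five industries, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: semi-quantitative DERM-style scores vs. quantitative
# dermal-exposure predictions (e.g., mg/cm^2/h) for five industries.
derm_scores = [2.1, 3.4, 1.8, 4.0, 2.9]
quant_pred  = [0.12, 0.20, 0.10, 0.25, 0.17]
r = pearson_r(derm_scores, quant_pred)   # close to +1 for these toy data
```

A significance test on r (the p-values quoted above) would additionally require the t-distribution, e.g. via scipy.stats.pearsonr.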

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 170
1142 Fostering Creativity in Education Exploring Leadership Perspectives on Systemic Barriers to Innovative Pedagogy

Authors: David Crighton, Kelly Smith

Abstract:

The ability to adopt creative pedagogical approaches is increasingly vital in today’s educational landscape. This study examines the institutional barriers that hinder educators in the UK from embracing such innovation, focusing specifically on the experiences and perspectives of educational leaders. Current literature primarily focuses on the challenges that academics and teachers encounter, particularly highlighting how management culture and audit processes negatively affect their ability to be creative in classrooms and lecture theatres. However, this focus leaves a gap in understanding management perspectives, which is crucial for providing a more holistic insight into the challenges encountered in educational settings. To explore this gap, we are conducting semi-structured interviews with senior leaders across various educational contexts, including universities, schools, and further education colleges. This qualitative methodology, combined with thematic analysis, aims to uncover the managerial, financial, and administrative pressures these leaders face in fostering creativity in teaching and supporting professional learning opportunities. Preliminary insights indicate that educational leaders face significant barriers, such as institutional policies, resource limitations, and external performance indicators. These challenges create a restrictive environment that stifles educators' creativity and innovation. Addressing these barriers is essential for empowering staff to adopt more creative pedagogical approaches, ultimately enhancing student engagement and learning outcomes. By alleviating these constraints, educational leaders can cultivate a culture that fosters creativity and flexibility in the classroom. These insights will inform practical recommendations to support institutional change and enhance professional learning opportunities, contributing to a more dynamic educational environment.
In conclusion, this study offers a timely exploration of how leadership can influence the pedagogical landscape in a rapidly evolving educational context. The research seeks to highlight the crucial role that educational leaders play in shaping a culture of creativity and adaptability, ensuring that institutions are better equipped to respond to the challenges of contemporary education.

Keywords: educational leadership, professional learning, creative pedagogy, marketisation

Procedia PDF Downloads 16
1141 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings

Authors: Chen Wang, Jared Evans, Yan Asmann

Abstract:

With the rapid evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing need to accurately detect copy number variations (CNVs) as well. To address this research and clinical need, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reports. The developed methodology includes complete automation to increase usability, genome content and coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples are identified and removed automatically. Multiple experimental batches are routinely detected and reduced to a clean subset of samples before analysis. Algorithmic improvements were also made to somatic CNV detection as well as germline CNV detection in trio families. Additionally, a set of utilities is included to help users produce CNV plots for genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in whole-exome data from The Cancer Genome Atlas cancer samples and in a lymphoma case study with paired tumor and normal samples. We also show efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from phase III of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing its CNV calls with results from orthogonal copy number platforms.
Through our case studies, reuse of exome sequencing data for CNV calling offers several notable benefits, including better quality control of exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discoveries from under-utilized existing whole-exome and custom exome-panel data.
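The coverage-pattern idea behind paired tumor/normal CNV detection can be illustrated with a deliberately naive per-exon rule: normalize each sample to its own mean depth and threshold the log2 ratio. The depths and cutoffs below are made up for illustration and this is not the authors' algorithm, which additionally performs bias correction and segmentation:

```python
import math

def call_cnv(tumor_cov, normal_cov, gain=0.4, loss=-0.5):
    """Naive per-exon CNV calls from paired read depths: each sample is
    normalized to its own mean depth, then log2(tumor/normal) is thresholded."""
    mt = sum(tumor_cov) / len(tumor_cov)
    mn = sum(normal_cov) / len(normal_cov)
    calls = []
    for t, n in zip(tumor_cov, normal_cov):
        lr = math.log2((t / mt) / (n / mn))
        calls.append("gain" if lr >= gain else "loss" if lr <= loss else "neutral")
    return calls

# Hypothetical per-exon read depths for one tumor/normal pair
tumor  = [210, 205, 420, 415, 100, 200]
normal = [200, 198, 205, 201, 199, 202]
calls = call_cnv(tumor, normal)
```

Real callers would segment adjacent exons jointly and model GC and batch effects rather than thresholding each exon independently.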

Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing

Procedia PDF Downloads 257
1140 Mesoporous Titania Thin Films for Gentamicin Delivery and Bone Morphogenetic Protein-2 Immobilization

Authors: Ane Escobar, Paula Angelomé, Mihaela Delcea, Marek Grzelczak, Sergio Enrique Moya

Abstract:

The antibacterial capacity of bone-anchoring implants can be improved by the use of antibiotics that can be delivered to the surrounding medium after surgery. Mesoporous films have shown great potential in drug delivery for orthopedic applications, since pore size and thickness can be tuned to produce different surface areas and free volumes inside the material. This work shows the synthesis of mesoporous titania films (MTFs) by sol-gel chemistry and evaporation-induced self-assembly (EISA) on top of glass substrates. Pores with a diameter of 12 nm were observed by Transmission Electron Microscopy (TEM). A film thickness of 100 nm was measured by Scanning Electron Microscopy (SEM). Gentamicin was used to study antibiotic delivery from the film by means of high-performance liquid chromatography (HPLC). A Staphylococcus aureus strain was used to evaluate the effectiveness of the gentamicin-loaded films in inhibiting bacterial colonization. MC3T3-E1 pre-osteoblast cell proliferation experiments proved that MTFs have good biocompatibility and are a suitable surface for MC3T3-E1 cell proliferation. Moreover, images taken by confocal fluorescence microscopy using labeled vinculin showed good adhesion of the MC3T3-E1 cells to the MTFs, as well as a complex actin filament arrangement. In order to improve cell proliferation, Bone Morphogenetic Protein-2 (BMP-2) was adsorbed on top of the mesoporous film. Protein deposition was confirmed by contact angle measurements, which showed increasing hydrophobicity at higher protein concentrations. By measuring the dehydrogenase activity of MC3T3-E1 cells cultured on mesoporous titania films dually functionalized with gentamicin and BMP-2, it is possible to observe an improvement in cell proliferation. For this purpose, the absorbance of a yellow formazan dye, the product of the reduction of a water-soluble salt (WST-8) by the dehydrogenases, is measured.
In summary, this study shows that, by modifying the surface of MTFs with proteins and loading them with gentamicin, it is possible to achieve an antibacterial effect and improved cell growth.

Keywords: antibacterial, biocompatibility, bone morphogenetic protein-2, cell proliferation, gentamicin, implants, mesoporous titania films, osteoblasts

Procedia PDF Downloads 166
1139 Multi-Criteria Decision Making Tool for Assessment of Biorefinery Strategies

Authors: Marzouk Benali, Jawad Jeaidi, Behrang Mansoornejad, Olumoye Ajao, Banafsheh Gilani, Nima Ghavidel Mehr

Abstract:

The Canadian forest industry is seeking to identify and implement transformational strategies for enhanced financial performance through the emerging bioeconomy or, more specifically, through the concept of the biorefinery. For example, processing forest residues or the surplus of biomass available on mill sites into biofuels, biochemicals, and/or biomaterials is one of the attractive strategies, along with traditional wood and paper products and cogenerated energy. There are many possible process-product biorefinery pathways, each associated with a specific product portfolio carrying a different level of risk. Thus, it is not obvious which strategy the forest industry should select and implement. Therefore, there is a need for analytical and design tools that enable evaluating biorefinery strategies against a set of criteria from a perspective of short- and long-term sustainability, while selecting the existing core products as well as the new product portfolio. In addition, it is critical to assess the manufacturing flexibility needed to internalize the risk from market price volatility of each targeted bio-based product in the portfolio, prior to investing heavily in any biorefinery strategy. This paper will introduce a systematic methodology for designing integrated biorefineries using process systems engineering tools, as well as a multi-criteria decision making framework to put forward the most effective biorefinery strategies that fulfill the needs of the forest industry. Topics to be covered will include market analysis, techno-economic assessment, cost accounting, energy integration analysis, life cycle assessment, and supply chain analysis. This will be followed by a description of the vision as well as the key features and functionalities of the I-BIOREF software platform, developed by CanmetENERGY of Natural Resources Canada.
Two industrial case studies will be presented to support the robustness and flexibility of I-BIOREF software platform: i) An integrated Canadian Kraft pulp mill with lignin recovery process (namely, LignoBoost™); ii) A standalone biorefinery based on ethanol-organosolv process.
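The core of a multi-criteria decision making framework can be reduced, in its simplest form, to a weighted-sum ranking of strategies over the evaluation criteria. The sketch below is a minimal illustration of that idea only; the strategy names, criteria, scores, and weights are hypothetical and are not the I-BIOREF methodology, which combines techno-economic, life cycle, and supply chain analyses:

```python
def rank_strategies(scores, weights):
    """Rank strategies by the weighted sum of their criterion scores.
    scores: {strategy: {criterion: value}}, higher = better for every criterion.
    weights: {criterion: weight}; weights need not sum to 1."""
    total_w = sum(weights.values())
    ranked = {
        name: sum(weights[c] * v for c, v in crits.items()) / total_w
        for name, crits in scores.items()
    }
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical 0-10 scores for two biorefinery strategies
scores = {
    "lignin_recovery":    {"economics": 7, "environment": 8, "flexibility": 6},
    "ethanol_organosolv": {"economics": 6, "environment": 7, "flexibility": 9},
}
weights = {"economics": 0.5, "environment": 0.3, "flexibility": 0.2}
ranking = rank_strategies(scores, weights)
```

Note that the ranking depends on the weights: shifting weight from economics to flexibility would favor the second strategy, which is exactly why sensitivity of the weighting is a standard part of such frameworks.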

Keywords: biorefinery strategies, bioproducts, co-production, multi-criteria decision making, tool

Procedia PDF Downloads 232
1138 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. Causality analysis lies at the heart of many scientific and medical research inquiries. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and the predictors. Causal estimation is a scientific concept for comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semi-parametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. Finally, the finite-sample performance of the estimators was demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time was estimated after adjusting for biases arising from the high correlation between left-truncation and the possibly time-varying covariates. The bias in the covariates was corrected by estimating the density function of the left-truncation times. Besides, to relax the independence assumption between failure time and truncation time, the model incorporated the left-truncation variable as a covariate.
Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation functions. In summary, the ratio of the cumulative hazard functions between the treated and untreated experimental groups conveys the average causal effect for the entire population.
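The closing idea, a ratio of cumulative hazards between treated and untreated groups, can be sketched with the standard Nelson-Aalen estimator on right-censored data. The survival times below are fabricated for illustration and are unrelated to the Stanford heart transplant data; the sketch also ignores the covariate adjustment and left-truncation handling that the proposed estimating equations provide:

```python
def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard at the largest observed
    time. times: observation times; events: 1 = failure, 0 = right-censored."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    H = 0.0
    for _, d in data:
        if d:                   # failure contributes 1 / (number at risk)
            H += 1.0 / at_risk
        at_risk -= 1            # censored subjects only leave the risk set
    return H

# Hypothetical right-censored survival data for two arms
treated   = nelson_aalen([5, 8, 12, 14, 20], [1, 0, 1, 1, 0])
untreated = nelson_aalen([3, 4, 6, 9, 11],  [1, 1, 1, 0, 1])
ratio = treated / untreated     # a ratio below 1 suggests a protective effect
```

Under the transformation-model framework of the abstract, this crude ratio would be replaced by the covariate-adjusted cumulative hazards implied by the fitted model.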

Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 177