Search results for: optimal transport
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4876

526 Geographic Variation in the Baseline Susceptibility of Helicoverpa armigera (Hubner) (Noctuidae: Lepidoptera) Field Populations to Bacillus thuringiensis Cry Toxins for Resistance Monitoring

Authors: Muhammad Arshad, M. Sufian, Muhammad D. Gogi, A. Aslam

Abstract:

Transgenic cotton expressing Bacillus thuringiensis (Bt) toxins provides effective control of Helicoverpa armigera, one of the most damaging pests of the cotton crop. However, Bt cotton may not be the optimal solution owing to the selection pressure of Cry toxins. Because Bt cotton expresses the insecticidal proteins throughout the growing season, there is a risk of resistance development in the target pests. Regular monitoring and surveillance of the target pest's baseline susceptibility to Bt Cry toxins is crucial for early detection of any resistance development. The present study was conducted to monitor changes in the baseline susceptibility of field populations of H. armigera to the Bt Cry1Ac toxin. Field-collected larval populations were maintained in the laboratory on an artificial diet, and F1-generation larvae were used for diet-incorporated diagnostic studies. The LC₅₀ and MIC₅₀ were calculated to measure the level of resistance of each population as a ratio over the susceptible population. The monitoring results indicated a significant difference in the susceptibility (LC₅₀) of first, second, third and fourth instar H. armigera larval populations sampled from different cotton-growing areas over the study period 2016-17. The variation in susceptibility among the tested insects depended on the age of the insect, and susceptibility decreased with larval age. The overall results show that the average resistance ratio (RR) of all field-collected populations (FSD, SWL, MLT, BWP and DGK) exposed to the Bt toxin Cry1Ac ranged from 3.381-fold to 7.381-fold for 1st instars, 2.370-fold to 3.739-fold for 2nd instars, 1.115-fold to 1.762-fold for 3rd instars and 1.141-fold to 2.504-fold for 4th instars, with the maximum RR in the MLT population and the minimum in the FSD and SWL populations. The moult inhibitory concentrations of H. armigera larvae (1st-4th instars) exposed to different concentrations of the Bt Cry1Ac toxin indicated that, among all field populations, the Multan (MLT) and Bahawalpur (BWP) populations showed higher MIC₅₀ values than Faisalabad (FSD) and Sahiwal (SWL), whereas the DG Khan (DGK) population showed intermediate moult inhibitory concentrations. Establishing baseline susceptibility data for Bt Cry toxins before the widespread commercial release of transgenic Bt cotton cultivars in Pakistan is important for the development of more effective resistance monitoring programs to identify resistant H. armigera populations.
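
For reference, the resistance ratio reported above is conventionally defined as the ratio of a field population's lethal concentration to that of a laboratory-susceptible reference strain; a minimal statement of this standard definition (the notation is ours, not the authors'):

```latex
\mathrm{RR} \;=\; \frac{\mathrm{LC}_{50}\,(\text{field population})}{\mathrm{LC}_{50}\,(\text{susceptible reference})}
```

An RR near 1 indicates susceptibility comparable to the reference strain; the 3.381- to 7.381-fold values quoted above are this ratio.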

Keywords: Bt cotton, baseline, Cry1Ac toxins, H. armigera

Procedia PDF Downloads 141
525 Exploring the Synergistic Effects of Aerobic Exercise and Cinnamon Extract on Metabolic Markers in Insulin-Resistant Rats through Advanced Machine Learning and Deep Learning Techniques

Authors: Masoomeh Alsadat Mirshafaei

Abstract:

The present study explores the effect of an 8-week aerobic training regimen combined with cinnamon extract on serum irisin and leptin levels in insulin-resistant rats. Additionally, this research leverages various machine learning (ML) and deep learning (DL) algorithms to model the complex interdependencies between exercise, nutrition, and metabolic markers, offering a novel approach to obesity and diabetes research. Forty-eight Wistar rats were randomly divided into four groups: control, training, cinnamon, and training-cinnamon. The training protocol was conducted over 8 weeks, with sessions 5 days a week at 75-80% of VO₂max. The cinnamon and training-cinnamon groups were injected with 200 ml/kg/day of cinnamon extract. Data analysis included serum data, dietary intake, exercise intensity, and metabolic response variables, with blood samples collected 72 hours after the final training session. The dataset was analyzed using one-way ANOVA (P<0.05) and fed into various ML and DL models, including support vector machines (SVM), random forests (RF), and convolutional neural networks (CNN). Traditional statistical methods indicated that aerobic training, with and without cinnamon extract, significantly increased serum irisin and decreased leptin levels. Among the algorithms, the CNN model performed best at identifying specific interactions between cinnamon extract concentration and exercise intensity that optimize the increase in irisin and the decrease in leptin, achieving an accuracy of 92% and outperforming the SVM (85%) and RF (88%) models in predicting the optimal conditions for metabolic marker improvements. The study demonstrated that advanced ML and DL techniques can uncover nuanced relationships and potential cellular responses to exercise and dietary supplements that are not evident through traditional methods. These findings advocate for the integration of advanced analytical techniques in nutritional science and exercise physiology, paving the way for personalized health interventions in managing obesity and diabetes.
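
As an illustration of the kind of model comparison described above, the sketch below cross-validates an SVM and a random forest on a tabular dataset of exercise and dietary features; the feature matrix, labels, and hyperparameters are placeholders rather than the study's actual data or settings, and the CNN branch is omitted for brevity.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: rows = rats, columns = serum / diet / exercise features,
# labels = whether metabolic markers improved (hypothetical encoding).
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 6))          # 48 rats, 6 features
y = rng.integers(0, 2, size=48)       # improved vs. not improved

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```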

Keywords: aerobic training, cinnamon extract, insulin resistance, irisin, leptin, convolutional neural networks, exercise physiology, support vector machines, random forest

Procedia PDF Downloads 37
524 Demarcating Wetting States in Pressure-Driven Flows by Poiseuille Number

Authors: Anvesh Gaddam, Amit Agrawal, Suhas Joshi, Mark Thompson

Abstract:

An increase in surface area to volume ratio with a decrease in characteristic length scale leads to a rapid increase in pressure drop across a microchannel. Texturing the microchannel surfaces reduces the effective surface area, thereby decreasing the pressure drop. Surface texturing introduces two wetting states: a metastable Cassie-Baxter state and a stable Wenzel state. Predicting wetting transition in textured microchannels is essential for identifying optimal parameters leading to maximum drag reduction. Optical methods allow visualization only in confined areas; therefore, obtaining whole-field information on wetting transition is challenging. In this work, we propose a non-invasive method to capture wetting transitions in textured microchannels under flow conditions. To this end, we tracked the behavior of the Poiseuille number Po = f·Re (with f the friction factor and Re the Reynolds number) for a range of flow rates (5 < Re < 50), and different wetting states were qualitatively demarcated by observing the inflection points in the f·Re curve. Microchannels with both longitudinal and transverse ribs with a fixed gas fraction (δ, the ratio of shear-free area to total area) and different confinement ratios (ε, the ratio of rib height to channel height) were fabricated. The measured pressure drop values for all flow rates across the textured microchannels were converted into Poiseuille numbers. The transient behavior of the pressure drop across the textured microchannels revealed the collapse of the liquid-gas interface into the gas cavities. Three wetting states were observed at ε = 0.65 for both longitudinal and transverse ribs, whereas an early transition occurred at Re ~ 35 for longitudinal ribs at ε = 0.5, due to spontaneous flooding of the gas cavities as the liquid-gas interface ruptured at the inlet. In addition, the pressure drop in the Wenzel state was found to be less than in the Cassie-Baxter state. Three-dimensional numerical simulations confirmed the initiation of the completely wetted Wenzel state in the textured microchannels. Furthermore, laser confocal microscopy was employed to identify the location of the liquid-gas interface in the Cassie-Baxter state. In conclusion, the present method can overcome the limitations posed by existing techniques and conveniently capture wetting transition in textured microchannels.
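
To make the demarcation quantity concrete, the sketch below converts a measured pressure drop into a Poiseuille number using the standard Darcy friction factor and Reynolds number definitions; the channel dimensions and fluid properties are illustrative values, not the experimental ones.

```python
# Convert a measured pressure drop into a Poiseuille number Po = f * Re.
# Illustrative values only; not the experimental geometry or fluid.
rho = 998.0       # water density, kg/m^3
mu = 1.0e-3       # dynamic viscosity, Pa.s
L = 20e-3         # channel length, m
d_h = 200e-6      # hydraulic diameter, m
u = 0.1           # mean velocity, m/s
dp = 4.0e3        # measured pressure drop, Pa

f = dp * d_h / (L * 0.5 * rho * u**2)    # Darcy friction factor
Re = rho * u * d_h / mu                  # Reynolds number
Po = f * Re                              # Poiseuille number
print(f"f = {f:.3f}, Re = {Re:.1f}, Po = {Po:.1f}")
# For a smooth circular duct in laminar flow, Po = 64 (Darcy convention);
# inflections of Po versus Re over a flow-rate sweep mark wetting transitions.
```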

Keywords: drag reduction, Poiseuille number, textured surfaces, wetting transition

Procedia PDF Downloads 161
523 Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids

Authors: Ayalew Yimam Ali

Abstract:

A T-shaped microchannel can be used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the T-junction can be difficult because of micro-scale laminar flow, particularly for two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The 3D trapezoidal triangular structure developed in this study was fabricated with advanced CNC machining: molds with a 3D trapezoidal triangular sharp-edge structure (tip angle of 30°, tip depth of 0.3 mm) along the longitudinal mixing region of the junction were cut from PMMA (polymethylmethacrylate), and the channel was then manufactured in PDMS (polydimethylsiloxane) using soft-lithography fabrication techniques. Micro-particle image velocimetry (μPIV) was used to visualize the 3D rolling steady acoustic streaming and to study the mixing enhancement of the high-viscosity miscible fluids for different structure lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The streaming velocity fields and vorticity fields show vorticity up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies from the grayscale pixel intensity using MATLAB. Mixing experiments were performed with a fluorescent green dye solution in de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other; the degree of mixing improved greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible high-viscosity fluids, otherwise limited by laminar transport, was enhanced by the formation of a new, three-dimensional, intense steady streaming rolling motion at high volume flow rates around the entrance junction mixing zone.
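
The degree of mixing quoted above is typically computed from image grayscale statistics; the sketch below shows one common intensity-based mixing index, assuming a pre-segmented grayscale image of the channel cross-section (the data and normalization details are placeholders, not the study's MATLAB pipeline).

```python
import numpy as np

def mixing_index(intensity, unmixed_std):
    """One common intensity-based degree of mixing, in percent.

    intensity: array of grayscale pixel values sampled across the
    channel; unmixed_std: standard deviation of the fully segregated
    (unmixed) reference image.
    """
    sigma = intensity.std()
    return 100.0 * (1.0 - sigma / unmixed_std)

# Hypothetical example: a nearly uniform image scores close to 100%.
rng = np.random.default_rng(1)
unmixed = np.concatenate([np.zeros(500), np.ones(500)])  # segregated halves
mixed = rng.normal(0.5, 0.02, size=1000)                 # nearly uniform
print(mixing_index(mixed, unmixed.std()))                # ~96
```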

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 20
522 Knowledge about Dementia: Why Should Family Caregivers Know that Dementia is a Terminal Disease?

Authors: Elzbieta Sikorska-Simmons

Abstract:

Dementia is a progressive terminal disease. Despite this recognition, research shows that most family caregivers do not know it, and it is unclear how this knowledge affects the quality of patient care. The aim of this qualitative study of 20 family caregivers for patients with advanced dementia is to examine how caregivers' knowledge about dementia affects the quality of patient care in the context of healthcare decision-making, advance care planning, and access to adequate support systems. Knowledge about dementia implies family caregivers' understanding of dementia trajectories, common symptoms/complications, and alternative treatment options (e.g., comfort feeding versus tube feeding). Data were collected in semi-structured interviews with the 20 family caregivers, who acted as patient healthcare proxies. The interviews were conducted in person by the author and designed to elicit rich descriptions of family caregivers' experiences with healthcare decision-making and the management of common symptoms/complications of end-stage dementia. The study findings suggest that caregivers who recognize that dementia is a terminal disease are less likely to opt for life-extending treatments during the advanced stages. They are also more likely to seek palliative/hospice care, and consequently they are better able to avoid unnecessary hospitalizations or medical procedures. For example, those who know that dementia is a terminal disease tend to opt for "comfort feeding" rather than "tube feeding" in managing the swallowing difficulties that accompany advanced dementia. In the context of advance care planning, family caregivers who know that dementia is a terminal disease tend to have more meaningful advance directives (e.g., Power of Attorney and Do Not Resuscitate orders). They are better prepared to anticipate common problems and to pursue treatments that foster the best quality of patient life and care. Greater knowledge about advanced dementia helps them make more informed decisions that focus on enhancing the quality of patient life rather than just survival. In addition, those who know that dementia is a terminal disease are more likely to establish adequate support systems to help them cope with the complex demands of caregiving; for example, they are more likely to seek dementia-oriented primary care programs that offer house visits or respite services. Based on the study findings, knowledge about dementia as a terminal disease is critical to the optimal management of patient care needs and the establishment of adequate support systems. More research is needed to better understand what caregivers need to know to better prepare them for the complex demands of dementia caregiving.

Keywords: dementia education, family caregiver, management of dementia, quality of care

Procedia PDF Downloads 100
521 Extraction of Scandium (Sc) from an Ore with Functionalized Nanoporous Silicon Adsorbent

Authors: Arezoo Rahmani, Rinez Thapa, Juha-Matti Aalto, Petri Turhanen, Jouko Vepsalainen, Vesa-Pekka Lehto, Joakim Riikonen

Abstract:

Production of scandium (Sc) is complicated because Sc occurs only at very low concentrations in ores compared with other metals. Therefore, typical extraction processes such as solvent extraction are problematic for scandium. An adsorption/desorption method can be used instead, but it is challenging to prepare materials that combine good selectivity, high adsorption capacity, and high stability. Efficient and environmentally friendly methods for Sc extraction are therefore needed. In this study, a nanoporous composite material was developed for extracting Sc from an Sc ore. The nanoporous composite offers several advantageous properties, such as a large surface area, high chemical and mechanical stability, fast diffusion of metals in the material, and the possibility to construct a filter out of the material with good flow-through properties. The nanoporous silicon material was produced by first stabilizing the surfaces with a silicon carbide layer and then functionalizing the surface with bisphosphonates that act as metal chelators. The surface area and porosity of the material were characterized by N₂ adsorption, and the morphology was studied by scanning electron microscopy (SEM). The bisphosphonate content of the material was studied by thermogravimetric analysis (TGA). The concentration of metal ions in the adsorption/desorption experiments was measured with inductively coupled plasma mass spectrometry (ICP-MS). The maximum capacity of the material, obtained from the adsorption isotherm, was 25 µmol/g Sc at pH 1 and 45 µmol/g Sc at pH 3. The selectivity of the material towards Sc in artificial solutions containing several metal ions was studied at pH 1 and pH 3; the results show good selectivity of the nanoporous composite towards adsorption of Sc. Scandium was adsorbed less efficiently from the solution leached from the Sc ore because excessive amounts of iron (Fe), aluminum (Al) and titanium (Ti) disturbed the adsorption process. For example, the concentration of Fe was more than 4500 ppm, while the concentration of Sc was only 3 ppm, approximately 1500 times lower. Precipitation methods were developed to lower the concentrations of the metals other than Sc; the optimal pH for precipitation was found to be pH 4. The concentrations of Fe, Al and Ti were decreased by 99%, 70% and 99.6%, respectively, while the concentration of Sc decreased by only 22%. Despite the large reduction in the concentration of other metals, more work is needed to further increase the relative concentration of Sc so that it can be efficiently extracted using the developed nanoporous composite material. Nevertheless, the developed material may provide an affordable, efficient and environmentally friendly method to extract Sc on a large scale.
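
Maximum capacities like those quoted above are commonly obtained by fitting a Langmuir model to the measured isotherm; the sketch below fits such a model with SciPy, using made-up equilibrium data points rather than the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """Langmuir isotherm: adsorbed amount q as a function of equilibrium
    concentration c; q_max is the maximum adsorption capacity."""
    return q_max * k * c / (1.0 + k * c)

# Hypothetical equilibrium data (c in mg/L, q in umol/g), not the paper's.
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q = np.array([6.0, 10.5, 15.8, 21.0, 23.2, 24.4])

(q_max, k), _ = curve_fit(langmuir, c, q, p0=(25.0, 1.0))
print(f"fitted q_max = {q_max:.1f} umol/g, K = {k:.2f} L/mg")
```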

Keywords: adsorption, nanoporous silicon, ore solution, scandium

Procedia PDF Downloads 146
520 Individual Cylinder Ignition Advance Control Algorithms of the Aircraft Piston Engine

Authors: G. Barański, P. Kacejko, M. Wendeker

Abstract:

The impact of ignition advance control algorithms on the combustion process of the ASz-62IR-16X aircraft piston engine is presented in this paper. This aircraft engine is a nine-cylinder, 1000 hp radial engine with a special electronic ignition control system. The engine has two spark plugs per cylinder, with an ignition advance angle dependent on load and the rotational speed of the crankshaft; accordingly, in most cases these angles are not optimal for the power generated. The scope of this paper is focused on developing algorithms to control the ignition advance angle in the engine's electronic ignition control system. For this type of engine, i.e. a radial engine, the ignition advance angle should be controlled independently for each cylinder because of the design of the engine and its crankshaft system. The ignition advance angle is controlled in an open-loop way, meaning that the control signal (the ignition advance angle) is determined from previously developed maps, i.e. recorded tables of the correlation between the ignition advance angle and engine speed and load. Load can be measured by engine crankshaft speed or intake manifold pressure. Due to the limited memory of the controller, the impact of other independent variables (such as cylinder head temperature or knock) on the ignition advance angle is given as a series of one-dimensional arrays known as corrective characteristics. The specified ignition advance angle combines the value calculated from the primary characteristics with several correction factors calculated from the corrective characteristics. Individual cylinder control can proceed in line with indicators determined from the pressure registered in the combustion chamber. Control is assumed to be based on the following indicators: maximum pressure, maximum pressure angle, and indicated mean effective pressure; additionally, a knocking combustion indicator was defined. Individual control can be applied to a single set of spark plugs only, which results from two fundamental ideas behind the design of the control system: the two ignition control systems operate independently and simultaneously. It is assumed that the entire individual control is performed for the front spark plug only, and the rear spark plug is controlled with a fixed (or specified) offset relative to the front one or from a reference map. The developed algorithms will be verified by simulation and engine test stand experiments. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
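
A minimal sketch of the map-plus-corrections scheme described above: a primary 2D map indexed by engine speed and manifold pressure gives a base advance angle, and 1D corrective characteristics add offsets. All breakpoints and table values are illustrative placeholders, not the engine's calibration data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Primary map: base ignition advance angle [deg BTDC] vs. speed and load.
speed = np.array([1200.0, 1600.0, 2000.0, 2400.0])     # rpm
map_p = np.array([40.0, 60.0, 80.0, 100.0])            # intake pressure, kPa
base_map = np.array([[28, 26, 24, 22],
                     [30, 28, 26, 24],
                     [32, 30, 28, 25],
                     [33, 31, 28, 26]], dtype=float)
base = RegularGridInterpolator((speed, map_p), base_map)

# Corrective characteristic: 1D array of offsets vs. cylinder head temp.
cht = np.array([100.0, 150.0, 200.0, 250.0])           # deg C
cht_corr = np.array([0.0, -0.5, -1.5, -3.0])           # deg of advance

def ignition_advance(rpm, p_kpa, head_temp_c, knock_corr=0.0):
    """Open-loop advance: primary map value plus corrections."""
    angle = float(base([[rpm, p_kpa]])[0])
    angle += np.interp(head_temp_c, cht, cht_corr)     # temperature correction
    angle += knock_corr                                # e.g. retard on knock
    return angle

print(ignition_advance(1800.0, 70.0, 220.0, knock_corr=-2.0))
```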

Keywords: algorithm, combustion process, radial engine, spark plug

Procedia PDF Downloads 293
519 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays

Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir

Abstract:

Due to their various advantages over many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced performance of the MN, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, in this work, the optimized design of dissolvable MNAs, as well as their manufacturing, is investigated. For this purpose, a mechanical model of a single MN, having the geometry of an obelisk, is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to achieve painless and effortless penetration into the skin while minimizing mechanical failure caused by the maximum stress occurring in the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly for its geometric capability, production speed, production cost, and the variety of materials that can be used. The master molds are then cleaned ultrasonically to remove any chip residues. These fabricated master molds can be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo. To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and that the fabricated MNAs effectively pierce the skin without failure.
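
The sketch below sets up a two-objective NSGA-II search over needle width, tip angle, and fillet radius using the pymoo library; the closed-form objective expressions are placeholder surrogates standing in for the finite element evaluations used in the study, and the variable bounds are illustrative.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class MicroneedleProblem(ElementwiseProblem):
    """Variables: width [um], tip angle [deg], base fillet radius [um].
    Objectives: insertion force and peak stress (placeholder surrogates
    standing in for the finite element model)."""
    def __init__(self):
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([100.0, 10.0, 5.0]),
                         xu=np.array([400.0, 40.0, 50.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        width, tip_angle, fillet = x
        insertion_force = 0.01 * width * np.tan(np.radians(tip_angle) / 2)
        peak_stress = 100.0 / (width * (1.0 + 0.05 * fillet))
        out["F"] = [insertion_force, peak_stress]

res = minimize(MicroneedleProblem(), NSGA2(pop_size=50),
               ("n_gen", 100), seed=1, verbose=False)
print(res.F[:5])  # a few Pareto-optimal (force, stress) trade-offs
```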

Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis

Procedia PDF Downloads 113
518 Upgrading of Bio-Oil by Bio-Pd Catalyst

Authors: Sam Derakhshan Deilami, Iain N. Kings, Lynne E. Macaskie, Brajendra K. Sharma, Anthony V. Bridgwater, Joseph Wood

Abstract:

This paper reports the application of a bacteria-supported palladium catalyst to the hydrodeoxygenation (HDO) of pyrolysis bio-oil, towards producing an upgraded transport fuel. Biofuels are key to the timely replacement of fossil fuels in order to mitigate greenhouse gas emissions and the depletion of non-renewable resources. The process is an essential step in the upgrading of bio-oils derived from industrial by-products such as agricultural and forestry wastes: the crude oil from pyrolysis contains a large amount of oxygen that must be removed in order to create a fuel resembling fossil-derived hydrocarbons. Manufacture of the bacteria-supported catalyst is a means of utilizing recycled metals and second-life bacteria, and the metal can also be easily recovered from the spent catalysts after use. Comparisons are made between bio-Pd and a conventional activated-carbon-supported Pd/C catalyst. Bio-oil was produced by fast pyrolysis of beechwood at 500 °C with a residence time below 2 seconds, provided by Aston University. 5 wt% bio-Pd/C was prepared under reducing conditions by exposing cells of E. coli MC4100 to a solution of sodium tetrachloropalladate (Na₂PdCl₄), followed by rinsing, drying and grinding to form a powder. Pd/C was procured from Sigma-Aldrich. The HDO experiments were carried out in a 100 mL Parr batch autoclave using ~20 g of bio-crude oil and 0.6 g of bio-Pd/C catalyst. Experimental variables investigated for optimization included temperature (160-350 °C) and reaction time (up to 5 h) at a hydrogen pressure of 100 bar. Most of the experiments resulted in an aqueous phase (~40%) and an organic phase (~50-60%), as well as a gas phase (<5%) and coke (<2%). The study of temperature and time showed that the degree of deoxygenation increased (from ~20% up to 60%) at higher temperatures, in the region of 350 °C, and longer reaction times, up to 5 h. However, the minimum viscosity (~0.035 Pa·s) occurred at 250 °C and 3 h reaction time, indicating that some polymerization of the oil product occurs at the higher temperatures. Bio-Pd showed a similar degree of deoxygenation (~20%) to Pd/C at the lower temperature of 160 °C, but its activity did not rise as steeply with temperature. More coke was formed over bio-Pd/C than Pd/C at temperatures above 250 °C, suggesting that bio-Pd/C may be more susceptible to coke formation than Pd/C. Reactions occurring during bio-oil upgrading include catalytic cracking, decarbonylation, decarboxylation, hydrocracking, hydrodeoxygenation and hydrogenation. In conclusion, it was shown that bio-Pd/C displays an acceptable rate of HDO, which increases with reaction time and temperature. However, some undesirable reactions also occur, leading to a deleterious increase in viscosity at higher temperatures. Comparisons are also drawn with earlier work on the HDO of Chlorella-derived bio-oil manufactured from micro-algae via hydrothermal liquefaction. Future work will analyze the kinetics of the reaction and investigate the effect of bi-metallic catalysts.
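
For reference, the degree of deoxygenation tracked above is commonly computed from the oxygen content of the feed and upgraded oils; a standard formulation (assumed here, as the abstract does not state the authors' exact basis):

```latex
\mathrm{DOD} \;=\; \left(1 - \frac{w_{\mathrm{O,\,product}}}{w_{\mathrm{O,\,feed}}}\right) \times 100\%
```

where $w_{\mathrm{O}}$ denotes the oxygen mass fraction obtained from elemental analysis.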

Keywords: bio-oil, catalyst, palladium, upgrading

Procedia PDF Downloads 175
517 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation

Authors: Onur Ozveri, Korkut Karabag, Cagri Keles

Abstract:

Production downtime and repair costs incurred when machines fail are an important issue in machine-intensive production industries. When more than one machine fails at the same time, the key questions are which machines should have priority for repair, how to determine the optimal repair time to allot to these machines, and how to plan the resources needed for the repairs. In recent years, the Business Process Management (BPM) technique has brought effective solutions to different problems in business. The main feature of this technique is that it can improve the way a job is done by examining the work of interest in detail. In industry, maintenance and repair work operates as a process, and when a breakdown occurs, the repair work is carried out as a series of processes. The maintenance main process and the repair sub-process were evaluated with the process management technique, in the expectation that this structure could provide a solution. For this reason, the issue was discussed in an international manufacturing company and a proposal for a solution was developed. The purpose of this study is the implementation of maintenance and repair work integrated with the process management technique and, at the end of the implementation, the analysis of maintenance-related parameters such as quality, cost, time, safety and spare parts. The international firm that carried out the application operates in a free zone in Turkey, and its core business is producing original equipment technologies, vehicle electrical construction, electronics, safety and thermal systems for the world's leading light and heavy vehicle manufacturers. First, a project team was established in the firm. The team examined the current maintenance process and revised it using process management techniques. The repair process, a sub-process of the maintenance process, was likewise re-examined. In the improved processes, the ABC equipment classification technique was used to decide which machine or machines would be given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance costs and job safety. The improved maintenance and repair processes were implemented in the company for three months, and the data obtained were compared with the previous year's data. In conclusion, breakdown maintenance was found to be completed in a shorter time, at lower cost and with a lower spare parts inventory.
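
A minimal sketch of an ABC-style prioritization: each machine receives a weighted criticality score over the criteria named above, and machines are split into A/B/C classes by cumulative share of the total score. The weights, scores, and 70%/90% cut-offs are illustrative assumptions, not the company's actual scheme.

```python
# ABC-style equipment prioritization (illustrative weights and scores).
machines = {
    # name: (production effect, quality effect, maint. cost, safety), 1-10
    "press_01": (9, 7, 8, 9),
    "lathe_03": (6, 5, 4, 3),
    "oven_02":  (8, 9, 6, 7),
    "conveyor": (3, 2, 2, 2),
    "robot_05": (7, 6, 9, 5),
}
weights = (0.4, 0.2, 0.2, 0.2)  # assumed relative importance of criteria

scores = {m: sum(w * s for w, s in zip(weights, vals))
          for m, vals in machines.items()}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

total, running = sum(scores.values()), 0.0
for name, score in ranked:
    running += score
    share = running / total          # cumulative share of total criticality
    cls = "A" if share <= 0.7 else ("B" if share <= 0.9 else "C")
    print(f"{name}: score {score:.1f}, class {cls}")
```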

Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance

Procedia PDF Downloads 194
516 Reaching New Levels: Using Systems Thinking to Analyse a Major Incident Investigation

Authors: Matthew J. I. Woolley, Gemma J. M. Read, Paul M. Salmon, Natassia Goode

Abstract:

The significance of high-consequence workplace failures within construction continues to resonate, with a combined average of 12 fatal incidents occurring daily throughout Australia, the United Kingdom, and the United States. Within the Australian construction domain, more than 35 serious, compensable injury incidents are reported daily. These alarming figures, in conjunction with the continued occurrence of fatal and serious occupational injury incidents globally, suggest that existing approaches to incident analysis may not be achieving the required injury prevention outcomes. One reason may be that incident analysis methods used in construction have not kept pace with advances in the field of safety science and are not uncovering the full range of system-wide contributory factors required to achieve optimal levels of construction safety performance. Another reason underpinning this global issue may be the absence of information surrounding the construction operating and project delivery system; for example, it is not clear who shares the responsibility for construction safety in different contexts. To respond to this issue, a control structure model of the construction industry, to the authors' best knowledge the first of its kind, is presented and then used to analyse a fatal construction incident. The model was developed by applying and extending the Systems-Theoretic Accident Model and Processes (STAMP) method to hierarchically represent the actors, constraints, feedback mechanisms, and relationships involved in managing construction safety performance. The Causal Analysis based on Systems Theory (CAST) method was then used to identify the control and feedback failures involved in the fatal incident. The conclusions from the coronial investigation into the event are compared with the findings stemming from the CAST analysis. The CAST analysis highlighted additional issues across the construction system that were not identified in the coroner's recommendations, suggesting a potential benefit in applying a systems theory approach to incident analysis in construction. The findings demonstrate the utility of applying systems theory-based methods to the analysis of construction incidents. Specifically, this study shows the value of the construction control structure and the potential benefits for project leaders, construction entities, regulators, and construction clients in controlling construction performance.

Keywords: construction project management, construction performance, incident analysis, systems thinking

Procedia PDF Downloads 128
515 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans

Authors: Jian Wu

Abstract:

For some years, certain French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to 'restore a good working atmosphere' and 'preserve the natural environment'. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French acronym of the Observatory on CSR) and PricewaterhouseCoopers. Given this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach in our study. First, we examine the debate between the 'orthodox' point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain 'invisible hand' which helps to resolve all problems: when companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility firms should have is profit-seeking while respecting the minimum legal requirements. The CSR school, however, considers that as long as the economic system is not perfect, there is no 'invisible hand' that can arrange everything in good order. This means that we cannot count on any 'divine force' to make corporations responsible towards society; something more needs to be done in addition to firms' economic and legal obligations. We then rely on financial theories and empirical evidence to examine the soundness of the foundations of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit 'social contracts' between a company and society itself; a firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to 'legitimize' their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works realized in the field. Finally, we are curious to know whether the integration of CSR criteria in variable remuneration plans (a practice so far confined to big companies) should be extended to other ones. After investigation, we note that two groups of firms have the greatest need: the first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies; the second involves companies which are under pressure in terms of returns in order to deal with international competition.

Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory

Procedia PDF Downloads 186
514 Communication of Expected Survival Time to Cancer Patients: How It Is Done and How It Should Be Done

Authors: Geir Kirkebøen

Abstract:

Most patients with serious diagnoses want to know their prognosis, in particular their expected survival time. As part of the informed consent process, physicians are legally obligated to communicate such information to patients. However, there is no established (evidence-based) 'best practice' for how to do this. The two questions explored in this study are: How do physicians communicate expected survival time to patients, and how should it be done? We explored the first, descriptive question in a study with Norwegian oncologists as participants. The study had a scenario part and a survey part. In the scenario part, the doctors were to imagine that a patient, recently diagnosed with a serious cancer diagnosis, had asked them: 'How long can I expect to live with such a diagnosis? I want an honest answer from you!' The doctors were to assume that the diagnosis was certain and that, from an extensive recent study, they had optimal statistical knowledge, described in detail as a right-skewed survival curve, about how long patients with this kind of diagnosis could be expected to live. The main finding was that very few of the oncologists would explain to the patient the variation in survival time as described by the survival curve. The majority would not give the patient an answer at all. Of those who gave an answer, the typical answer was that survival time varies a lot, that it is hard to say in a specific case, that we will come back to it later, etc. The survey part of the study clearly indicates that the main reason why the oncologists would not deliver the mortality prognosis was discomfort with its uncertainty. The scenario part of the study confirmed this finding: the majority of the oncologists explicitly used the uncertainty, the variation in survival time, as a reason not to give the patient an answer. Many studies show that patients want realistic information about their mortality prognosis and that they should be given hope. The question then is how to communicate the uncertainty of the prognosis in a realistic and optimistic, hopeful, way. Based on psychological research, our hypothesis is that the best way to do this is by explicitly describing the variation in survival time, the (usually) right-skewed survival curve of the prognosis, and emphasizing to the patient the (small) possibility of being a 'lucky outlier'. We tested this hypothesis in two scenario studies with lay people as participants. The data clearly show that people prefer to receive expected survival time as a median value together with explicit information about the survival curve's right-skewness (e.g., concrete examples of 'positive outliers'), and that communicating expected survival time this way not only provides people with hope but also gives them a more realistic understanding compared with the typical way expected survival time is communicated. Our data indicate that it is not the existence of the uncertainty regarding the mortality prognosis that is the problem for patients, but how this uncertainty is, or is not, communicated and explained.

Keywords: cancer patients, decision psychology, doctor-patient communication, mortality prognosis

Procedia PDF Downloads 329
513 Molecular Characterization of Two Thermoplastic Biopolymer-Degrading Fungi Utilizing rRNA-Based Technology

Authors: Nuha Mansour Alhazmi, Magda Mohamed Aly, Fardus M. Bokhari, Ahmed Bahieldin, Sherif Edris

Abstract:

Out of 30 fungal isolates, 2 new isolates were proven to degrade poly-β-hydroxybutyrate (PHB). Enzyme assays for these isolates indicated the optimal environmental conditions required for the depolymerase enzyme to induce the highest level of biopolymer degradation. The two isolates were characterized at the morphological level as Trichoderma asperellum (isolate S1) and Aspergillus fumigatus (isolate S2) using standard approaches. The aim of the present study was to characterize these two isolates at the molecular level based on the highly diverged rRNA gene(s). Within the rRNA gene cluster, two regions, namely the internal transcribed spacer (ITS) and the 26S domain of the ribosomal large subunit (LSU), were utilized in the analysis. The first domain comprises the ITS1/5.8S/ITS2 regions (>500 bp), while the second domain comprises the D1/D2/D3 regions (>1200 bp). Sanger sequencing was conducted at Macrogen (Inc.) for the two isolates using primers ITS1/ITS4 for the first domain and primers LROR/LR7 for the second domain. Sizes of the first domain ranged between 594 and 602 bp for isolate S1 and between 581 and 594 bp for isolate S2, while those of the second domain ranged between 1228 and 1238 bp for isolate S1 and between 1156 and 1291 bp for isolate S2. BLAST analysis indicated 99% identity of the first domain of isolate S1 with T. asperellum isolates XP22 (ID: KX664456.1), CTCCSJ-G-HB40564 (ID: KY750349.1), CTCCSJ-F-ZY40590 (ID: KY750362.1) and TV (ID: KU341015.1). BLAST of the first domain of isolate S2 indicated 100% identity with A. fumigatus isolate YNCA0338 (ID: KP068684.1) and strain MEF-Cr-6 (ID: KU597198.1), and 99% identity with A. fumigatus isolate CCA101 (ID: KT877346.1) and strain CD1621 (ID: JX092088.1). Large numbers of other T. asperellum and A. fumigatus isolates and strains showed high levels of identity with isolates S1 and S2, respectively, based on the diversity of the first domain. BLAST of the second domain of isolate S1 indicated 99% and 100% identity with only two strains of T. asperellum, namely TR 3 (ID: HM466685.1) and G (ID: KF723005.1), respectively; however, other Trichoderma species (e.g., atroviride, hamatum, deliquescens, harzianum) also showed high levels of identity. BLAST of the second domain of isolate S2 indicated 100% identity with A. fumigatus isolate YNCA0338 (ID: KP068684.1) and strain MEF-Cr-6 (ID: KU597198.1), and 99% identity with A. fumigatus isolate CCA101 (ID: KT877346.1) and strain CD1621 (ID: JX092088.1). Large numbers of other A. fumigatus isolates and strains showed high levels of identity with isolate S2. Overall, the results of molecular characterization based on rRNA diversity for the two isolates of T. asperellum and A. fumigatus matched those obtained by morphological characterization. In addition, the ITS domain proved to be more sensitive than the 26S domain in diversity profiling of fungi at the species level.
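
A minimal sketch of the kind of sequence-identification step described above, using Biopython's NCBI BLAST interface; the sequence string is a placeholder rather than one of the Sanger reads, and the qblast call requires network access to NCBI.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder ITS sequence fragment; substitute the actual Sanger read.
its_seq = "TCCGTAGGTGAACCTGCGGAAGGATCATTACCGAGTTTACAACTCCCAAACCCAATGTGA"

# Submit a nucleotide BLAST search against the NCBI nt database
# (network call; can take a minute or more).
handle = NCBIWWW.qblast("blastn", "nt", its_seq)
record = NCBIXML.read(handle)

for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}... {identity:.1f}% identity")
```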

Keywords: Aspergillus fumigatus, Trichoderma asperellum, PHB, degradation, BLAST, ITS, 26S, rRNA

Procedia PDF Downloads 159
512 Leadership Lessons from Female Executives in the South African Oil Industry

Authors: Anthea Carol Nefdt

Abstract:

In this article, observations are drawn from a number of interviews conducted with female executives in the South African oil industry in 2017. Globally, the oil industry represents one of the most male-dominated organisational structures and cultures in the business world. Some of the remarkable women who hold upper management positions have not only emerged from the science and finance spheres (equally gendered fields) but have also navigated their way through an aggressive, patriarchal atmosphere of rivalry and competition. We examine various mythologies associated with the industry, such as the cowboy myth, the frontier ideology and the queen bee syndrome directed at female executives. One of the themes to emerge from the interviews was the almost unanimous rejection of the 'glass ceiling' metaphor favoured by some feminists. The women of the oil industry instead affirmed a picture of their rise to leadership positions through a strategic labyrinth of challenges and obstacles in terms of both gender and race. This article aims to share the insights of women leaders in a complex industry through both their reflections and a theoretical feminist lens. The study is located within the South African context and, given our historical legacy, it was optimal to use an intersectional approach that would allow issues of race, gender, ethnicity and language to emerge. A qualitative research methodology was employed, with a thematic interpretative analysis used to analyse and interpret the data. This methodology was used precisely because it encourages and acknowledges the experiences women have and places these experiences at the centre of the research. Multiple methods of recruiting the research participants were utilised: the initial method was snowball sampling, and the second was purposive sampling. In addition, semi-structured interviews gave the participants an opportunity to ask questions, add information and have discussions on issues or aspects of the research area of interest to them. One of the key objectives of the study was to investigate whether there is a difference in the leadership styles of men and women. Findings show that, despite the wealth of literature on the topic, some women, to the contrary, do not perceive a significant difference in men's and women's leadership styles. Other respondents, however, felt that there were some important differences in the experiences of men and women superiors, although they hesitated to generalise from these experiences. Further findings suggest that although the oil industry presents unique challenges to women as a gendered organisation, it also incorporates various progressive initiatives for their advancement.

Keywords: petroleum industry, gender, feminism, leadership

Procedia PDF Downloads 157
511 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent in the tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream real data supplied for optimizing a transporter's tours are free from errors regarding, for example, customers' real constraints, customers' addresses and their GPS coordinates. However, in real transport operations, upstream data are often of poor quality because of address geocoding errors and irrelevant addresses received through EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates. Also, even with good geocoding, an inaccurate address leads to a bad geocode: for instance, when a geocoder has trouble geocoding an address, it may return the coordinates of the city centre. Another obvious geocoding issue is that the maps used by geocoders are not regularly updated, so new buildings may not exist on the map until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is therefore not really useful and will lead to bad and incoherent solution tours, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software vendor, Tedies, and a transport company, Upsilon. We work with Upsilon's truck route data to carry out our experiments: these trucks are equipped with TomTom GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.), and we retrieve these data to extract the real truck routes to work with. The aim of this work is to use the driver's experience and the feedback of the real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct badly geocoded ones. Thereby, when a vehicle makes its tour, it might have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: we are able to correct automatically an average of 70% of the GPS coordinates of a tour's addresses. The remaining GPS coordinates are corrected manually, with the user given indications to help correct them. This study shows the importance of taking the trucks' feedback into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization. Unfortunately, address writing errors are very frequent. This feedback is naturally and usually taken into account by transporters (by asking drivers, calling customers…) to learn about their tours and bring corrections to upcoming tours. Hence, we developed a method to do a large part of that automatically.
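
A minimal sketch of the correction idea: compare each geocoded customer position against the position where the truck actually stopped for that customer, and overwrite the geocode when the two disagree by more than a threshold. The threshold and data layout are illustrative assumptions, not the authors' implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

THRESHOLD_M = 150.0  # assumed tolerance between geocode and real stop

def correct_geocode(geocoded, observed_stop):
    """Return the validated or corrected (lat, lon) for a customer."""
    d = haversine_m(*geocoded, *observed_stop)
    if d > THRESHOLD_M:
        return observed_stop, d   # replace geocode with the truck's stop
    return geocoded, d            # geocode confirmed by the tour

fixed, dist = correct_geocode((47.3220, 5.0415), (47.3190, 5.0502))
print(fixed, f"{dist:.0f} m apart")
```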

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 674
510 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India

Authors: Vinu Elias Jacob, Manoj Kumar Kini

Abstract:

Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. Better use of the concepts of governance and management within disaster risk reduction is therefore of utmost importance. There is a need to explore the role of pre- and post-disaster public policies, to examine the role of urban planning and design in shaping the opportunities of households, individuals and, collectively, settlements for achieving recovery, and to examine governance strategies that can better support the integration of disaster risk reduction and management. The main aim is thereby to build the resilience of individuals and communities and, thus, of the state. Resilience is a term usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The growing population of the world has increased the inflow and use of resources, creating pressure on various natural systems and inequity in the distribution of resources. This makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in the discussed direction. Cochin, in Kerala, is the state's fastest-growing and largest city, with a population of more than 26 lakh (2.6 million). The main concern addressed in this paper is making cities resilient by designing a framework of strategies, based on urban design principles, for an immediate response system, focusing especially on the city of Cochin, Kerala, India. The paper discusses understanding the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters. It also aims to develop a model, taking into consideration various factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density and ecology, that can be implemented in any city in any context. Guidelines are drawn up for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc., using the tool of urban design. Strategies at the city and neighbourhood levels have been developed from the inferences of a vulnerability analysis and case studies.

Keywords: disaster management, resilience, spatial planning, spatial transformations

Procedia PDF Downloads 296
509 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different public-sector responses to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic good, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport science, but until recently there were not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular zone in their downtown for payment, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown that is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated traffic-dependent travel times between freely chosen coordinate pairs for a future departure time. From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time and place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
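
A minimal sketch of the data collection step described above, querying the Distance Matrix web service for free-flow versus traffic-adjusted travel times between one coordinate pair; the coordinates are illustrative points and YOUR_API_KEY is a placeholder.

```python
import requests

URL = "https://maps.googleapis.com/maps/api/distancematrix/json"
params = {
    "origins": "47.5636,19.0947",         # illustrative point, 13th district
    "destinations": "47.4979,19.0402",    # illustrative point, 5th district
    "departure_time": "now",              # enables duration_in_traffic
    "mode": "driving",
    "key": "YOUR_API_KEY",                # placeholder
}
element = requests.get(URL, params=params).json()["rows"][0]["elements"][0]

free_flow = element["duration"]["value"]              # seconds
congested = element["duration_in_traffic"]["value"]   # seconds
delay_ratio = congested / free_flow                   # >1 means congestion
print(f"free flow {free_flow}s, in traffic {congested}s, "
      f"delay ratio {delay_ratio:.2f}")
```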

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 195
508 Transition Dynamic Analysis of the Urban Disparity in Iran “Case Study: Iran Provinces Center”

Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani

Abstract:

The usual methods of measuring regional inequalities cannot reflect internal changes within a country in terms of the displacement of regions between different development groups, and inequality indicators are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the transition dynamics of urban disparity in Iran during the period 2006-2016 using the CIRD multidimensional index and a stochastic kernel density method. It first selects 25 indicators in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital and public facilities, and develops a two-stage principal component analysis methodology to create a composite index of inequality. Then, in the second stage, using a nonparametric analytical approach to internal distribution dynamics and a stochastic kernel density method, the convergence hypothesis of the CIRD index of the Iranian provincial centers is tested, and long-run equilibrium is shown based on the ergodic density. At this stage, for the purpose of adopting accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are also examined for each of the five dimensions. According to the results of the first stage, in both 2006 and 2016 the highest level of development is found in Tehran, while Zahedan is at the lowest level. The results show that the central cities of the country are at the highest level of development, owing to the effects of Tehran's knowledge spillover, while the country's peripheral cities are at the lowest level; the main reason for this may be the lack of access to markets in the border provinces. Based on the results of the second stage, which examines the dynamics of regional inequality transmission in the country during 2006-2016, the distribution in the first year (2006) is not multimodal: according to the kernel density graph, the CIRD index of about 70% of the cities lies between -1.1 and -0.1, with the rest of the distribution on the right at levels higher than -0.1. The kernel distribution shows a convergence process and points to a single peak; there is a small peak at about 3, but the main peak lies at about -0.6. In the final year (2016), the lower-level groups show no mobility, but at the higher level about 45% of the provinces have a CIRD index of around -0.4. This year clearly exhibits a twin-peak density pattern, indicating that cities tend to cluster into groups of similar development levels. According to the distribution dynamics results, the provinces of Iran thus follow a single-peak density pattern in 2006 and a double-peak density pattern in 2016, at low and moderate levels of the inequality index and also in the development index, showing that the country diverged over the years 2006 to 2016.
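
A minimal sketch of the kernel density comparison underlying the twin-peak diagnosis, using SciPy's Gaussian KDE on two cross-sections of a composite index; the index values are synthetic placeholders, not the CIRD data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic composite-index cross-sections (placeholders for CIRD values).
index_2006 = rng.normal(-0.6, 0.35, size=31)               # single cluster
index_2016 = np.concatenate([rng.normal(-0.9, 0.2, 17),
                             rng.normal(-0.4, 0.15, 14)])  # two clusters

grid = np.linspace(-2.0, 1.0, 200)
for year, values in (("2006", index_2006), ("2016", index_2016)):
    density = gaussian_kde(values)(grid)
    # Local maxima of the estimated density mark the distribution's peaks.
    interior = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
    peaks = grid[np.r_[False, interior, False]]
    print(year, "density peaks near:", np.round(peaks, 2))
```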

Keywords: urban disparity, CIRD index, convergence, distribution dynamics, stochastic kernel density

Procedia PDF Downloads 124
507 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
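
A minimal sketch of the density experiment follows, assuming synthetic well coordinates and levels in place of the Berambadi data and using the open-source PyKrige implementation of Ordinary Kriging; training subsets of increasing density are evaluated against a held-out set of wells.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # pip install pykrige

rng = np.random.default_rng(1)

# Hypothetical stand-in for the 121 wells: (x, y) in km, z = water level in m
x, y = rng.uniform(0, 10, 121), rng.uniform(0, 10, 121)
z = 5 + 0.4 * x - 0.2 * y + rng.normal(scale=0.5, size=121)

# Hold out 21 wells for validation; vary training density like the study's subsets
test = np.arange(100, 121)
for density in (0.15, 0.50, 1.00):  # 15%, 50%, 100% of the training network
    n = int(100 * density)
    ok = OrdinaryKriging(x[:n], y[:n], z[:n], variogram_model="spherical")
    z_hat, _ = ok.execute("points", x[test], y[test])  # predict at held-out wells
    rmse = np.sqrt(np.mean((z_hat - z[test]) ** 2))
    print(f"{density:>4.0%} of wells -> RMSE {rmse:.3f} m")
```

Under such a setup, the prediction error at the held-out wells typically falls as station density rises, mirroring the trend reported above.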

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 57
506 Assessing the Effectiveness of Warehousing Facility Management: The Case of Mantrac Ghana Limited

Authors: Kuhorfah Emmanuel Mawuli

Abstract:

Generally, for firms to enhance the operational efficiency of their logistics, it is imperative to assess the logistics function. The cost of logistics conventionally represents a key consideration in the pricing decisions of firms, which suggests that cost efficiency in logistics can go a long way towards improving margins. Warehousing, a key part of logistics operations, has the prospect of influencing operational efficiency in logistics management as well as customer value, but this potential has often not been recognized. There is a paucity of research that evaluates the efficiency of warehouses; indeed, limited research has been conducted to examine potential barriers to effective warehousing management. Due to this paucity of research, there is limited knowledge on how to address the obstacles associated with warehousing management. For warehousing management to become profitable, the economic inputs and outputs of the entire warehouse operation need to be integrated, balanced, and managed, something that many firms tend to ignore. Management of warehousing is not solely related to storage functions. Instead, effective warehousing management requires such practices as the greatest possible mechanization and automation of operations, optimal use of the space and capacity of storage facilities, organization through a "continuous flow" of goods, a planned system of storage operations, and safety of goods. For example, measuring the utilization of the warehouse floor space is important, as it is a good way to evaluate the storage operation, alongside the number of items picked per hour. In the setting of Mantrac Ghana, little knowledge regarding the management of the warehouses exists; the researcher has personally observed many gaps in the management of the warehouse facilities in the case organization, Mantrac Ghana. It is important, therefore, to assess the warehouse facility management of the case company with the objective of identifying weaknesses for improvement. The study employs an in-depth qualitative research approach using interviews as the mode of data collection. Respondents in the study mainly comprised warehouse facility managers in the studied company; a total of 10 participants were selected using a purposive sampling strategy. Results emanating from the study demonstrate limited warehousing effectiveness in the case company. Findings further reveal that the major barriers to effective warehousing facility management comprise poor layout, poor picking optimization, labour costs, and inaccurate orders. Policy implications of the study findings are finally outlined.
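
Two of the quantitative practices named above, floor-space utilization and pick rate, reduce to simple ratios; the sketch below illustrates them with hypothetical figures, not data from Mantrac Ghana.

```python
# Hypothetical figures illustrating two common warehouse performance metrics:
# floor-space utilization and items picked per hour.
occupied_area_m2 = 3_400   # area actually covered by racking and stock
usable_area_m2 = 5_000     # total storage surface available
items_picked = 1_860       # picks completed in one shift
shift_hours = 8

space_utilization = occupied_area_m2 / usable_area_m2
pick_rate = items_picked / shift_hours

print(f"space utilization: {space_utilization:.0%}")  # 68%
print(f"pick rate: {pick_rate:.0f} items/hour")       # 232 items/hour
```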

Keywords: assessing, warehousing, facility, management

Procedia PDF Downloads 65
505 Traumatic Brain Injury Induced Lipid Profiling in Mice Serum Using UHPLC-Q-TOF-MS

Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana

Abstract:

Introduction: Traumatic brain injury (TBI) is defined as a temporary or permanent alteration in brain function and pathology caused by an external mechanical force. It represents the leading cause of mortality and morbidity among children and young individuals. Various rodent models of TBI have been developed in the laboratory to mimic the injury scenario. Blast overpressure injury, following accidents or explosive devices, is common among civilians and military personnel. In addition, the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we developed two different mild TBI models using blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 kPa (±5) via a shock tube, and CCI injury was induced at an impact depth of 1.5 mm, to create diffuse and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups, (1) control, (2) blast-treated and (3) CCI-treated, and were exposed to the respective injury models. Serum was collected on day 1 and day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on a UHPLC-Q-TOF-MS using an ESI probe with an in-house optimized method and parameters. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled, and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted profiling of lipids generated extensive data features, which were annotated through LIPID MAPS® based on their m/z and further confirmed based on their fragment patterns by LipidBlast, resulting in a final annotation of 269 features in positive and 182 features in negative ionization mode. PCA and PLS-DA score plots showed clear segregation of the injury groups from controls. Among the various lipids in mild blast and CCI, five lipids (the glycerophospholipids PC 30:2, PE O-33:3, PG 28:3;O3 and PS 36:1 and the fatty acyl FA 21:3;O2) were significantly altered in both injury groups at day 1 and day 7 and also had VIP scores >1. Pathway analysis by BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports; this could be a direct result of alteration in the acetylcholine signaling pathway in response to TBI. Understanding the role of specific classes of lipids in metabolism, regulation and transport could benefit TBI research, since it could provide new targets and help determine the best therapeutic intervention. This study demonstrates potential lipid biomarkers that can be used for injury severity diagnosis and injury identification irrespective of injury type (diffuse or focal).
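
As an illustration of the preprocessing chain, the sketch below applies total-intensity normalization, a log transform, and Pareto scaling to a hypothetical feature table; the sample and feature counts are invented, and the log-before-scaling order follows common metabolomics practice rather than any detail reported above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feature table: 18 serum samples x 451 lipid features (peak areas)
X = rng.lognormal(mean=8, sigma=1, size=(18, 451))

# 1. Row-wise normalization to total ion intensity (one common choice)
X_norm = X / X.sum(axis=1, keepdims=True)

# 2. Log transform (stabilizes the variance of intensity data)
X_log = np.log10(X_norm)

# 3. Pareto scaling: mean-center each feature, divide by the square root
#    of its standard deviation (milder than unit-variance autoscaling)
mean = X_log.mean(axis=0)
std = X_log.std(axis=0, ddof=1)
X_pareto = (X_log - mean) / np.sqrt(std)

print(X_pareto.shape, X_pareto.mean(axis=0)[:3])  # features are now centered
```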

Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI

Procedia PDF Downloads 113
504 Revisiting Historical Illustrations in the Age of Digital Anatomy Education

Authors: Julia Wimmers-Klick

Abstract:

In the contemporary study of anatomy, medical students utilize a diverse array of resources, including lab handouts, lectures, and, increasingly, digital media such as interactive anatomy apps and digital images. Notably, a significant shift has occurred, with fewer students possessing traditional anatomy atlases or books, reflecting a broader trend towards digital approaches like Virtual Reality, Augmented Reality, and web-based programs. This paper seeks to explore the evolution of anatomy education by contrasting current digital tools with historical resources, such as classical anatomical illustrations and atlases, to assess their relevance and potential benefits in modern medical education. Through a comprehensive literature review, the development of anatomical illustrations is traced from the textual descriptions of Galen to the detailed and artistic representations of Da Vinci, Vesalius, and later anatomists. The examination includes how the printing press facilitated the dissemination of anatomical knowledge, transforming covert dissections into public spectacles and formalized teaching practices. Historical illustrations, often influenced by societal, religious, and aesthetic contexts, not only served educational purposes but also reflected the prevailing medical knowledge and ethical standards of their times. Critical questions are raised about the place of historical illustrations in today's anatomy curriculum. Specifically, their potential to teach critical thinking, highlight the history of medicine, and offer unique insights into past societal conditions is explored. These resources are viewed in their historical context, including their lack of diversity and the ethical concerns they raise, such as the use of illustrations from unethical sources like Pernkopf's atlas. In conclusion, while digital tools offer innovative ways to visualize and interact with anatomical structures, historical illustrations provide irreplaceable value in understanding the evolution of medical knowledge and practice. The study advocates for a balanced approach that integrates traditional and modern resources to enrich medical education, promote critical thinking, and provide a comprehensive understanding of anatomy. Future research should investigate the optimal combination of these resources to meet the evolving needs of medical learners and the implications of the digital shift in anatomy education.

Keywords: human anatomy, historical illustrations, historical context, medical education

Procedia PDF Downloads 21
503 Common Caper (Capparis spinosa L.) From Oblivion and Neglect to the Interface of Medicinal Plants

Authors: Ahmad Alsheikh Kaddour

Abstract:

Herbal medicine has been a long-standing practice in Arab countries since ancient times, owing to the region's breadth and moderate climate; it therefore possesses a vast natural and economic wealth of medicinal and aromatic herbs, which prompted the ancient Egyptians and Arabs to discover and exploit them. The economic importance of the plant does not stem from medicinal uses alone: it is a plant of high economic value for its various uses, especially in the food, cosmetic and aromatic industries, and it also serves as an ornamental plant and for soil stabilization. The main objective of this research is to study the chemical changes that occur in the plant during the growth period, as well as the production of plant buds from what were previously considered unwanted plants. The research was carried out in 2021-2022 in the valley of Al-Shaflah (common caper), located in Qumhana village, 7 km north of Hama Governorate, Syria. The results showed a change in the percentage of chemical components in the plant parts: the protein content and the percentage of fatty substances in the fruits and the oil content of the seeds increased up to the harvest period of these plant parts, while the percentage of essential oils decreased as plant growth progressed, and the glycoside content improved with plant age. The buds produced are small, about 0.5 × 0.5 cm, the size preferred in commercial markets; they were harvested every 2-3 days in quantities ranging from 0.4 to 0.5 kg per cut per shrub from shrubs of three years of age, on average, over the years 2021-2022. Monthly production per shrub is between 4 and 5 kg, and the productive period is approximately 4 months. This means that the seasonal production of one plant is 16-20 kg, corresponding to 16-20 tons per hectare per year at a plant density of 1,000 shrubs per hectare, the optimal cultivation density per unit area. Given that a kilogram of these buds is worth the equivalent of 1 US$, the annual output value of a locally cultivated hectare ranges from 16,000 US$ to 20,000 US$ for farmers. The results showed that it is possible to transform the cultivation of this plant from traditional, scattered stands to typical plantations, with a plant density of 1,000-1,100 plants per hectare according to the soil type, in order to produce medicinal and nutritious buds. They also point to the need to pay attention to this national wealth and to invest in it optimally, which can bring in hard currency through export and support the national income.
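
The yield and revenue arithmetic reported above can be checked directly; the short calculation below simply restates the paper's own figures.

```python
# Worked version of the yield arithmetic reported above.
kg_per_month = (4, 5)          # per shrub
productive_months = 4
shrubs_per_hectare = 1_000
price_usd_per_kg = 1.0

season_kg = tuple(m * productive_months for m in kg_per_month)       # 16-20 kg/plant
hectare_t = tuple(k * shrubs_per_hectare / 1000 for k in season_kg)  # 16-20 t/ha/yr
revenue = tuple(k * shrubs_per_hectare * price_usd_per_kg for k in season_kg)

print(f"seasonal yield per shrub: {season_kg} kg")
print(f"annual yield per hectare: {hectare_t} t")
print(f"annual output value per hectare: {revenue} US$")  # 16,000-20,000 US$
```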

Keywords: common caper, medicinal plants, propagation, medical, economic importance

Procedia PDF Downloads 72
502 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Using High-Viscosity Liquids

Authors: Ayalew Yimam Ali

Abstract:

The Y-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the Y-junction microchannel is difficult because of micro-scale laminar flow when the two fluids are miscible, high-viscosity water-glycerol mixtures. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. In this study, molds for a 3D trapezoidal triangular structure spine, with 3D sharp-edge tip angles of 30° and a 0.3 mm trapezoidal triangular sharp-edge tip depth, were cut from PMMA (polymethyl methacrylate) glass using an advanced CNC machine, and the channel was fabricated in PDMS (polydimethylsiloxane) using soft-lithography nanofabrication strategies, with the spine running longitudinally along the top surface of the Y-junction mixing region. Flow visualization of the 3D rolling steady acoustic streaming and of the mixing enhancement of high-viscosity miscible fluids was performed with micro-particle image velocimetry (μPIV) for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The streaming velocity fields and vorticity fields show 16 times higher vorticity than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale value of pixel intensity with MATLAB software. Mixing experiments were performed with a fluorescent green dye solution in de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the Y-channel; the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible high-viscosity fluids, whose transport is otherwise governed by laminar flow, was enhanced by the formation of a new, intense, three-dimensional steady streaming rolling motion at high volume flow rates around the entrance junction mixing zone.
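
The degree of mixing cited above is derived from grayscale pixel intensities; one common definition, used in the sketch below with invented intensity profiles, is one minus the ratio of the intensity standard deviation over the outlet cross-section to that of the unmixed reference.

```python
import numpy as np

def mixing_degree(gray, gray_unmixed):
    """Degree of mixing from grayscale pixel intensities (one common definition):
    1 - sigma/sigma_0, where sigma is the normalized intensity std. dev. over the
    outlet cross-section and sigma_0 the same quantity for the unmixed reference."""
    sigma = np.std(gray / gray.mean())
    sigma0 = np.std(gray_unmixed / gray_unmixed.mean())
    return 1.0 - sigma / sigma0

# Hypothetical intensity profiles across the outlet (dye on one side = unmixed)
unmixed = np.concatenate([np.full(100, 40.0), np.full(100, 220.0)])
partly_mixed = 130 + 0.33 * (unmixed - 130)  # intensities pulled toward the mean

print(f"degree of mixing: {mixing_degree(partly_mixed, unmixed):.1%}")  # ~67%
```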

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 21
501 Effect of Enzymatic Hydrolysis and Ultrasounds Pretreatments on Biogas Production from Corn Cob

Authors: N. Pérez-Rodríguez, D. García-Bernet, A. Torrado-Agrasar, J. M. Cruz, A. B. Moldes, J. M. Domínguez

Abstract:

The world economy is based on non-renewable fossil fuels such as petroleum and natural gas, which entails their rapid depletion and environmental problems. In EU countries, the objective is that at least 20% of total energy supplies in 2020 should be derived from renewable resources. Biogas, a product of the anaerobic degradation of organic substrates, represents an attractive green alternative for meeting partial energy needs. Nowadays, the trend towards a circular economy model involves the efficient use of residues through their transformation from waste into a new resource. In this sense, the characteristics of agricultural residues (which are abundantly available, renewable, and eco-friendly) favour their valorisation as substrates for biogas production. Corn cob is a by-product of maize processing, representing 18% of the total maize mass. Its importance lies in the high production of this cereal (more than 1 × 10⁹ tons in 2014). Due to its lignocellulosic nature, corn cob contains three main polymers: cellulose, hemicellulose and lignin. The crystalline, highly ordered structures of cellulose and lignin hinder microbial attack and subsequent biogas production. For optimal lignocellulose utilization and to enhance gas production in anaerobic digestion, materials are usually submitted to different pretreatment technologies. In the present work, enzymatic hydrolysis, ultrasound and the combination of both technologies were assayed as pretreatments of corn cob for biogas production. Enzymatic hydrolysis pretreatment was started by adding 0.044 U of Ultraflo® L feruloyl esterase per gram of dry corn cob. Hydrolyses were carried out in 50 mM sodium-phosphate buffer, pH 6.0, at a solid:liquid ratio of 1:10 (w/v), at 150 rpm and 40 ºC in darkness for 3 hours. Ultrasound pretreatment was performed by subjecting corn cob, in 50 mM sodium-phosphate buffer, pH 6.0, at a solid:liquid ratio of 1:10 (w/v), to a power of 750 W for 1 minute. In order to observe the effect of the combination of both pretreatments, some samples were first sonicated and then enzymatically hydrolysed. In terms of methane production, anaerobic digestion of the corn cob pretreated by enzymatic hydrolysis was positive, achieving 290 L CH₄ kg MV⁻¹ (compared with 267 L CH₄ kg MV⁻¹ obtained with untreated corn cob). Although the use of ultrasound as the only pretreatment proved detrimental (gas production decreased to 244 L CH₄ kg MV⁻¹ after 44 days of anaerobic digestion), its combination with enzymatic hydrolysis was beneficial, reaching the highest value (300.9 L CH₄ kg MV⁻¹). Consequently, the combination of both pretreatments improved biogas production from corn cob.
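
The methane yields reported above translate into the following relative changes against the untreated baseline; the short calculation simply restates the paper's own figures.

```python
# Relative change in methane yield versus the untreated corn cob baseline.
baseline = 267.0  # L CH4 per kg MV, untreated
yields = {
    "enzymatic hydrolysis": 290.0,
    "ultrasound only": 244.0,
    "ultrasound + enzymatic": 300.9,
}
for pretreatment, y in yields.items():
    change = (y - baseline) / baseline
    print(f"{pretreatment:>24}: {y:6.1f} L CH4/kg MV ({change:+.1%})")
```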

Keywords: biogas, corn cob, enzymatic hydrolysis, ultrasound

Procedia PDF Downloads 267
500 Economic Evaluation of Degradation by Corrosion of an On-Grid Battery Energy Storage System: A Case Study in Algeria Territory

Authors: Fouzia Brihmat

Abstract:

Economic planning models, which are used to design microgrids and distributed energy resources (DER), are the current norm for supporting such investments. These models often decide both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES), based on Total Net Present Cost (TNPC), in an Algerian Saharan area that has a high potential for solar irradiation and a production capacity of 1 GWh. Lead-acid batteries have been around much longer and are easier to understand, but have limited storage capacity; lithium-ion batteries last longer and are lighter, but are generally more expensive. By combining the advantages of each chemistry, cost-effective high-capacity battery banks that operate solely on AC coupling can be produced. On the financial side, this research describes the corrosion process that occurs at the interface between the active material and the grid material of the positive plate of a lead-acid battery. The least-cost study for the HRES is completed with the assistance of the HOMER Pro MATLAB Link. Additionally, over the course of the project's 20 years, the system is simulated for each time step. The model takes into consideration the decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation; the trade-off is that the model is more precise, but the computation takes longer. The Optimizer was initially utilized to run the model without MultiYear in order to discover the best system architecture. The optimal system for the single-year scenario is the Danvest generator with 760 kW, plus the necessary quantity of lead-acid storage, 200 kWh, at a somewhat lower COE of $0.309/kWh. Different scenarios that account for fluctuations in the gasified biomass generator's electricity production have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated. The technological optimization of the same system has been completed and is reviewed in a recent study.
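
HOMER's cost metrics rest on the capital recovery factor: TNPC is the total annualized cost divided by CRF(i, N) = i(1+i)^N / ((1+i)^N - 1), and COE is the annualized cost per kWh of electricity served. The sketch below computes both under wholly hypothetical inputs; none of the figures are the study's.

```python
def crf(i: float, n: int) -> float:
    """Capital recovery factor for real discount rate i and project lifetime n."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

# Hypothetical inputs, for illustration only (not the study's actual figures)
annualized_cost_usd = 1_250_000  # total annualized cost of the hybrid system
served_energy_kwh = 4_000_000    # electricity served per year
discount_rate = 0.08             # real discount rate
lifetime_years = 20              # project horizon used in the study

tnpc = annualized_cost_usd / crf(discount_rate, lifetime_years)
coe = annualized_cost_usd / served_energy_kwh

print(f"TNPC: {tnpc:,.0f} US$")
print(f"COE : {coe:.3f} US$/kWh")
```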

Keywords: battery, corrosion, diesel, economic planning optimization, hybrid energy system, lead-acid battery, multi-year planning, microgrid, price forecast, PV, total net present cost

Procedia PDF Downloads 88
499 Kinetic Modelling of Fermented Probiotic Beverage from Enzymatically Extracted Annona muricata Fruit

Authors: Calister Wingang Makebe, Wilson Ambindei Agwanande, Emmanuel Jong Nso, P. Nisha

Abstract:

Traditional liquid-state fermentation processes of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic from enzymatically extracted Annona muricata L. juice, modeled using the Doehlert design with incubation time, temperature, and enzyme concentration as independent extraction factors. It aimed at a better understanding of the traditional process as an initial step for future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (the Monod, logistic, and Luedeking-Piret models) using MATLAB software to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time, determined from cell viability, was 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing-sugar consumption, and lactic acid production. The values of the determination coefficient, R², were 0.9946, 0.9913 and 0.9946, while the residual sum of squared errors, SSE, was 0.2876, 0.1738 and 0.1589 for LC, LA and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate, µm, which was 0.2876 h⁻¹, 0.1738 h⁻¹ and 0.1589 h⁻¹, and the substrate saturation constant, Ks, which was 9.0680 g/L, 9.9337 g/L and 9.0709 g/L, respectively, for LC, LA and LCA. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940 and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307 and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001 and 0.0004 for LC, LA and LCA, respectively. With the kinetic model proposed by Luedeking and Piret for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L.-based medium for the economical production of a probiotic beverage.
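
A sketch of the batch kinetics follows, combining logistic biomass growth, a substrate balance with the maintenance term ms, and the Luedeking-Piret product equation, using the LC parameters reported above; the carrying capacity Xmax and the initial conditions are assumptions, since the abstract does not report them, and the logistic form does not use Ks.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters reported above for L. casei (LC); Xmax is an assumed carrying
# capacity and the initial conditions are assumed, as the abstract omits them.
mu_m, Ks = 0.2876, 9.0680     # h^-1, g/L (Ks unused by the logistic form below)
Yxs, ms = 50.7932, 0.0128     # biomass yield on substrate, maintenance term
alpha, beta = 1.0028, 0.0109  # Luedeking-Piret growth/non-growth coefficients
Xmax = 9.0                    # g/L, assumed

def kinetics(t, y):
    X, S, P = y
    dX = mu_m * X * (1 - X / Xmax)  # logistic biomass growth
    dS = -dX / Yxs - ms * X         # substrate use for growth + maintenance
    dP = alpha * dX + beta * X      # Luedeking-Piret product formation
    return [dX, dS, dP]

# Integrate over the 24 h optimal fermentation time for LC
sol = solve_ivp(kinetics, (0, 24), [0.1, 20.0, 0.0])
X, S, P = sol.y[:, -1]
print(f"after 24 h: X={X:.2f} g/L, S={S:.2f} g/L, P={P:.2f} g/L")
```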

Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics

Procedia PDF Downloads 80
498 Assessment of the Change in Strength Properties of Biocomposites Based on PLA and PHA after 4 Years of Storage in a Highly Cooled Condition

Authors: Karolina Mazur, Stanislaw Kuciel

Abstract:

Polylactides (PLA) and polyhydroxyalkanoates (PHA) are the two groups of biodegradable and biocompatible thermoplastic polymers most commonly utilised in medicine and rehabilitation. The aim of this work is to determine the changes in the strength properties and the microstructure taking place in biodegradable polymer composites during long-term storage in a highly cooled environment (i.e., a freezer at -24 ºC) and to initially assess the durability of such biocomposites when used as single-use elements of rehabilitation or medical equipment. It is difficult to find any information relating to the feasibility of long-term storage of technical products made of PLA or PHA; nonetheless, when these materials are used to make products such as casings of hair dryers, laptops or mobile phones, it is safe to assume that, without storage in optimal conditions, their degradation time might extend to even several years. SEM imaging and the assessment of the strength properties (tensile, bending and impact testing) were carried out, and the density and water sorption of two polymers, PLA and PHA (NaturePlast PLE 001 and PHE 001), filled with cellulose fibres (corncob grain – Rehofix MK100, Rettenmaier & Söhne) up to 10 and 20% mass were determined. The biocomposites had been stored at a temperature of -24 ºC for 4 years. In order to find out the changes in the strength properties and the microstructure taking place after such a long time of storage, the results of the assessment have been compared with the results of the same research carried out 4 years before. The results show a significant change in the manner of fracture: from ductile fractures with a developed surface, observed for the PHA composite with corncob grain when tensile testing was performed directly after injection moulding, to a more brittle state after 4 years of storage, which is confirmed by the strength tests, where a decrease of deformation at the point of fracture is observed. The research showed that there is a way of storing medical devices made of PLA or PHA for a reasonably long time, as long as the required storage temperature is met. The decrease of mechanical properties found during tensile and bending tests for PLA was less than 10% of the tensile strength, while the modulus of elasticity and the deformation at fracture slightly rose, which may indicate the beginning of degradation processes. The strength properties of PHA are even higher after 4 years of storage, although in that case the decrease of deformation at fracture is significant, reaching even 40%, which suggests that its degradation rate is higher than that of PLA. The addition of natural particles in both cases only slightly increases the biodegradation.

Keywords: biocomposites, PLA, PHA, storage

Procedia PDF Downloads 265
497 Highly Conducting Ultra Nanocrystalline Diamond Nanowires Decorated ZnO Nanorods for Long Life Electronic Display and Photo-Detectors Applications

Authors: A. Saravanan, B. R. Huang, C. J. Yeh, K. C. Leou, I. N. Lin

Abstract:

A new class of ultra-nanocrystalline diamond-graphite nano-hybrid (DGH) composite materials containing nano-sized diamond needles was developed using a low-temperature process. Such diamond-graphite nano-hybrid composite nanowires exhibit high electrical conductivity and excellent electron field emission (EFE) properties. Earlier reports mention that the addition of N₂ gas to the growth plasma requires a high growth temperature (800 °C) to activate the dopants and generate conductivity in the films; such a high growth temperature is incompatible with Si-based device fabrication. We therefore used a novel bias-enhanced-grown (BEG) MPECVD process to grow diamond films at a low substrate temperature (450 °C). The BEG-N/UNCD films thus obtained possess a conductivity of σ = 987 S/cm, the highest ever reported for diamond films, along with excellent EFE properties. TEM investigation indicated that these films contain needle-like diamond grains about 5 nm in diameter and hundreds of nanometers in length, each grain encased in graphitic layers tens of nanometers thick. These material properties suit specific applications: high conductivity for electron field emitters, high robustness for microplasma cathodes, and high electrochemical activity for electrochemical sensing. Subsequently, the highly conducting DGH films were coated on vertically aligned ZnO nanorods (ZNRs); no prior nucleation or seeding process was needed owing to the use of the BEG method. Such a composite structure provides a significant enhancement of the field emission characteristics of the cold cathode, with an ultralow turn-on field of 1.78 V/μm and a high EFE current density of 3.68 mA/cm² (at 4.06 V/μm), due to the decoration of the DGH material on the ZnO nanorods. The DGH/ZNRs-based device maintained stable emission for a longer duration (562 min) than bare ZNRs (104 min) without any current degradation, because the diamond coating protects the ZNRs from ion bombardment when they are used as the cathode for microplasma devices. The potential application of these materials is demonstrated by plasma illumination measurements, in which the plasma ignited at a minimum voltage of 290 V. The DGH/ZNRs-based photodetectors exhibit a much higher photoresponse (Iphoto/Idark = 1202) than bare ZNRs (229). Electron transport from the ZNRs to the DGH through the graphitic layers is facile, making the EFE properties of these materials comparable to other primary field emitters such as carbon nanotubes and graphene. The DGH/ZNRs composite also offers the possibility of use in flat-panel, microplasma, and vacuum microelectronic devices.

Keywords: bias-enhanced nucleation and growth, ZnO nanorods, electrical conductivity, electron field emission, photo-detectors

Procedia PDF Downloads 370