Search results for: change frequency
1469 Superiority of Bone Marrow Derived-Osteoblastic Cells (ALLOB®) over Bone Marrow Derived-Mesenchymal Stem Cells
Authors: Sandra Pietri, Helene Dubout, Sabrina Ena, Candice Hoste, Enrico Bastianelli
Abstract:
Bone Therapeutics is a bone cell therapy company addressing high unmet medical needs in the field of bone fracture repair, more specifically in non-union and delayed-union fractures where the bone repair process is impaired. The company has developed a unique allogeneic osteoblastic cell product (ALLOB®) derived from bone marrow, which is currently being tested in humans in the indication of delayed-union fractures. The purpose of our study was to directly compare ALLOB® vs. non-differentiated mesenchymal stem cells (MSC) for their in vitro osteogenic characteristics and their in vivo osteogenic potential, in order to determine which cell type would be best adapted for bone fracture repair. Methods: Healthy volunteers’ bone marrow aspirates (n=6) were expanded (i) into BM-MSCs using a complete MSC culture medium or (ii) into ALLOB® cells according to its manufacturing process. Cells were characterized in vitro by morphology, immunophenotype, gene expression and differentiation potential. Additionally, their osteogenic potential was assessed in vivo in the subperiosteal calvaria bone formation model in nude mice. Results: The in vitro side-by-side comparison studies showed that although ALLOB® and BM-MSCs share some common general characteristics, such as the three minimal MSC criteria, ALLOB® expressed significantly higher levels of chondro/osteoblastic genes such as BMP2 (fold change (FC) > 100), ALPL (FC > 12) and CBFA1 (FC > 3), and differentiated significantly earlier than BM-MSCs toward the osteogenic lineage. Moreover, in the nude mouse bone formation model, at the same cellular concentration ALLOB® induced significantly more bone formation (160% vs. 107% for control animals) than BM-MSCs (118% vs. 107% for control animals) two weeks after administration. 
Conclusion: Our side-by-side comparison studies demonstrated that, in vitro and in vivo, ALLOB® displays superior osteogenic capacity to BM-MSCs and is therefore a better candidate for the treatment of bone fractures.
Keywords: gene expression, histomorphometry, mesenchymal stem cells, osteogenic differentiation potential, preclinical
Procedia PDF Downloads 330
1468 Buddhism: Its Socio-Economic Relevance in the Present Changing World
Authors: Bandana Bhattacharya
Abstract:
‘Buddhism’ signifies the ‘ism’ based on the Buddha’s life and teachings, or concerned with the gospel of the Buddha as recorded in the literature available in Pali, Sanskrit, Buddhist Sanskrit, Prakrit and even in non-Indian languages, wherein a very abstruse, complex and lofty philosophy of life, or ‘the way of life’ preached by the Buddha, has been described. It has another side too: the applicability of the Buddha’s tenets to the needs of present society, where human life and outlook have changed totally. Applied Buddhism signifies this applicability of the Buddha’s noble tenets. Along with the theological exposition and textual criticism of the Buddha’s discourses, it has now become almost obligatory for Buddhist scholars to re-interpret Buddhism from modern perspectives. Basically, Applied Buddhism defines a ‘way of life’ which may yield a higher quality of life, or essence of life, under changed circumstances, places and times. If we observe the present situation of the world, we will find that current problems such as health, economic and political crises, global warming, population explosion, pollution of all types, cultural decline, scarcity of essential commodities, and the indiscriminate use of human, natural and water resources are becoming more and more pronounced day by day. Under such a backdrop of the world situation, Applied Buddhism, rather Buddhism itself, may be the only instrument left for mankind to address all such human achievements, lapses and problems. The Buddha’s doctrine is itself called ‘akālika’, timeless. On the eve of the Mahāparinibbāṇa at Kusinara, the Blessed One allowed His disciples to change, modify and alter His minor teachings according to the needs of the future, although He made some utterances which would eternally remain fresh. Hence Buddhism has been able to occupy a prominent place in modern life, because of its timeless applicability, emanating from a set of eternal values. 
The logical and scientific outlook of the Buddha may be traced in His very first sermon, the Dhammacakkapavattana-Sutta, where He advised avoiding the two extremes, namely constant attachment to sensual pleasures (Kāmasukhallikānuyoga) and devotion to self-mortification, which is painful as well as unprofitable, and asked to adopt the Majjhimapaṭipadā, the ‘Middle Path’, which is very much applicable even today in every sphere of human life, and the absence of which is the root cause of all problems even at present. This paper is a humble attempt to highlight the relevance of Buddhism in present society.
Keywords: applied Buddhism, ecology, self-awareness, value
Procedia PDF Downloads 124
1467 Assessment of the Effect of Orally Administered Itopride on Gall Bladder Ejection Fraction by a Fatty Meal Cholescintigraphy in Patients with Diabetes
Authors: Avani Jain, Hasmukh Jain, S. Shelley, M. Indirani, Shilpa Kalal, Jayakanth Amalachandran
Abstract:
Aim of the Study: To assess the effect of orally administered Itopride on gall bladder ejection fraction (GBEF) by fatty meal cholescintigraphy in patients with diabetes. Materials and Methods: Thirty patients (20 males, 10 females, mean age 46±10 yrs) with a history of diabetes mellitus (mean duration 4.8±4.1 yrs, fasting blood glucose level 130±35 mg/dl and 2-hour post-prandial blood glucose level 196±76 mg/dl), who were found to have gall bladder dysfunction on fatty-meal-stimulated cholescintigraphy, were selected for this study. These patients underwent a repeat cholescintigraphy, similar to the baseline study, with 50 mg of Itopride administered orally along with the fatty meal. Pre- and post-Itopride GBEF were then compared to assess the effect of Itopride on gall bladder contraction. Results: Of these 30 patients, 2 had dyskinetic, 4 akinetic, 22 moderately hypokinetic and the remaining 2 hypokinetic gall bladder function in the baseline study, with GBEF > 60% taken as the normal value. Mean GBEF in the baseline study was 32%±13% and mean GBEF in the post-Itopride study was 57%±17%, the mean change in GBEF being 24%±21%. GBEF in the baseline study was significantly lower than GBEF in the post-Itopride study (p < 0.05). Conclusion: Diabetic patients with biliary-type pain often tend to have impaired gallbladder function. Cholescintigraphy with fatty meal stimulation is a simple, cheap and useful investigation for the assessment of gallbladder dysfunction in these patients, before any structural changes occur within the lumen or wall of the gall bladder. Improvement in gallbladder ejection fraction after oral administration of a single dose of Itopride, a newer prokinetic drug with fewer side effects, as assessed by cholescintigraphy, provides evidence of a likely future therapeutic response. 
Administration of Itopride in therapeutic dosage may therefore be expected to cause a significant improvement in gallbladder ejection fraction and hence delay stone formation within the gall bladder, as well as prevent the associated long-term complications. Hence, based on scintigraphic evidence, Itopride may be recommended by clinicians for the management of symptomatic diabetic patients with gallbladder dysfunction.
Keywords: itopride, gall bladder ejection fraction, fatty meal, cholescintigraphy, diabetes
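The baseline vs. post-Itopride comparison above is a paired design. Purely as an illustration, a minimal sketch of the paired t-statistic in plain Python, using made-up GBEF values rather than the study's patient data:

```python
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-statistic for before/after measurements on the same patients:
    mean of the differences divided by their standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)

# Hypothetical GBEF percentages (illustrative only, not the study's data)
pre_gbef = [30.0, 35.0, 40.0]
post_gbef = [55.0, 60.0, 62.0]
t_stat = paired_t(pre_gbef, post_gbef)
# The statistic is then compared against the critical t value for
# n - 1 degrees of freedom at the chosen significance level.
```

This mirrors the study's pre/post comparison logic (p < 0.05), though the actual analysis software and sample of 30 patients differ from this three-value sketch.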
Procedia PDF Downloads 423
1466 Effectiveness of Self-Learning Module on the Academic Performance of Students in Statistics and Probability
Authors: Aneia Rajiel Busmente, Renato Gunio Jr., Jazin Mautante, Denise Joy Mendoza, Raymond Benedict Tagorio, Gabriel Uy, Natalie Quinn Valenzuela, Ma. Elayza Villa, Francine Yezha Vizcarra, Sofia Madelle Yapan, Eugene Kurt Yboa
Abstract:
COVID-19’s rapid spread caused a dramatic change in the nation, especially in the educational system. The Department of Education was forced to adopt a practical learning platform without neglecting health: printed modular distance learning. The Philippines' K–12 curriculum includes Statistics and Probability as one of the key courses, as it offers students the knowledge to evaluate and comprehend data. Students, however, have difficulty with and lack understanding of the concepts of Statistics and Probability, particularly the Normal Distribution. The Self-Learning Module in Statistics and Probability on the Normal Distribution created by the Department of Education has several problems, including too many activities, unclear illustrations, and insufficient examples of concepts, which make it difficult for learners to accomplish the module. The purpose of this study is to determine the effectiveness of a self-learning module on the academic performance of students in the subject Statistics and Probability; it will also explore students’ perception of the quality of the created Self-Learning Module in Statistics and Probability. Despite the availability of Self-Learning Modules in Statistics and Probability in the Philippines, there is still little literature that discusses their effectiveness in improving the performance of Senior High School students in Statistics and Probability. In this study, a Self-Learning Module on the Normal Distribution is evaluated using a quasi-experimental design. STEM students in Grade 11 from National University's Nazareth School will be the study's participants, chosen by purposive sampling. Google Forms will be utilized to find at least 100 STEM students in Grade 11. The research instrument consists of a 20-item pre- and post-test to assess participants' knowledge and performance regarding the Normal Distribution, and a Likert scale survey to evaluate how the students perceived the self-learning module. 
Pre-test, post-test, and Likert scale surveys will be utilized to gather data, with Jeffreys' Amazing Statistics Program (JASP) software being used for analysis.
Keywords: self-learning module, academic performance, statistics and probability, normal distribution
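For the planned pre-/post-test comparison, a common companion to the significance test is a paired effect size. A minimal sketch of Cohen's d for paired samples, with hypothetical 20-item test scores (not the study's data):

```python
from statistics import mean, stdev

def cohens_dz(pre, post):
    """Effect size for paired pre/post scores: mean gain divided by the
    standard deviation of the gains (the 'dz' variant for paired designs)."""
    gains = [b - a for a, b in zip(pre, post)]
    return mean(gains) / stdev(gains)

# Hypothetical scores out of 20 (illustrative only)
pre_scores = [8, 10, 9, 11, 12]
post_scores = [13, 13, 13, 15, 16]
dz = cohens_dz(pre_scores, post_scores)
```

An effect size like this complements the p-value that JASP would report, indicating how large the module's effect is rather than only whether it exists.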
Procedia PDF Downloads 111
1465 Hot Carrier Photocurrent as a Candidate for an Intrinsic Loss in a Single Junction Solar Cell
Authors: Jonas Gradauskas, Oleksandr Masalskyi, Ihor Zharchenko
Abstract:
The advancement in improving the efficiency of conventional solar cells toward the Shockley-Queisser limit seems to be slowing down or reaching a point of saturation. The challenges hindering the reduction of this efficiency gap can be categorized into extrinsic and intrinsic losses, with the former being theoretically avoidable. Among the five intrinsic losses, two of them, the below-Eg loss (resulting from non-absorption of photons with energy below the semiconductor bandgap) and the thermalization loss, contribute approximately 55% of the overall lost fraction of solar radiation at energy bandgap values corresponding to silicon and gallium arsenide. Efforts to minimize the disparity between theoretically predicted and experimentally achieved efficiencies in solar cells necessitate the integration of innovative physical concepts. Hot carriers (HC) present a contemporary approach to addressing this challenge. The significance of hot carriers in photovoltaics is not fully understood. Although their excess energy is thought to impact a cell's performance only indirectly through thermalization loss, where the excess energy heats the lattice and degrades efficiency, evidence suggests the presence of hot carriers in solar cells. Despite their exceptionally brief lifespan, tangible effects arise from their existence. The study highlights direct experimental evidence of the hot carrier effect induced by both below- and above-bandgap radiation in a single-junction solar cell. Photocurrent flowing across silicon and GaAs p-n junctions is analyzed. The photoresponse consists, on the whole, of three components, caused by electron-hole pair generation, hot carriers, and lattice heating. The last two components counteract the conventional electron-hole-generation current required for successful solar cell operation. Also, a model of the temperature coefficient of the voltage change of the current–voltage characteristic is used to obtain the hot carrier temperature. 
The distribution of cold and hot carriers is analyzed with regard to the potential barrier height of the p-n junction. These discoveries contribute to a better understanding of hot carrier phenomena in photovoltaic devices and are likely to prompt a reevaluation of intrinsic losses in solar cells.
Keywords: solar cell, hot carriers, intrinsic losses, efficiency, photocurrent
Procedia PDF Downloads 64
1464 Evaluation of Water Management Options to Improve the Crop Yield and Water Productivity for Semi-Arid Watershed in Southern India Using AquaCrop Model
Authors: V. S. Manivasagam, R. Nagarajan
Abstract:
Modeling soil, water and crop growth interactions is attaining major importance, considering future climate change and the water available for agriculture to meet the growing food demand. Progress in understanding the crop growth response during water stress periods through a crop modeling approach provides an opportunity for improving and sustaining future agricultural water use efficiency. An attempt has been made to evaluate the potential use of the crop modeling approach for assessing the minimal supplementary irrigation requirement for crop growth during water-limited conditions and its practical significance in the sustainable improvement of crop yield and water productivity. Among the numerous crop models, the water-driven AquaCrop model has been chosen for the present study, considering its modeling approach and the impact of water stress on yield simulation. The study was carried out in the rainfed maize-growing area of the semi-arid Shanmuganadi watershed (a tributary of the Cauvery river system) located in southern India during the rabi cropping season (October-February). In addition to the actual rainfed maize growth simulation, irrigated maize scenarios were simulated to assess the supplementary irrigation requirement during water-shortage conditions for the period 2012-2015. The simulation results for rainfed maize showed that an average maize yield of 0.5-2 t ha⁻¹ was observed during deficit monsoon seasons (<350 mm), whereas 5.3 t ha⁻¹ was noticed during a sufficient monsoonal period (>350 mm). Scenario results for the irrigated maize simulation during the deficit monsoonal period revealed that 150-200 mm of supplementary irrigation ensured an irrigated maize yield of 5.8 t ha⁻¹. Thus, the study results clearly portrayed that minimal application of supplementary irrigation during the critical growth period, along with the deficit rainfall, increased the crop water productivity from 1.07 to 2.59 kg m⁻³ for the major soil types. 
Overall, AquaCrop is found to be very effective for sustainable irrigation assessment, considering the model's simplicity and minimal input requirements.
Keywords: AquaCrop, crop modeling, rainfed maize, water stress
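Crop water productivity as reported above (kg of grain per cubic metre of water) is a direct ratio of yield to seasonal water input. A minimal sketch with illustrative numbers loosely based on the scenario figures (assumed, not taken from the AquaCrop output):

```python
def water_productivity(yield_t_ha, water_mm):
    """Crop water productivity in kg per cubic metre.
    1 mm of water depth over 1 ha equals 10 m^3, so water_mm * 10
    converts the seasonal water input to m^3 per hectare."""
    return (yield_t_ha * 1000.0) / (water_mm * 10.0)

# Assumed example: 5.8 t/ha of maize grown on 350 mm of seasonal rainfall
# plus 200 mm of supplementary irrigation
wp = water_productivity(5.8, 350 + 200)
```

Comparing such ratios across rainfed and supplementary-irrigation scenarios is what underlies the reported rise from 1.07 to 2.59 kg m⁻³.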
Procedia PDF Downloads 264
1463 Is Electricity Consumption Stationary in Turkey?
Authors: Eyup Dogan
Abstract:
The number of research articles analyzing the integration properties of energy variables has rapidly increased in the energy literature for about a decade. The stochastic behaviors of energy variables are worth knowing for several reasons. For instance, national policies to conserve or promote energy consumption, which should be taken as shocks to energy consumption, will have only transitory effects if energy consumption is found to be stationary in a country. Furthermore, it is also important to know the order of integration in order to employ an appropriate econometric model. Despite being an important subject for applied energy (economics) and having a huge volume of studies, several known limitations still exist in the literature. For example, many of the studies use aggregate energy consumption and national-level data. In addition, a large part of the literature consists of either multi-country studies or studies focusing solely on the U.S. This is the first study in the literature that considers a form of energy consumption by sectors at the sub-national level. This research investigates the unit root properties of electricity consumption for 12 regions of Turkey by four sectors, in addition to total electricity consumption, for the purpose of filling the mentioned gaps in the literature. In this regard, we analyze the stationarity properties of 60 cases. Because the use of multiple unit root tests makes the results robust and consistent, we apply the Dickey-Fuller unit root test based on Generalized Least Squares regression (DFGLS), the Phillips-Perron unit root test (PP) and the Zivot-Andrews unit root test with one endogenous structural break (ZA). The main finding of this study is that electricity consumption is trend stationary in 7 cases according to DFGLS and PP, whereas it is a stationary process in 12 cases when we take into account structural change by applying ZA. 
Thus, shocks to electricity consumption have transitory effects in those cases, namely: agriculture in region 1, region 4 and region 7; industrial in region 5, region 8, region 9, region 10 and region 11; business in region 4, region 7 and region 9; and total electricity consumption in region 11. Regarding policy implications, policies to decrease or stimulate the use of electricity have a long-run impact on electricity consumption in 80% of cases in Turkey, given that 48 cases are non-stationary processes. On the other hand, the past behavior of electricity consumption can be used to predict its future behavior in 12 cases only.
Keywords: unit root, electricity consumption, sectoral data, subnational data
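The DFGLS, PP and ZA tests applied in the study are refinements of the same underlying idea. Purely as an illustration of that idea, a sketch of the basic Dickey-Fuller regression test in plain Python (the -2.86 threshold is the approximate 5% Dickey-Fuller critical value for a regression with a constant; the study's tests use different critical values and corrections):

```python
import random

def df_t_stat(y):
    """t-statistic of the lagged level in the regression
    dy_t = a + rho * y_{t-1} + e_t.
    A value well below about -2.86 (5% level, with constant) rejects the
    unit-root null, i.e. the series looks stationary."""
    x = y[:-1]                                  # lagged levels
    z = [y[i + 1] - y[i] for i in range(len(y) - 1)]  # first differences
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    rho = sxz / sxx                             # OLS slope
    a = mz - rho * mx                           # OLS intercept
    resid = [zi - a - rho * xi for xi, zi in zip(x, z)]
    s2 = sum(e * e for e in resid) / (n - 2)    # residual variance
    return rho / (s2 / sxx) ** 0.5

# A clearly stationary AR(1) series (autoregressive coefficient 0.5)
rng = random.Random(42)
ar1 = [0.0]
for _ in range(500):
    ar1.append(0.5 * ar1[-1] + rng.gauss(0, 1))
t_ar1 = df_t_stat(ar1)  # strongly negative: unit root rejected
```

For a random walk the statistic would typically stay above the critical value, which is the non-stationary outcome found for 48 of the 60 cases.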
Procedia PDF Downloads 410
1462 Multi-Indicator Evaluation of Agricultural Drought Trends in Ethiopia: Implications for Dry Land Agriculture and Food Security
Authors: Dawd Ahmed, Venkatesh Uddameri
Abstract:
Agriculture in Ethiopia is the main economic sector and is strongly influenced by agricultural drought. A simultaneous assessment of drought trends using multiple drought indicators is useful for drought planning and management. Intra-season and seasonal drought trends in Ethiopia were studied using a suite of drought indicators: the Standardized Precipitation Index (SPI), Standardized Precipitation Evapotranspiration Index (SPEI), Palmer Drought Severity Index (PDSI), and Z-index for the long-rainy, dry, and short-rainy seasons were used to identify drought-causing mechanisms. The statistical software package R version 3.5.2 was used for data extraction and analysis. Trend analysis indicated shifts of late-season long-rainy-season precipitation into the dry season in the southwest and south-central portions of Ethiopia. Droughts during the dry season (October–January) were largely temperature controlled. Short-term temperature-controlled hydrologic processes exacerbated rainfall deficits during the short rainy season (February–May) and highlight the importance of temperature- and hydrology-induced soil dryness on the production of short-season crops such as tef. Droughts during the long-rainy season (June–September) were largely driven by precipitation declines arising from the narrowing of the intertropical convergence zone (ITCZ). Increased dryness during the long-rainy season had severe consequences for the production of corn and sorghum. PDSI was an aggressive indicator of seasonal droughts, suggesting low natural resilience to the effects of slow-acting, moisture-depleting hydrologic processes. The lack of irrigation systems in the nation limits the ability to combat droughts and improve agricultural resilience. 
There is an urgent need to monitor soil moisture (a key agro-hydrologic variable) to better quantify the impacts of meteorological droughts on agricultural systems in Ethiopia.
Keywords: autocorrelation, climate change, droughts, Ethiopia, food security, Palmer Z-index, PDSI, SPEI, SPI, trend analysis
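Of the indicators above, the SPI is the simplest to sketch. The following is a deliberately simplified illustration with made-up rainfall totals: the operational SPI fits a gamma distribution to the precipitation record and maps the fitted CDF through the standard normal, whereas this sketch standardizes with a plain z-score:

```python
from statistics import mean, stdev

def standardized_index(totals, value):
    """Simplified standardized precipitation index: z-score of one
    seasonal total against the historical record. (The full SPI uses a
    fitted gamma distribution; a z-score is used here for illustration.)"""
    return (value - mean(totals)) / stdev(totals)

# Hypothetical seasonal rainfall totals in mm (not the study's data)
history = [200.0, 300.0, 400.0]
z = standardized_index(history, 200.0)
# Values at or below -1 are commonly read as moderate drought or worse;
# below -2 as extreme drought.
```

Trend analysis then asks whether such index series drift downward over the study period, which is what the seasonal shift results above report.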
Procedia PDF Downloads 140
1461 Methodological Approach for the Prioritization of Different Micro-Contaminants as Potential River Basin Specific Pollutants in the Upper Tisza River Watershed
Authors: Mihail Simion Beldean-Galea, Virginia Coman, Florina Copaciu, Mihaela Vlassa, Radu Mihaiescu, Adina Croitoru, Viorel Arghius, Modest Gertsiuk, Mikola Gertsiuk
Abstract:
Taking into consideration the huge number of chemicals released into environmental compartments, a proper environmental risk assessment is difficult due to gaps in legislation and incomplete toxicological assessment of chemical compounds. In Romania, as in many other European countries, the chemical status of a water body is characterized taking into consideration the Water Framework Directive (WFD) and the substances listed in its Annex X. This Annex includes 45 substances from different classes of organic compounds and heavy metals, for which AA-EQS and MAC-EQS values have been established. For compounds not included in Annex X, different methodologies to prioritize chemicals for risk assessment and monitoring have been proposed. These methodologies take into account the Predicted No-Effect Concentrations (PNECs) of different classes of chemical compounds, available from existing risk assessments or from read-across models for acute toxicity to standard test organisms such as Daphnia magna and Selenastrum capricornutum. Our work presents the monitoring results for 30 priority substances, including polyaromatic hydrocarbons, pesticides, halogenated compounds, plasticizers and heavy metals, and another 34 substances from different classes of pesticides and pharmaceuticals not included on the list of priority substances, obtained in the Upper Tisza River Watershed in Romania and Ukraine. The monitoring data were used to establish the list of the most relevant pollutants in the studied area and the potential river basin specific pollutants. For this purpose, two indicators, the Frequency of exceedance and the Extent of exceedance of the Predicted No-Effect Concentration (PNEC), were evaluated. 
These two indicators are based on maximum environmental concentrations (MECs) for the priority substances; for the other pollutants, statistically based averages of the measured concentrations are compared to the lowest PNEC thresholds. From the obtained results, it can be concluded that polyaromatic hydrocarbons such as Fluoranthene, Benzo[a]pyrene, Benzo[b]fluoranthene, Benzo[k]fluoranthene, Benzo[g,h,i]perylene and Indeno[1,2,3-cd]pyrene, and heavy metals such as Cadmium, Lead and Nickel, can be considered river basin specific pollutants, their concentrations exceeding the Annual Average EQS. Other compounds, such as estrone, estriol, 17β-estradiol and naproxen, or some antibiotics (Penicillin G, Tetracycline or Ceftazidime), should be considered for long-term monitoring, as in some cases their concentrations exceed the PNEC. Acknowledgements: This work is performed in the frame of the NATO SfP Programme, Project no. 984440.
Keywords: prioritization, river basin specific pollutants, Tisza River, water framework directive
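The two exceedance indicators named above translate directly into a short computation. A minimal sketch with made-up concentrations and an assumed PNEC (the real assessment uses MECs for priority substances and statistical averages for the rest):

```python
def exceedance_indicators(mecs, pnec):
    """Frequency of exceedance (fraction of samples above the PNEC) and
    extent of exceedance (maximum MEC/PNEC ratio) for one substance."""
    freq = sum(1 for c in mecs if c > pnec) / len(mecs)
    extent = max(mecs) / pnec
    return freq, extent

# Hypothetical measured concentrations in ug/L against an assumed PNEC
mecs = [0.2, 0.5, 1.2, 0.8]
freq, extent = exceedance_indicators(mecs, pnec=0.6)
```

Substances with a high frequency or a large extent of exceedance would rank high in the prioritization, which is how the river basin specific pollutants listed above were identified.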
Procedia PDF Downloads 303
1460 Design Thinking Activities: A Tool in Overcoming Student Reticence
Authors: Marinel Dayawon
Abstract:
Student participation in classroom activities is vital in the teaching-learning process, as it develops students' self-confidence, social relationships and good academic performance. It takes the teacher's empathy and creativity to create solutions that encourage teamwork and mutual support while reducing the academic competition within the class that hinders every shy student from walking with courage and talking with conviction, because they consider their ideas weak compared to those of the bright students. This study aimed to explore the different design thinking strategies that will change the mindset of shy students in classroom activities, maximizing their participation in all given tasks while sharing their views through ideation, and providing them a wider world through compromise agreement among the members of the group and sensitivity to one another's ideas, thus arriving at a collective decision in the development of a prototype that indicates improvement in their classroom involvement. The study used a qualitative type of research. Triangulation was done through participant observation, focus group discussion and interviews, documented through photos and videos. The respondents were the second-year Bachelor of Secondary Education students of the Institute of Teacher Education at Isabela State University-Cauayan City Campus. The results of the study revealed that reticent students, when involved in game activities through a slap-and-tap method and writing their clustered ideas on sticky notes, are excited to share ideas, as no oral communication is required. It was also observed, after three weeks of using the design thinking strategies, that shy students volunteered as secretary, rapporteur or group leader in the team-building activities, as the output represents the ideas of the heterogeneous group, removing the individual identity of the ideas. 
Superior students learned to listen to the ideas of the reticent students and involved them in the prototyping of a remediation program for high school students showing reticence in the classroom, making their own experience a benchmark. The strategies brought about a complete transformation of the shy students, who produced journal logs documenting their journey to becoming open. Thus, faculty members are now adopting the design thinking approach.
Keywords: design thinking activities, qualitative, reticent students, Isabela, Philippines
Procedia PDF Downloads 224
1459 Comparison of Cognitive Load in Virtual Reality and Conventional Simulation-Based Training: A Randomized Controlled Trial
Authors: Michael Wagner, Philipp Steinbauer, Andrea Katharina Lietz, Alexander Hoffelner, Johannes Fessler
Abstract:
Background: Cardiopulmonary resuscitations are stressful situations in which vital decisions must be made within seconds. Lack of routine due to the infrequency of pediatric emergencies can lead to serious medical and communication errors. Virtual reality can fundamentally change the way simulation training is conducted in the future and appears to be a useful learning tool for technical and non-technical skills. It is important to investigate whether VR provides a strong sense of presence within simulations. Methods: In this randomized study, we will enroll doctors and medical students from the Medical University of Vienna, who will receive learning material regarding the resuscitation of a one-year-old child. The study will be conducted in three phases. In the first phase, 20 physicians and 20 medical students from the Medical University of Vienna will be included. They will perform simulation-based training with a standardized scenario of a critically ill child with hypovolemic shock. The main goal of this phase is to establish a baseline for the following two phases and to generate comparative values regarding cognitive load and stress. In phases 2 and 3, the same participants will perform the same scenario in a VR setting. In both settings, at three set points of progression, one of three predefined events is triggered. For each event, three different stress levels (easy, medium, difficult) are defined. Stress and cognitive load will be analyzed using the NASA Task Load Index, eye-tracking parameters, and heart rate. Subsequently, these values will be compared between VR training and traditional simulation-based training. Hypothesis: We hypothesize that the VR training and the traditional training groups will not differ in physiological response (cognitive load, heart rate, and heart rate variability). We further assume that virtual reality training can be used as cost-efficient additional training. 
Objectives: The aim of this study is to measure cognitive load and stress level during real-life simulation training and compare them with VR training, in order to show that VR training evokes the same physiological response and cognitive load as real-life simulation training.
Keywords: virtual reality, cognitive load, simulation, adaptive virtual reality training
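The NASA Task Load Index named in the methods combines six subscale ratings into one workload score. A minimal sketch of the weighted scoring step, with hypothetical ratings and weights rather than study data:

```python
def weighted_tlx(ratings, weights):
    """Weighted NASA-TLX workload score: each 0-100 subscale rating is
    weighted by the number of pairwise comparisons that dimension won;
    the weights sum to 15 across the six dimensions."""
    assert sum(weights.values()) == 15
    return sum(ratings[d] * weights[d] for d in ratings) / 15.0

# Hypothetical post-scenario ratings (0-100) and pairwise weights
ratings = {"mental": 70, "physical": 30, "temporal": 60,
           "performance": 50, "effort": 65, "frustration": 40}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
score = weighted_tlx(ratings, weights)
```

Comparing such scores between the conventional and VR arms, alongside heart rate and eye-tracking measures, is what the planned analysis amounts to.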
Procedia PDF Downloads 112
1458 A Fast Method for Graphene-Supported Pd-Co Nanostructures as Catalyst toward Ethanol Oxidation in Alkaline Media
Authors: Amir Shafiee Kisomi, Mehrdad Mofidi
Abstract:
Nowadays, fuel cells as a promising alternative power source have been widely studied owing to their safety, high energy density, low operating temperatures, renewable capability and low environmental pollutant emission. Core-shell nanoparticles can broadly be described as a combination of a shell (outer layer material) and a core (inner material), and their characteristics are greatly conditioned by the dimensions and composition of the core and shell. In addition, a change in the constituent materials or in the core-to-shell ratio can create special noble characteristics. In this study, a fast technique for the fabrication of a Pd-Co/G/GCE modified electrode is offered. A thermal decomposition reaction of cobalt (II) formate salt over the surface of a graphene/glassy carbon electrode (G/GCE) is utilized for the synthesis of Co nanoparticles. The Pd-Co nanoparticles decorated on the graphene are created by the following method: (1) thermal decomposition of cobalt (II) formate salt and (2) galvanic replacement of Co by Pd²⁺. The physical and electrochemical performances of the as-prepared Pd-Co/G electrocatalyst are studied by Field Emission Scanning Electron Microscopy (FESEM), Energy Dispersive X-ray Spectroscopy (EDS), Cyclic Voltammetry (CV), and Chronoamperometry (CHA). Galvanic replacement is utilized as a facile and spontaneous approach for the growth of Pd nanostructures. The Pd-Co/G is used as an anode catalyst for ethanol oxidation in alkaline media. The Pd-Co/G not only delivered a much higher current density (262.3 mA cm⁻²) compared to the Pd/C catalyst (32.1 mA cm⁻²), but also demonstrated a negative shift of the onset oxidation potential (-0.480 vs. -0.460 mV) in the forward sweep. 
Moreover, the novel Pd-Co/G electrocatalyst represents a large electrochemically active surface area (ECSA), lower apparent activation energy (Ea), and higher levels of durability and poisoning tolerance compared to the Pd/C catalyst. The paper demonstrates that the catalytic activity and stability of the Pd-Co/G electrocatalyst are higher than those of the Pd/C electrocatalyst toward ethanol oxidation in alkaline media.
Keywords: thermal decomposition, nanostructures, galvanic replacement, electrocatalyst, ethanol oxidation, alkaline media
Procedia PDF Downloads 151
1457 Numerical Investigation of the Operating Parameters of the Vertical Axis Wind Turbine
Authors: Zdzislaw Kaminski, Zbigniew Czyz, Tytus Tulwin
Abstract:
This paper describes the geometrical model, algorithm and CFD simulation of the airflow around a Vertical Axis Wind Turbine rotor. The solver ANSYS Fluent was applied for the numerical simulation. Numerical simulation, unlike experiments, enables us to validate design assumptions without the costly preparation of a model or prototype for a bench test. This research focuses on the rotor designed according to patent no. PL 219985, with blades capable of modifying their working surfaces, i.e. the surfaces absorbing wind kinetic energy. The operation of this rotor is based on the regulation of the blade angle α between the top and bottom parts of the blades mounted on an axis. If angle α increases, the working surface which absorbs wind kinetic energy also increases. CFD calculations enable us to compare the aerodynamic characteristics of the forces acting on the rotor working surfaces and to specify rotor operating parameters such as torque or turbine assembly power output. This paper is part of research to improve the efficiency of the rotor assembly, and it investigates the impact of the blade angle of the wind turbine working blades on the power output as a function of rotor torque, specific rotational speed and wind speed. The simulation was made for wind speeds ranging from 3.4 m/s to 6.2 m/s and blade angles of 30°, 60° and 90°. The simulation enables us to create a mathematical model describing how the aerodynamic forces acting on each blade of the studied rotor are generated. The simulation results are also compared with wind tunnel results. This investigation enables us to estimate the growth in turbine power output as the blade angle changes. The regulation of blade angle α enables a smooth change in turbine rotor power, which serves as a safety measure in strong wind. Decreasing blade angle α reduces the risk of damaging or destroying a turbine still in operation, without the complete rotor braking used in Horizontal Axis Wind Turbines. 
This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: computational fluid dynamics, mathematical model, numerical analysis, power, renewable energy, wind turbine
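The relation between rotor torque, rotational speed and power output discussed above can be sketched numerically. The torque, speed and rotor-area values below are illustrative placeholders, not results from the paper's simulations:

```python
import math

def rotor_power(torque_nm, rpm):
    """Mechanical power P = T * omega, with omega converted from rpm to rad/s."""
    omega = 2.0 * math.pi * rpm / 60.0
    return torque_nm * omega

def power_coefficient(power_w, wind_speed_ms, rotor_area_m2, air_density=1.225):
    """Fraction of the wind's kinetic power captured: Cp = P / (0.5 * rho * A * v^3)."""
    available_w = 0.5 * air_density * rotor_area_m2 * wind_speed_ms ** 3
    return power_w / available_w

# Illustrative operating point within the simulated wind range (3.4-6.2 m/s):
power = rotor_power(torque_nm=12.0, rpm=60.0)                     # ~75.4 W
cp = power_coefficient(power, wind_speed_ms=6.2, rotor_area_m2=2.0)
```

Cp is bounded above by the Betz limit of about 0.593, a useful sanity check on any simulated power output.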
Procedia PDF Downloads 335
1456 Effects of Subsidy Reform on Consumption and Income Inequalities in Iran
Authors: Pouneh Soleimaninejadian, Chengyu Yang
Abstract:
In this paper, we use data from the Household Income and Expenditure Survey of the Statistical Centre of Iran, conducted from 2005 to 2014, to calculate several inequality measures and to estimate the effects of Iran's targeted subsidy reform act on consumption and income inequality. We first calculate Gini coefficients for income and consumption in order to study the relation between the two and the effects of the subsidy reform. Results show that consumption inequality has not always mirrored changes in income inequality. However, both Gini coefficients indicate that the subsidy reform reduced inequality. We then calculate the Generalized Entropy index based on consumption and income for the years before and after the Subsidy Reform Act of 2010 in order to look more closely at the changes in the internal structure of inequality after the reform. We find that the improvement in income inequality is mostly driven by the decrease in inequality among lower-income individuals, while consumption inequality decreased as a result of more equal consumption in both lower- and higher-income groups. Moreover, the increase in the Engel coefficient after the subsidy reform shows that a larger share of income is allocated to food consumption, which is generally a sign of a lower living standard. This increase in the Engel coefficient is due to the rise in the inflation rate and the relative increase in food prices, which is itself partly a consequence of the subsidy reform. We also conduct experiments on the effect of cash subsidy payments, and of possible changes in their distribution pattern and amount, on income inequality. The results show that cash payments lead to a definite decrease in income inequality and contributed more to the improvement in rural areas than among urban households. We also examine the possible effect of keeping payments constant on the increase in income inequality in the years after 2011.
We conclude that the reduction in the real value of payments due to inflation plays an important role, though other factors may contribute. Finally, we experiment with alternative allocations of transfers, either keeping the total amount of cash transfers constant or reducing it by excluding the three highest deciles from the cash payment program; the results show that income equality would improve significantly.
Keywords: consumption inequality, generalized entropy index, income inequality, Iran's subsidy reform
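The two inequality measures used in the study have compact closed forms; a minimal sketch, using illustrative toy incomes rather than the survey data:

```python
def gini(incomes):
    """Gini coefficient from the sorted-rank formula:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with x sorted ascending."""
    xs = sorted(incomes)
    n = len(xs)
    ranked_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * ranked_sum / (n * sum(xs)) - (n + 1.0) / n

def generalized_entropy(incomes, alpha=2.0):
    """GE(alpha) = 1 / (n * alpha * (alpha - 1)) * sum((y_i / mean)^alpha - 1);
    larger alpha weights inequality at the top of the distribution."""
    n = len(incomes)
    mean = sum(incomes) / n
    return sum((y / mean) ** alpha - 1.0 for y in incomes) / (n * alpha * (alpha - 1.0))

# Perfect equality gives 0 for both; full concentration pushes Gini toward (n-1)/n.
print(gini([10, 10, 10, 10]), gini([0, 0, 0, 40]))   # 0.0 0.75
```

Comparing Gini and GE(α) on the same data is exactly what lets the paper separate overall inequality change from changes concentrated in particular income groups.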
Procedia PDF Downloads 233
1455 Assessment of Vehicular Emission and Its Impact on Urban Air Quality
Authors: Syed Imran Hussain Shah
Abstract:
Air pollution rapidly impacts the Earth's climate and environmental quality, causing public health problems and cardio-pulmonary illnesses. Air pollution is a global issue that affects all population groups in both the developed and developing parts of the world. The promise of a reduction in deaths and diseases under SDG No. 3 is an international commitment towards sustainable development, and in that context, assessing and evaluating ambient air quality is paramount. This article estimates the air pollution released by vehicles on the roads of Lahore, a megacity with a population of 13.98 million. A survey was conducted at fuel stations to estimate the fuel pumped to different types of vehicles; the number of fuel stations in Lahore is around 350. Another survey interviewed drivers to establish the per-litre fuel consumption of different vehicle types. In total, 189 fuel stations and 400 drivers were surveyed using a combination of random sampling and convenience sampling methods. The sampling was designed to cover all areas of the city, including central commercial hubs, modern housing societies, industrial zones, main highways and old traditional population centres. Mathematical equations were then used to estimate the emissions from different modes of vehicles. As the population grows, the number of vehicles increases, and consequently traffic emissions are rising. Motorcycles, auto rickshaws, motor cars and vans were the main contributors of carbon dioxide and other vehicular emissions to the air, and vehicles running on petrol were observed to produce more carbon dioxide. Buses and trucks were the main contributors of NOx due to their use of diesel fuel, whereas vans, buses and trucks produced the maximum amount of SO2.
PM10 and PM2.5 were mainly produced by motorcycles and two-stroke auto rickshaws, while auto rickshaws and motor cars were the main sources of benzene emissions. This study may act as a major tool for traffic and vehicle policy decisions, promoting better fuel quality and more fuel-efficient vehicles to reduce emissions.
Keywords: particulate matter, nitrogen dioxide, climate change, pollution control
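The fuel-based estimation approach described above boils down to multiplying fuel throughput by per-litre emission factors. A minimal sketch, where both the factors and the litre totals are hypothetical placeholders rather than the study's survey results:

```python
# Hypothetical per-litre emission factors (kg of pollutant per litre of fuel);
# the factors actually used in the study are not given in the abstract.
EMISSION_FACTORS = {
    ("petrol", "CO2"): 2.31,
    ("diesel", "CO2"): 2.68,
    ("diesel", "NOx"): 0.04,
}

def fleet_emissions_kg(fuel_sold_litres, pollutant="CO2"):
    """Total mass of a pollutant emitted, summed over fuel types."""
    return sum(
        litres * EMISSION_FACTORS[(fuel, pollutant)]
        for fuel, litres in fuel_sold_litres.items()
        if (fuel, pollutant) in EMISSION_FACTORS
    )

# e.g. daily city-wide fuel sales extrapolated from the station survey:
daily_co2_kg = fleet_emissions_kg({"petrol": 1_000_000, "diesel": 400_000})
```

The per-vehicle-type breakdown reported in the abstract follows the same arithmetic, with fuel volumes attributed to each vehicle class from the driver interviews.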
Procedia PDF Downloads 12
1454 Synthesis of Ultra-Small Platinum, Palladium and Gold Nanoparticles by Electrochemically Active Biofilms and Their Enhanced Catalytic Activities
Authors: Elaf Ahmed, Shahid Rasul, Ohoud Alharbi, Peng Wang
Abstract:
Ultra-small nanoparticles (USNPs) of metals have attracted attention from the perspective of both basic and applied science in a wide range of fields. These NPs exhibit distinctive electrical, optical, magnetic and catalytic behaviour, and they are effective catalysts because of their enormously large surface area. Many chemical methods of synthesising USNPs have been reported; their drawback, however, is the use of capping agents and ligands during production, such as polyvinylpyrrolidone, thiols and ethylene glycol. In this research, ultra-small nanoparticles of gold, palladium and platinum have been successfully produced using an electrochemically active biofilm (EAB) after optimising the pH of the media. The production was conducted in a reactor using a simple two-step method. First, a biofilm of Shewanella loihica bacteria was grown on the surface of a carbon paper for 7 days. The biofilm was then employed to synthesise platinum, palladium and gold nanoparticles in water, using sodium lactate as electron donor, without any toxic chemicals and under mild operating conditions. The electrochemically active biofilm oxidises the electron donor and releases electrons into the solution; since these electrons are a strong reducing agent, they reduce the metal precursors effectively and quickly. The as-synthesized ultra-small nanoparticles have a size range of 2-7 nm and showed excellent catalytic activity in the degradation of methyl orange. The growth of metal USNPs is strongly related to the condition of the EAB: synthesis at low pH was unsuccessful, probably because acidic conditions damage the bacterial cells, whereas increasing the pH to 7 and 9 led to the successful formation of USNPs. By changing the pH value, we also observed a change in the size range of the produced NPs.
The EAB thus appears to act as a nanofactory for the synthesis of metal nanoparticles, offering a green, sustainable and toxin-free synthetic route that relies only on the bacterial respiration pathway, without the use of any capping agents or ligands.
Keywords: electrochemically active biofilm, electron donor, Shewanella loihica, ultra-small nanoparticles
Procedia PDF Downloads 192
1453 Characterization of Alloyed Grey Cast Iron Quenched and Tempered for a Smooth Roll Application
Authors: Mohamed Habireche, Nacer E. Bacha, Mohamed Djeghdjough
Abstract:
In the brick industry, the smooth double roll crusher is used for medium and fine crushing of soft to medium-hard material. Due to the opposite inward rotation of the rolls, the feed material is nipped between the rolls and crushed by compression. The rolls are subject to intense wear, known as three-body abrasion, caused by the action of abrasive products. Production downtime affecting productivity stems from two sources: the bi-monthly rectification of the roll crushers and their replacement when they are completely worn out. Choosing the right material for the roll crushers should result in longer machine cycles and reduced repair and maintenance costs. All roll crushers are imported from outside Algeria, which sometimes results in very long delivery times that handicap the brickyards, in particular in meeting deadlines and honouring customers' orders. The aim of this work is to investigate the effect of alloying additions on the microstructure and wear behavior of grey lamellar cast iron for smooth roll crushers in the brick industry. The base gray iron was melted in a low-frequency induction furnace at a temperature of 1500 °C, in which return cast iron scrap, new cast iron ingot, and steel scrap were added to the melt to obtain the desired composition. The chemical analysis of the bar samples was carried out using an Emission Spectrometer Systems PV 8050 Series (Philips), except for carbon, for which an Elementrac CS-i carbon/sulphur analyser was used. Unetched microstructure was used to evaluate the graphite flake morphology using the image comparison measurement method, with at least five different fields selected for quantitative estimation of phase constituents. The samples were observed at 100x magnification with a Zeiss Axiover T40 MAT optical microscope equipped with a digital camera, and an SEM microscope equipped with EDS was used to characterize the phases present in the microstructure.
The hardness (750 kg load, 5 mm diameter ball) was measured with a Brinell testing machine for both treated and as-solidified test pieces. The test bars were used for tensile strength and metallographic evaluations; mechanical properties were evaluated using tensile specimens made as per the ASTM E8 standard, with two specimens tested per alloy and one test piece taken from each rod. The results showed that the quenched and tempered alloys had the best wear resistance at 400 °C for the alloyed grey cast iron (containing 0.62% Mn, 0.68% Cr, and 1.09% Cu), due to fine carbides in the tempered matrix. In the quenched and tempered condition, increasing the Cu content in the cast irons moderately improved wear resistance, while the combined addition of Cu and Cr increased both hardness and wear resistance for a quenched and tempered hypoeutectic grey cast iron.
Keywords: casting, cast iron, microstructure, heat treating
Procedia PDF Downloads 104
1452 Lake of Neuchatel: Effect of Increasing Storm Events on Littoral Transport and Coastal Structures
Authors: Charlotte Dreger, Erik Bollaert
Abstract:
This paper presents two environmentally friendly coastal structures realized on the Lake of Neuchâtel. Both structures reflect current environmental concerns on the lake and have been strongly affected by extreme meteorological conditions between their design period and their actual operational period. The Lake of Neuchâtel is one of the largest Swiss lakes, measuring around 38 km in length and 8.2 km in width, with a maximum water depth of 152 m. Its particular topographical alignment, between the Swiss Plateau and the Jura mountains, combines strong winds and large fetch values, resulting in significant wave heights during storm events at both the north-east and south-west lake extremities. In addition, due to flooding concerns, lake levels were historically lowered by several meters during the Jura correction works of the 19th and 20th centuries. Hence, during storm events, continuous erosion of the vulnerable molasse shorelines and sand banks generates frequent and abundant littoral transport from the center of the lake to its extremities. This phenomenon not only disturbs the ecosystem, but also generates numerous problems for natural and man-made features located along the shorelines, such as reed plants, harbor entrances and canals. A first example is provided at the southwestern extremity, near the city of Yverdon, where an ensemble of 11 small islands, the Iles des Vernes, was artificially created to enhance biological conditions and food availability for bird species during migration, replacing two larger islands that were affected by a lack of morphodynamics and general vegetalization of their surfaces. The article presents the concept and dimensioning of these islands based on 2D numerical modelling, as well as their realization and follow-up campaigns.
In particular, the influence of several major storm events that occurred immediately after the works is pointed out. Second, a sediment retention dike is discussed at the northeastern extremity, at the entrance of the Canal de la Broye into the lake. This canal is heavily used for navigation and suffers from frequent and significant sedimentation at its outlet. The new coastal structure has been designed to minimize sediment deposits around the outlet of the canal into the lake by retaining the littoral transport during storm events. The article describes the basic assumptions used to design the dike, as well as the construction works and follow-up campaigns. It especially highlights the considerable influence of changing meteorological conditions on the littoral transport of the Lake of Neuchâtel since the project was designed ten years ago: not only are the intensity and frequency of storm events increasing, but the main wind directions are also shifting, affecting the efficiency of the coastal structure in retaining the sediments.
Keywords: meteorological evolution, sediment transport, Lake of Neuchâtel, numerical modelling, environmental measures
Procedia PDF Downloads 84
1451 Risk Factors Associated with Increased Emergency Department Visits and Hospital Admissions Among Child and Adolescent Patients
Authors: Lalanthica Yogendran, Manassa Hany, Saira Pasha, Benjamin Chaucer, Simarpreet Kaur, Christopher Janusz
Abstract:
Children and adolescent patients visit the psychiatric Emergency Department (ED) for multiple reasons. Visiting the psychiatric ED can itself be a traumatic experience that affects an adolescent's mental well-being, regardless of any history of mental illness; despite this, limited research exists in this domain. Prospective studies have correlated adverse psychosocial determinants among adolescents with risk factors for poor well-being and unfavorable behavioral outcomes, and physiological stress has been shown to contribute to the development of health problems and to increased substance abuse in adolescents. This study aimed to determine retrospectively which psychosocial factors are associated with an increase in psychiatric ED visits. The charts of 600 patients who had a psychiatric ED visit and inpatient admission from January 2014 through December 2014 were reviewed, and sociodemographics, diagnoses, ED visits and inpatient admissions were collected. Descriptive statistics, chi-square tests and independent t-test analyses were used to examine differences in the sample and to determine which factors affected ED visits and admissions. The sample was 50% female and 35.2% self-identified black, with a mean age of 13 years; 85% attended public school and 17% were in special education. Attention Deficit Hyperactivity Disorder was the most common admitting diagnosis, found in 132 (23%) patients, and most patients came from a single-parent household (305, 53%). The mean ages of patients who were sexually active, had legal issues, or reported marijuana abuse were 15, 14.35, and 15 years, respectively. Patients from two-biological-parent households had significantly fewer ED visits (1.2 vs. 1.7, p < 0.01) and admissions (0.09 vs. 0.26, p < 0.01). Among social factors, those who reported sexual, physical or emotional abuse had a significantly greater number of ED visits (2.1 vs. 1.5, p < 0.01) and admissions (0.61 vs. 0.14, p < 0.01) than those who did not. Patients who were sexually active, had legal issues, or abused marijuana had a significantly greater number of admissions (0.43 vs. 0.17, p < 0.01; 0.54 vs. 0.18, p < 0.01; and 0.46 vs. 0.18, p < 0.01, respectively). These data support the theory of the stability of a two-parent home: dual parenting plays a role in creating a safe space in which a child can develop, as shown by the associated decreases in psychiatric ED visits and admissions, and this may highlight the psychologically protective role of a two-parent household. Abuse can exacerbate existing psychiatric illness or initiate the onset of new disease. Substance abuse and legal issues result in early induction into the criminal justice system, and the results show that these factors are associated with an increased frequency of visits and severity of symptoms. Only marijuana, and not other illicit substances, correlated with a higher incidence of psychiatric ED visits, which may speak to the psychotropic nature of tetrahydrocannabinols and their role in mental illness. This study demonstrates the array of psychosocial factors that lead to increased ED visits and admissions in children and adolescents.
Keywords: adolescent, child psychiatry, emergency department, substance abuse
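The independent t-test used above to compare mean visit counts between groups can be sketched in a few lines; the sample vectors below are fabricated toy data, not the study's chart counts:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Toy example: ED visit counts for an "abuse reported" vs. "no abuse" group.
t = welch_t([2, 3, 2, 1, 3], [1, 2, 1, 1, 2])
```

The resulting t statistic is compared against the t distribution with Welch-Satterthwaite degrees of freedom to obtain the p-values quoted in the abstract.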
Procedia PDF Downloads 332
1450 Health Monitoring of Composite Pile Construction Using Fiber Bragg Gratings Sensor Arrays
Authors: B. Atli-Veltin, A. Vosteen, D. Megan, A. Jedynska, L. K. Cheng
Abstract:
Composite materials combine the advantages of being lightweight and possessing high strength, which is of particular interest for the development of large constructions, e.g., aircraft, space applications and wind turbines. One of the shortcomings of composite materials is the complex nature of their failure mechanisms, which makes it difficult to predict the remaining lifetime; condition and health monitoring are therefore essential when using composite material for critical parts of a construction. Different types of sensors are used or being developed to monitor composite structures, including ultrasonic, thermography, shearography and fiber optic sensors. The first three technologies are complex and mostly used for measurements in the laboratory or during maintenance of the construction. Optical fiber sensors can be surface-mounted or embedded in the composite construction, providing the unique advantage of in-operation measurement of mechanical strain and other parameters of interest; this is identified as a promising technology for Structural Health Monitoring (SHM) or Prognostic Health Monitoring (PHM) of composite constructions. Among the different fiber optic sensing technologies, the Fiber Bragg Grating (FBG) sensor is the most mature and widely used, and FBG sensors can be realized in an array configuration with many FBGs in a single optical fiber. In the current project, different aspects of using embedded FBGs for composite wind turbine monitoring are investigated. The activities are divided into two parts. Firstly, an FBG-embedded carbon composite laminate is subjected to tensile and bending loading to investigate the response of FBGs placed in different orientations with respect to the fiber. Secondly, the use of an FBG sensor array for temperature and strain sensing and monitoring of a 5 m long scale model of a glass fiber mono-pile is demonstrated. Two different FBG types are used: special in-house fibers and off-the-shelf ones.
The results from the first part of the study show that the FBG sensors survive the conditions during production of the laminate, and the test results from the tensile and bending experiments indicate that the sensors successfully respond to changes in strain. The measurements from the sensors will be correlated with the strain gauges placed on the surface of the laminates.
Keywords: Fiber Bragg Gratings, embedded sensors, health monitoring, wind turbine towers
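The strain reading behind an FBG's response follows the standard Bragg relation Δλ/λ₀ = (1 − pₑ)·ε at constant temperature. A minimal sketch, where the photo-elastic coefficient is an assumed textbook value for silica fibre, not a figure from the paper:

```python
def strain_from_bragg_shift(delta_lambda_nm, lambda0_nm, p_e=0.22):
    """Axial strain from a Bragg wavelength shift at constant temperature.

    Uses delta_lambda / lambda0 = (1 - p_e) * strain, where p_e ~ 0.22 is the
    effective photo-elastic coefficient of silica fibre (assumed value).
    """
    return (delta_lambda_nm / lambda0_nm) / (1.0 - p_e)

# A 1.2 nm shift on a 1550 nm grating corresponds to roughly 1000 microstrain,
# matching the common rule of thumb of ~1.2 pm per microstrain at 1550 nm.
strain = strain_from_bragg_shift(1.2, 1550.0)
```

In practice, temperature and strain both shift λ₀, which is why the mono-pile demonstration above senses both and why temperature-compensation gratings are commonly paired with strain gratings.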
Procedia PDF Downloads 242
1449 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are key factors in successful NPD. Team members interact and work closely through the different phases, yet challenge each other during the design phases to brainstorm ideas before converging to work together. These two traits require teams to exercise divergent and convergent thinking simultaneously, and a good balance between the two is needed. The team dynamics invariably result in conflicts among team members: while some amount of conflict (ideational conflict) is desirable in NPD teams for group creativity, relational conflicts (discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through a semantic analysis algorithm, Latent Semantic Analysis (LSA), to analyze the content of communication, and a semantic similarity analysis yields a social network graph depicting the communication among team members based on the content of their communication. The amount of communication (content, not frequency of communication) defines the interaction strength between members, yielding a social network adjacency matrix for the team. Standard social network analysis techniques based on the Adjacency Matrix (AM) and the Dichotomized Adjacency Matrix (DAM), dichotomized according to network density, yield network graphs and network metrics such as centrality.
The social network graphs are then rendered for visual representation using a Metric Multi-Dimensional Scaling (MMDS) algorithm for node placement, with arcs drawn connecting the nodes (representing team members). The distance between nodes in the placement represents the tie strength between members: stronger tie strengths render nodes closer. The overall visual representation of the social network graph thus provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme: the Central Member Pattern (CMP), the Subgroup and Aloof member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the level of conflict in the team. For instance, the isolate member pattern clearly points to a near breakdown in communication with the member, and hence a possibly high conflict level, whereas the subgroup or aloof member pattern points to a non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated with the real level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection and feedback form, and the results show a good correlation.
Keywords: team dynamics, team communication, team interactions, social network analysis (SNA), new product development (NPD), latent semantic analysis (LSA)
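The dichotomization and centrality steps described above can be sketched directly on a small weighted adjacency matrix. A minimal version, with the density threshold chosen here purely for illustration:

```python
def dichotomize(weights, keep_fraction=0.34):
    """Binary adjacency matrix keeping roughly the strongest `keep_fraction`
    of off-diagonal ties (a density-based cut, as used for the DAM)."""
    n = len(weights)
    offdiag = sorted(
        (weights[i][j] for i in range(n) for j in range(n) if i != j),
        reverse=True,
    )
    k = max(1, int(keep_fraction * len(offdiag)))
    threshold = offdiag[k - 1]
    return [
        [1 if i != j and weights[i][j] >= threshold else 0 for j in range(n)]
        for i in range(n)
    ]

def degree_centrality(adj):
    """Degree centrality of each node, normalized by the n-1 possible ties."""
    n = len(adj)
    return [sum(row) / (n - 1) for row in adj]

# Three members; member 2 communicates very little and surfaces as an isolate
# (centrality 0), the signature of the Isolate Member Pattern (IMP).
weights = [[0, 5, 1],
           [5, 0, 1],
           [1, 1, 0]]
centralities = degree_centrality(dichotomize(weights))   # [0.5, 0.5, 0.0]
```

A zero centrality after dichotomization is exactly how an isolate member becomes computationally detectable rather than only visible in the rendered graph.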
Procedia PDF Downloads 68
1448 Consideration for a Policy Change to the South African Collective Bargaining Process: A Reflection on National Union of Metalworkers of South Africa v Trenstar (Pty) (2023) 44 ILJ 1189 (CC)
Authors: Carlos Joel Tchawouo Mbiada
Abstract:
In the wake of the apartheid era, South Africa embarked on a democratisation of all its institutions, underpinned by a social justice perspective aimed at eradicating past injustices. These democratic values, based on fundamental human rights and equality, informed all rights enshrined in the Constitution of the Republic of South Africa, 1996. All rights are therefore infused with a social justice perspective, and labour rights are no exception. Labour law is regulated to the extent that it is viewed as too rigid, hence the call for more flexibility to enhance investment and boost job creation. This view, articulated by the Free Market Foundation, fell on deaf ears, as its opponents believe in what is termed regulated flexibility, which affords greater protection to vulnerable workers while promoting business opportunities and investment. The question this paper seeks to examine is how far the regulation of labour law will go to protect employees. The question is prompted by the recent Constitutional Court judgment in National Union of Metalworkers of South Africa v Trenstar, which barred the employer from using replacement labour in response to a strike by its employees. Whether employers may use replacement labour and have recourse to lock-outs in response to strike action is considered in the context of the dichotomy between the free market and social justice perspectives, which are at loggerheads in the South African collective bargaining process. With the unemployment rate constantly soaring, the aftermath of the COVID-19 pandemic, the effects of the war in Ukraine and, lately, the financial burden of load shedding on companies, this paper argues for a policy shift toward deregulation, or lesser state and judicial intervention.
Such an initiative would relieve the burden on companies of running a viable business while at the same time protecting existing jobs.
Keywords: labour law, replacement labour, right to strike, free market foundation perspective, social justice perspective
Procedia PDF Downloads 102
1447 Influence of Cryo-Grinding on Particle Size Distribution of Proso Millet Bran Fraction
Authors: Maja Benkovic, Dubravka Novotni, Bojana Voucko, Duska Curic, Damir Jezek, Nikolina Cukelj
Abstract:
Cryo-grinding is an ultra-fine grinding method used in the pharmaceutical industry, in the production of herbs and spices, and in the production and handling of cereals, owing to its ability to produce powders with small particle sizes that maintain their favorable bioactive profile. The aim of this study was to determine the particle size distributions of the proso millet (Panicum miliaceum) bran fraction ground at cryogenic temperature (using liquid nitrogen (LN₂) cooling, T = -196 °C), in comparison with non-cooled grinding. Proso millet bran is primarily used as animal feed but has potential in food applications, either as a substrate for the extraction of bioactive compounds or as a raw material in the bakery industry, and for both applications finer bran particle sizes could be beneficial. Thus, millet bran was ground for 2, 4, 8 and 12 minutes in a ball mill (CryoMill, Retsch GmbH, Haan, Germany) in three grinding modes: (I) without cooling, (II) at cryogenic temperature, and (III) at cryogenic temperature with an intermediate 1-minute cryo-cooling step after every 2 minutes of grinding, which is usually applied when samples require longer grinding times. The sample was placed in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm), and the oscillation frequency in all three modes was 30 Hz. Particle size distributions of the bran were determined by laser diffraction (Mastersizer 2000) using the Scirocco 2000 dry dispersion unit (Malvern Instruments, Malvern, UK). Three main effects of the grinding set-up were visible in the results. Firstly, grinding time in all three modes had a significant effect on all particle size parameters: d(0.1), d(0.5), d(0.9), D[3,2], D[4,3], span and specific surface area. Longer grinding times resulted in lower values of the above-listed parameters, e.g.
the average d(0.5) of the sample (229.57±1.46 µm) dropped to 51.29±1.28 µm after 2 minutes of grinding without LN₂, and further to 43.00±1.33 µm after 4 minutes of grinding without LN₂. The only exception was the sample ground for 12 minutes without cooling, where an increase in particle diameters occurred (d(0.5) = 62.85±2.20 µm), probably due to particles adhering to one another and forming larger clusters. Secondly, samples ground with LN₂ cooling exhibited smaller diameters than non-cooled samples: for example, after 8 minutes of non-cooled grinding d(0.5) = 46.97±1.05 µm was achieved, while LN₂ cooling enabled collection of particles with average sizes of d(0.5) = 18.57±0.18 µm. Thirdly, applying the intermediate cryo-cooling step resulted in particle diameters (d(0.5) = 15.83±0.36 µm after 12 minutes of grinding) similar to cryo-milling without this step (d(0.5) = 16.33±2.09 µm after 12 minutes of grinding), indicating that intermediate cooling is not necessary for the current application, which in turn reduces LN₂ consumption. These results point to the potential benefits of grinding millet bran at cryogenic temperatures. Further research will show whether the smaller particle sizes achieved, compared with non-cooled grinding, result in increased bioavailability of bioactive compounds, as well as improved protein digestibility and solubility of the dietary fibers of the proso millet bran fraction.
Keywords: ball mill, cryo-milling, particle size distribution, proso millet (Panicum miliaceum) bran
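The span parameter listed among the particle size metrics is conventionally computed from the three laser-diffraction percentiles as (d(0.9) − d(0.1)) / d(0.5). A minimal sketch, where the d(0.1) and d(0.9) values below are placeholders, since only d(0.5) values are quoted in the abstract:

```python
def span(d10, d50, d90):
    """Width of a particle size distribution: (d90 - d10) / d50, dimensionless."""
    return (d90 - d10) / d50

# Placeholder percentiles around the quoted d(0.5) = 229.57 um of the raw bran:
raw_bran_span = span(d10=80.0, d50=229.57, d90=520.0)
```

A smaller span indicates a narrower, more uniform distribution; the abstract reports that span, like the percentile diameters, decreased with grinding time.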
Procedia PDF Downloads 144
1446 Plethora of Drivers Transforming Colonial Cities: The Case of Allahabad
Authors: Akanksha Gupta, Vishal Dubey
Abstract:
In the neoliberal era, there has been much discourse about the urban issues that arise from the narrow, single-rationality, market-driven approach to planning in Indian cities. More to the point, India's urban planning is already jeopardized by a critical shortage of infrastructure and a cluster of incoherent governing bodies and implementation mechanisms, leaving cities mired in a plethora of urban challenges. In this context, Allahabad (now known as Prayagraj), a city in North India, is no exception. Once known as one of the most splendidly planned colonial cities of the British regime in India, it declined phenomenally because of the incompetent approach of the planning machinery, a straightforwardly market-driven accession, and a lack of attention to urban equity and sustainability. In particular, Civil Lines, a colonial neighbourhood that once stood at the zenith of the glorified legacy of the colonial era, has transformed into a filthy and congested urban form. Against this background, this study contemplates and assesses the chronological episodes of major changes in land management reforms and policies under an ad hoc approach to political economy and land use planning, which has radically degraded the present-day living environment. The study empirically showcases the selected sample area, detailing some of the major consequences in terms of gradual change in urban morphology, land use, and function. The method of study is primarily qualitative, employing oral history and other historical methods to exhibit the planning conundrum. This subsequently reflects repercussions translated into major issues such as unclear land titles, encroachment, unauthorized development, and the mushrooming of informal and squatter settlements. In a nutshell, the study seeks to single out the limitations of the land reform and land management policies, which contributed to the general degradation of the beautiful setting of this colonial neighbourhood.
The Colonial legacy of Civil Lines now exists only in the traces of history: the memories of people who once took pride in its serenity and have witnessed its transformation bit by bit as neoliberal market forces swallowed it completely.
Keywords: civil lines, land reforms, policies, urban challenges
Procedia PDF Downloads 116
1445 Building Information Modeling Acting as Protagonist and Link between the Virtual Environment and the Real-World for Efficiency in Building Production
Authors: Cristiane R. Magalhaes
Abstract:
Advances in Information and Communication Technologies (ICT) have led to changes in different sectors, particularly in the architecture, engineering, construction, and operation (AECO) industry. In this context, the advent of BIM (Building Information Modeling) has brought a number of opportunities to the digital architectural design process, introducing integrated design concepts that impact the development, elaboration, coordination, and management of ventures. The project scope has begun to contemplate, from its original stage, the third dimension by means of virtual environments (VEs) composed of models containing different specialties, substituting two-dimensional products. The possibility of simulating the construction process of a venture in a VE starts at the beginning of the design process, offering, through new technologies, many possibilities beyond geometric digital modeling. This is a significant change that relates not only to form but also to how information is appropriated in architectural and engineering models and exchanged among professionals. In order to achieve the main objective of this work, the Design Science Research Method will be adopted to elaborate an artifact containing strategies for the application and use of ICTs in BIM flows, from a pre-construction cut-off to the execution of the building. This article discusses and investigates how BIM can be extended to the construction site, acting as a protagonist and link between the virtual environment and the real world, as well as its contribution to the integration of the value chain and the consequent increase in efficiency in building production. The virtualization of the design process has reached high levels of development through the use of BIM. It is therefore essential that the lessons learned with the virtual models be transposed to actual building production, increasing precision and efficiency.
Thus, this paper discusses how the Fourth Industrial Revolution has impacted property development and how BIM could be the propellant, acting as the main fuel and link between the virtual environment and real production, for the structuring of flows, information management, and efficiency in this process. The results obtained are partial and not definitive as of the date of this publication. This research is part of a doctoral thesis in development, which focuses on the impact of digital transformation on the construction of residential buildings in Brazil.
Keywords: building information modeling, building production, digital transformation, ICT
Procedia PDF Downloads 120
1444 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentrations with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the seas around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as a database to capture the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. A backpropagation learning strategy was developed to train the deep learning model. The established method was tested and compared with the performance of the GOCI Data Processing System (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
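The two-stage pipeline above (learned feature extraction followed by a regressor fitted by backpropagation) can be sketched in miniature. This is a hypothetical toy, not the authors' GOCI model: the "images" are 1-D radiance profiles, the convolution kernel and data are invented, and a single linear neuron stands in for the ANN.

```python
# Toy sketch of the CNN-features + backprop-trained-regressor idea described
# above. All values and the kernel are illustrative assumptions; a real GOCI
# model would use a deep learning framework and 2-D multispectral imagery.

def extract_features(image, kernel=(0.25, 0.5, 0.25)):
    """1-D convolution followed by mean pooling: a toy stand-in for CNN features."""
    conv = [sum(image[i + j] * k for j, k in enumerate(kernel))
            for i in range(len(image) - len(kernel) + 1)]
    return sum(conv) / len(conv)

def train_regressor(samples, targets, lr=0.01, epochs=500):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, targets):
            err = (w * x + b) - y
            w -= lr * err * x   # backpropagated gradient w.r.t. w
            b -= lr * err       # gradient w.r.t. b
    return w, b

# Invented per-pixel radiance profiles and matching algae concentrations (mg/m3)
images = [[1, 2, 3, 2, 1], [2, 4, 6, 4, 2], [3, 6, 9, 6, 3]]
concentrations = [1.0, 2.0, 3.0]
features = [extract_features(img) for img in images]
w, b = train_regressor(features, concentrations)
prediction = w * extract_features([2, 4, 6, 4, 2]) + b
```

Since the toy data are exactly linear in the pooled feature, the regressor converges to a near-exact fit; the point is only to make the feature-extraction/regression split concrete.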
Procedia PDF Downloads 182
1443 Feasibility Study for Implementation of Geothermal Energy Technology as a Means of Thermal Energy Supply for Medium Size Community Building
Authors: Sreto Boljevic
Abstract:
Heating systems based on geothermal energy sources are becoming increasingly popular in commercial and community buildings as their management looks for more efficient and environmentally friendly ways to run the heating system. At present, the thermal energy supply of most European commercial/community buildings is provided mainly by energy extracted from natural gas. In order to reduce greenhouse gas emissions and achieve the climate change targets set by the EU, restructuring of the thermal energy supply is essential. Heating and cooling currently account for approximately 50% of the EU primary energy supply. Due to its physical characteristics, thermal energy cannot be distributed or exchanged over long distances, contrary to the electricity and gas energy carriers. Compared to the electricity and gas sectors, heating remains largely a black box, with large unknowns for researchers and policymakers. In the literature, a number of documents address policies for promoting renewable energy technology to facilitate heating for residential/community/commercial buildings and assess the balance between heat supply and heat savings. Ground source heat pump (GSHP) technology has been an extremely attractive alternative to the traditional electric and fossil fuel space heating equipment used to supply thermal energy for such buildings. The main purpose of this paper is to create an algorithm, using an analytical approach, that enables a feasibility study of the implementation of GSHP technology in a community building with an existing fossil-fueled heating system. The main results obtained by the algorithm will enable building management and GSHP system designers to define the optimal size of the system with regard to the technical, environmental, and economic impacts of its implementation, including the payback period. In addition, the algorithm is designed to be utilized for feasibility studies of many different types of buildings.
The algorithm is tested on a building built in 1930 and used as a church in Cork city. The heating of the building is currently provided by a 105 kW gas boiler.
Keywords: GSHP, greenhouse gas emission, low-enthalpy, renewable energy
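The economic core of such a feasibility algorithm reduces to comparing annual running costs of the existing boiler against a GSHP and dividing the capital cost by the saving. A minimal sketch follows; the efficiencies, tariffs, full-load hours, and capital cost are illustrative assumptions, not figures from the study, and the 105 kW rating is borrowed from the Cork church example only for scale.

```python
# Minimal sketch of a simple-payback calculation for replacing a gas boiler
# with a GSHP. All numeric inputs below are assumptions for illustration.

def annual_heat_demand(kw_rating, full_load_hours):
    """Annual heat delivered (kWh) from boiler rating and equivalent full-load hours."""
    return kw_rating * full_load_hours

def payback_years(capital_cost, gas_cost_kwh, elec_cost_kwh,
                  boiler_efficiency, gshp_cop, heat_kwh):
    """Simple payback period: capital cost divided by the annual running-cost saving."""
    gas_running_cost = heat_kwh / boiler_efficiency * gas_cost_kwh
    gshp_running_cost = heat_kwh / gshp_cop * elec_cost_kwh
    annual_saving = gas_running_cost - gshp_running_cost
    return capital_cost / annual_saving

# Illustrative case loosely scaled to the 105 kW boiler mentioned above.
heat = annual_heat_demand(105, 1500)  # 157,500 kWh/year of delivered heat
years = payback_years(capital_cost=90000, gas_cost_kwh=0.08,
                      elec_cost_kwh=0.25, boiler_efficiency=0.85,
                      gshp_cop=4.0, heat_kwh=heat)
```

A real feasibility algorithm of the kind described would add ground-loop sizing, emissions accounting, and discounted cash flows; the simple payback above is only the starting point.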
Procedia PDF Downloads 218
1442 Fostering Organizational Learning across the Canadian Sport System through Leadership and Mentorship Development of Sport Science Leaders
Authors: Jennifer Walinga, Samantha Heron
Abstract:
The goal of the study was to inform the design of effective leadership and mentorship development programming for sport science leaders within the network of Canadian sport institutes and centers. The LEAD (Learn, Engage, Accelerate, Develop) program was implemented to equip sport science leaders with the leadership knowledge, skills, and practice to foster a high-performance culture, enhance the daily training environment, and contribute to optimal performance in sport. After two years of delivery, this analysis of LEAD's effect on individual and organizational health and performance factors informs the quality of future deliveries and identifies best practices for leadership development across the Canadian sport system and beyond. A larger goal of this project was to inform the public sector more broadly and position sport as a source of best practice for human and social health, development, and performance. The objectives of this study were to review and refine the LEAD program in collaboration with Canadian Sport Institute and Centre leaders, 40-50 participants from three cohorts, and the LEAD program advisory committee, and to trace the effects of the LEAD leadership development program on key leadership, mentorship, and organizational health indicators across the Canadian sport institutes and centers so as to capture best practice. The study followed a participatory action research (PAR) framework, using semi-structured interviews with sport scientist participants and program and institute leaders, inquiring into the program's impact on specific individual and organizational health and performance factors. Findings included a strong increase in self-reported leadership knowledge, skill, language, and confidence; enhancement of human and organizational health factors; and the opportunity to explore more deeply issues of diversity and inclusion, psychological safety, team dynamics, and performance management.
The study was significant in building sport leadership and mentorship development strategies for managing change efforts, addressing inequalities, and building personal and operational resilience amidst challenges of uncertainty, pressure, and constraint in real time.
Keywords: sport leadership, sport science leader, leadership development, professional development, sport education, mentorship
Procedia PDF Downloads 20
1441 Ground Water Pollution Investigation around Çorum Stream Basin in Turkey
Authors: Halil Bas, Unal Demiray, Sukru Dursun
Abstract:
Water and groundwater pollution is an important problem in most countries. Investigation of water pollution sources must be carried out to protect fresh water, because freshwater sources are very limited and existing sources are not enough for the world's increasing population. In this study, an investigation was carried out into the pollution factors affecting the quality of the groundwater in the Çorum Stream Basin in Turkey. The effect of the geological structure of the region and the interaction between the stream and the groundwater were researched. Stream and groundwater sampling was performed in the rainy and dry seasons to see whether there is a seasonal change in quality parameters. The results were evaluated with computer programs, and graphics and distribution maps were then prepared in an effort to understand the degree of quality and pollution. According to the analysis results, because the results for the streams and the groundwater are not close to each other, we can say that there is no interaction between the stream and the groundwater. As irrigation water, the stream waters generally fall in the C3S1 region, and the groundwaters generally in the C3S1 to C4S2 regions, of the US Salinity Laboratory diagram. According to the Wilcox diagram, stream waters are generally of the good to permissible type, while groundwaters are of the good to permissible, doubtful to unsuitable, and unsuitable types; groundwaters fall especially in the doubtful to unsuitable and unsuitable types in the dry season, which may be attributed to a relative increase in the concentration of salt minerals. In particular, samples from groundwater wells bored close to gypsum-bearing units have high hardness, electrical conductivity, and salinity values; these waters are therefore determined to be unsuitable for drinking and irrigation.
As a result of these studies, it is understood that the groundwater was affected by lithological contamination rather than by anthropogenic or other types of pollution. Because the alluvium is covered by silt and clay lithology, it is not affected by anthropogenic or other external factors. The results for the solid waste disposal site leachate indicate that this site poses a potential pollution risk for the future. Although the parameters did not exceed the maximum dangerous values, this does not mean that they will not be dangerous in the future, and this must be taken into account.
Keywords: Çorum, environment, groundwater, hydrogeology, geology, pollution, quality, stream
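The C/S labels used above come from plotting electrical conductivity against the sodium adsorption ratio (SAR) on the US Salinity Laboratory diagram. A hedged sketch of that classification follows: the EC breakpoints are the standard ones (in µS/cm), but the SAR breakpoints are simplified to fixed thresholds, whereas the real diagram makes them slope with EC, and the sample values are invented.

```python
# Sketch of US Salinity Laboratory irrigation-water classes. EC class
# boundaries follow the standard scheme; the S-class thresholds are a
# fixed-value simplification of the diagram's sloping boundaries.

def salinity_class(ec_us_cm):
    """C1 (low) to C4 (very high) salinity hazard from electrical conductivity."""
    if ec_us_cm < 250:
        return "C1"
    if ec_us_cm < 750:
        return "C2"
    if ec_us_cm < 2250:
        return "C3"
    return "C4"

def sodium_class(sar):
    """S1 (low) to S4 (very high) sodium hazard; fixed thresholds for brevity."""
    if sar < 10:
        return "S1"
    if sar < 18:
        return "S2"
    if sar < 26:
        return "S3"
    return "S4"

def classify(ec_us_cm, sar):
    return salinity_class(ec_us_cm) + sodium_class(sar)

# Illustrative samples: a stream-like water and a dry-season well water
stream_sample = classify(1200, 4)   # lands in C3S1, like the stream waters above
well_sample = classify(3000, 15)    # lands in C4S2, like some groundwaters above
```

This makes explicit why a dry-season rise in dissolved salts pushes a sample from C3 toward C4 even when its sodium hazard class barely moves.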
Procedia PDF Downloads 499
1440 Proposals for the Practical Implementation of the Biological Monitoring of Occupational Exposure for Antineoplastic Drugs
Authors: Mireille Canal-Raffin, Nadege Lepage, Antoine Villa
Abstract:
Context: Most antineoplastic drugs (AD) have potential carcinogenic, mutagenic, and/or reprotoxic effects and are classified as 'hazardous to handle' by the National Institute for Occupational Safety and Health. Their handling increases as cancer incidence rises. AD contamination of workers who handle AD and/or care for treated patients is, therefore, a major concern for occupational physicians. As part of the process of evaluating and preventing chemical risks for professionals exposed to AD, Biological Monitoring of Occupational Exposure (BMOE) is the tool of choice. BMOE allows identification of at-risk groups, monitoring of exposures, assessment of poorly controlled exposures and of the effectiveness and/or wearing of protective equipment, and documentation of occupational AD exposure incidents. This work aims to make proposals for the practical implementation of BMOE for AD. The proposed strategy is based on the French good practice recommendations for BMOE, issued in 2016 by three French learned societies and adapted here to occupational exposure to AD. Results: AD contamination of professionals is a sensitive topic, and BMOE requires the establishment of a working group and information meetings within the concerned health establishment to explain the approach, objectives, and purpose of monitoring. Occupational exposure to AD is often discontinuous, and two steps are essential upstream: a study of the nature and frequency of the AD used, to select the Biological Exposure Indices (BEI) most representative of the activity; and a study of the AD pathway in the institution, to target exposed professionals and to adapt the medico-professional information sheet (MPIS). The MPIS is essential for gathering the elements necessary to interpret results. Currently, 28 specific urinary BEIs of AD exposure have been identified, and the corresponding analytical methods have been published: 11 BEIs are AD metabolites, and 17 are the ADs themselves.
Results are interpreted by groups of homogeneous exposure (GHE). There is no threshold biological limit value for interpretation. Contamination is established when an AD is detected at a trace concentration or at a urine concentration equal to or greater than the limit of quantification (LOQ) of the analytical method. Results can only be compared to the LOQs of these methods, which must be as low as possible. For 8 of the 17 AD BEIs, the LOQ is very low, with values between 0.01 and 0.05 µg/L; for the other BEIs, the LOQ values are higher, between 0.1 and 30 µg/L. Results should be communicated by occupational physicians to workers both individually and collectively. Given the dangerousness of AD, corrective measures must be put in place in cases of worker contamination. In addition, implementing prevention and awareness measures for those exposed to this risk is a priority. Conclusion: This work is an aid for occupational physicians engaging in a process of preventing occupational risks related to AD exposure. With the current analytical tools, which are effective and available, BMOE for AD should now be possible to develop in routine occupational practice. BMOE may be complemented by surface sampling to determine how workers become contaminated.
Keywords: antineoplastic drugs, urine, occupational exposure, biological monitoring of occupational exposure, biological exposure indice
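The interpretation rule described above (no biological limit value; contamination flagged whenever a urinary concentration reaches the method's LOQ) can be sketched as a simple lookup. The drug names and LOQ values below are illustrative assumptions chosen to span the 0.01-0.05 µg/L and 0.1-30 µg/L ranges mentioned, not the published BEIs.

```python
# Hedged sketch of BMOE result interpretation: a measured AD counts as
# contamination when its urinary concentration is at or above the method's
# limit of quantification (LOQ). Drug names and LOQs are hypothetical.

LOQ_UG_PER_L = {
    "drug_low_loq": 0.05,    # stands in for one of the 8 low-LOQ methods
    "drug_high_loq": 5.0,    # stands in for one of the higher-LOQ methods
}

def interpret(results_ug_per_l):
    """Flag each measured AD whose concentration reaches the method's LOQ."""
    return {drug: conc >= LOQ_UG_PER_L[drug]
            for drug, conc in results_ug_per_l.items()}

flags = interpret({"drug_low_loq": 0.08, "drug_high_loq": 1.2})
# 0.08 reaches the 0.05 LOQ (contamination); 1.2 is below the 5.0 LOQ
```

The example also shows why the abstract insists on low LOQs: the same 1.2 µg/L result would be flagged by a low-LOQ method but is invisible to a method quantifying only above 5 µg/L.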
Procedia PDF Downloads 135