Search results for: groundwater flow and contaminant transport modeling
836 Lung Function, Urinary Heavy Metals and Its Other Influencing Factors among the Community in Klang Valley
Authors: Ammar Amsyar Abdul Haddi, Mohd Hasni Jaafar
Abstract:
Heavy metals are elements naturally present in the environment that can cause adverse health effects, but little literature was found on their effects on lung function, where impairment of lung function may lead to various lung diseases. The objective of the study is to explore lung function impairment, urinary heavy metal levels, and their associated factors among the community in Klang Valley, Malaysia. Sampling was done in Kuala Lumpur suburban public and housing areas during community events from March 2019 to October 2019. Respondents who gave consent answered a questionnaire and then underwent a lung function test. Urine samples were obtained at the end of the session and sent for inductively coupled plasma mass spectrometry (ICP-MS) analysis of heavy metal cadmium (Cd) and lead (Pb) concentrations. A total of 200 samples were analysed; 52% of respondents were male, with ages ranging from 18 to 74 years and a mean age of 38.44. Urinary samples show that 12% of respondents (n=22) had a Cd level above average, and 1.5% of respondents (n=3) had urinary Pb at an above-normal level. Bivariate analysis shows a positive correlation between urinary Cd and urinary Pb (r=0.309; p<0.001). Furthermore, there was a negative correlation between urinary Cd level and forced vital capacity (FVC) (r=-0.202, p=0.004), forced expiratory volume in 1 second (FEV1) (r=-0.225, p=0.001), and forced expiratory flow between 25-75% of FVC (FEF25%-75%) (r=-0.187, p=0.008). However, urinary Pb did not show any association with FVC, FEV1, FEV1/FVC, or FEF25%-75%. Multiple linear regression analysis shows that urinary Cd remained significant and negatively affected FVC% (p=0.025) and FEV1% (p=0.004) of the predicted values. In addition, other factors such as education level (p=0.013) and duration of smoking (p=0.003) may influence both urinary Cd and lung function performance, suggesting Cd as a potential mediating factor between smoking and impairment of lung function. However, no interaction was detected between the heavy metals or other influencing factors in this study. In short, a negative linear relationship was detected between urinary Cd and lung function, and urinary Cd is likely to affect lung function in a restrictive pattern. Since smoking is also an influencing factor for urinary Cd and lung function impairment, it is highly suggested that smokers be screened for lung function and urinary Cd levels in the future for early disease prevention.
Keywords: lung function, heavy metals, community
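The correlation and regression workflow described above can be sketched in a few lines; this is only an illustration of the analysis steps, assuming hypothetical column and file names (urinary_cd, fvc_pct, education, smoking_years), not the authors' dataset or code.

```python
# Sketch of the bivariate correlation and multiple linear regression steps.
# Column and file names are hypothetical, not the authors' actual data.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("klang_valley_survey.csv")  # assumed file name

# Bivariate (Pearson) correlation, e.g. urinary Cd vs FVC% predicted
r, p = stats.pearsonr(df["urinary_cd"], df["fvc_pct"])
print(f"urinary Cd vs FVC%: r={r:.3f}, p={p:.3f}")

# Multiple linear regression: FVC% predicted from urinary Cd plus covariates
X = sm.add_constant(df[["urinary_cd", "education", "smoking_years"]])
print(sm.OLS(df["fvc_pct"], X).fit().summary())
```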
Procedia PDF Downloads 156
835 Scanning Transmission Electron Microscopic Analysis of Gamma Ray Exposed Perovskite Solar Cells
Authors: Aleksandra Boldyreva, Alexander Golubnichiy, Artem Abakumov
Abstract:
Various perovskite materials have surprisingly high resistance towards high-energy electrons, protons, and hard ionization, such as X-rays and gamma-rays. Superior radiation hardness makes the family of perovskite semiconductors an attractive candidate for single- and multijunction solar cells for the space environment and as X-ray and gamma-ray detectors. One of the methods to study the radiation hardness of different materials is by exposing them to gamma photons with high energies (above 500 keV). Herein, we have explored the recombination dynamics and defect concentration of a mixed cation, mixed halide perovskite Cs0.17FA0.83PbI1.8Br1.2 with a 1.74 eV bandgap after exposure to a gamma-ray source (2.5 Gy/min). We performed an advanced STEM EDX analysis to reveal different types of defects formed during gamma exposure. It was found that a 10 kGy dose results in a significant improvement of perovskite crystallinity and a homogeneous distribution of I ions. While the absorber layer withstood gamma exposure, the hole transport layer (PTAA) as well as the indium tin oxide (ITO) were significantly damaged, which increased the interface recombination rate and reduced the fill factor in solar cells. Thus, STEM analysis is a powerful technique that can reveal defects formed by gamma exposure in perovskite solar cells. Methods: Data will be collected from perovskite solar cells (PSCs) and thin films exposed to gamma irradiation. For thin films, 50 μL of the Cs0.17FA0.83PbI1.8Br1.2 solution in DMF was deposited (dynamically) at 3000 rpm, followed by quenching with 100 μL of ethyl acetate (dropped 10 s after the perovskite precursor) applied at the same spin-coating frequency. The deposited Cs0.17FA0.83PbI1.8Br1.2 films were annealed for 10 min at 100 °C, which led to the development of a dark brown color. For the solar cells, a 10% suspension of SnO2 nanoparticles (Alfa Aesar) was deposited at 4000 rpm, followed by annealing in air at 170 ˚C for 20 min. Next, samples were introduced into a nitrogen glovebox for the deposition of all remaining layers. The perovskite film was applied in the same way as for the thin films described earlier. A solution of poly-triaryl amine PTAA (Sigma Aldrich) (4 mg in chlorobenzene) was applied at 1000 rpm atop the perovskite layer. Next, 30 nm of VOx was deposited atop the PTAA layer on the whole sample surface using the physical vapor deposition (PVD) technique. Silver electrodes (100 nm) were evaporated in a high vacuum (10⁻⁶ mbar) through a shadow mask, defining the active area of each device as ~0.16 cm². The prepared samples (thin films and solar cells) were packed in Al lamination foil inside the argon glove box. The set of samples consisted of 6 thin films and 6 solar cells, which were exposed to 6, 10, and 21 kGy (2 samples per dose) with a 137Cs gamma-ray source (E = 662 keV) at a dose rate of 2.5 Gy/min. The exposed samples will be studied with a focused ion beam (FIB) on a dual-beam scanning electron microscope from ThermoFisher, the Helios G4 Plasma FIB Uxe, operating with a xenon plasma.
Keywords: perovskite solar cells, transmission electron microscopy, radiation hardness, gamma irradiation
Procedia PDF Downloads 24
834 Liquid Unloading of Wells with Scaled Perforation via Batch Foamers
Authors: Erwin Chan, Aravind Subramaniyan, Siti Abdullah Fatehah, Steve Lian Kuling
Abstract:
Foam assisted lift technology is proven across the industry to provide efficient deliquification in gas wells. Such deliquification is typically achieved by delivering the foamer chemical downhole via capillary strings. In highly liquid-loaded wells where capillary strings are not readily available, foamer can be delivered via batch injection or bull-heading. The latter techniques differ from the former in that capillary strings allow liquid to be unloaded continuously, whereas foamer batches require periodic batching for the liquid to be unloaded. Although batch injection allows liquid to be unloaded in wells with a suitable water to gas ratio (WGR) and condensate to gas ratio (CGR) without well intervention for capillary string installation, this technique comes with its own set of challenges - for foamer to de-liquify liquids, the chemical needs to reach perforation locations where gas bubbling is observed. In highly scaled perforation zones in certain wells, foamer delivered in batches is unable to reach the gas bubbling zone, thus achieving poor lift efficiency. This paper aims to discuss the techniques and challenges for unloading liquid via batch injection in scaled perforation wells X and Y, whose WGR is 6 bbl/MMscf, whose scale build-up is observed at the bottom of the perforation interval, whose water column is 400 feet, and whose ‘bubbling zone’ is less than 100 feet. Variables such as foamer Z dosage, batching technique, and well flow control valve opening times were manipulated during the trial to achieve maximum liquid unloading and gas rates. During the field trial, the team found optimal values among the three aforementioned parameters for the best unloading results, in which each cycle’s gas and liquid rates are compared with baselines at similar flowing tubing head pressures (FTHP). It was discovered that, amongst other factors, a good agitation technique is a primary determinant of efficient liquid unloading. An average increment of 2 MMscf/d against an average production of 4 MMscf/d at stable FTHP was recorded during the trial.
Keywords: foam, foamer, gas lift, liquid unloading, scale, batch injection
Procedia PDF Downloads 184
833 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are the two U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. During the third phase of the research, machine learning models like the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the model's performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as metrics. The goal is to select an optimization strategy with the fewest errors, lowest cost, greatest productivity, or maximum potential results. Optimization is widely employed in a variety of industries, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it shows an important improvement in the model's predictive accuracy. The predictive models with the Huber Regressor as the foundation perform the best for optimization as well as prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
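As a rough illustration of the feature-selection stage described above, the sketch below wraps a genetic-algorithm search over feature subsets around a Huber regressor scored by validation RMSE; the synthetic data, GA settings, and scoring split are assumptions, and this is not the authors' pipeline (a Harmony Search variant would use the same wrapper with a different search rule).

```python
# Illustrative sketch (not the authors' code): genetic-algorithm feature
# selection wrapped around a Huber regressor, scored by RMSE on a validation
# split. Feature/target data and GA settings are hypothetical.
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                               # stand-in climate features
y = X[:, 0] * 3 + X[:, 3] - X[:, 7] + rng.normal(size=500)   # weekly-cases proxy
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(mask):
    """Negative validation RMSE of a Huber regressor on the selected features."""
    if not mask.any():
        return -np.inf
    model = HuberRegressor().fit(X_tr[:, mask], y_tr)
    rmse = mean_squared_error(y_va, model.predict(X_va[:, mask])) ** 0.5
    return -rmse

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)  # population of feature masks
for _ in range(30):                                           # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the best half
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
        flip = rng.random(X.shape[1]) < 0.1                   # mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected feature indices:", np.flatnonzero(best))
```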
Procedia PDF Downloads 65
832 The Kindergarten as a Multicultural Workplace
Authors: Monika Haanpää
Abstract:
Well-functioning workplaces are often characterized by good co-operation, an adequate flow of information, open interaction between workers and a supportive work environment. The workplace is a mosaic of human personalities, and the influx of people who speak different languages and who are from different cultural backgrounds may bring about new challenges and enrich this environment. However, this influx of people could also pose a problem, as the adaptation of immigrant people to new terms of work may depend heavily on the level of language skills, the stage of culture shock, professional identity, and personality. Migration is not a rare phenomenon in Finland anymore; nobody is surprised to see people from different countries and different backgrounds in the schools, on the streets or in shops. However, this does not mean that immigration is an easy process for people coming from other countries. The experience of workers with diverse language and cultural backgrounds has rarely been researched, particularly from the superior's point of view. In addition, the vast majority of researchers have paid more attention to multicultural kindergartens in terms of immigrant children and their families. Hence, there is a need to show the problem which exists in the recruitment of the increasing number of workers who come from different countries. Opinions about kindergartens as multicultural workplaces have been gathered through interviews with immigrant workers responsible for education. In addition, a questionnaire for native Finnish workers and superiors in kindergartens was carried out. The collected material has been analyzed qualitatively, focusing on topics such as: the kindergarten as a multicultural workplace, factors influencing the career success of workers with diverse language and cultural backgrounds, social relations in multicultural workplaces and teachers’ changing professional identity. The results of the research provided a novel perspective on the multicultural workplace and emphasized the dependency of immigrant workers on Finnish language skills, which affects their professional success. In addition, they showed good relations between immigrant workers and their native Finnish co-workers and superiors. The results also illustrate why writing skills in Finnish are so important in kindergartens. Part of the investigation also questions some results of the research, i.e., which is more important in the kindergarten as a multicultural workplace: personality, good professional skills or good language skills.
Keywords: kindergarten, multicultural workplace, social relations at work, work satisfaction
Procedia PDF Downloads 271
831 Characteristics-Based LQ-Control of Cracking Reactor by Integral Reinforcement
Authors: Jana Abu Ahmada, Zaineb Mohamed, Ilyasse Aksikas
Abstract:
The linear quadratic control of hyperbolic first-order partial differential equations (PDEs) is presented. The aim of this research is to control chemical reactions. This is achieved by converting the PDE system to ordinary differential equations (ODEs) using the method of characteristics, so that the reduced system can be controlled using integral reinforcement learning. The designed controller is applied to a catalytic cracking reactor. Background—Transport-reaction systems cover a large class of chemical and bio-chemical processes. They are best described by nonlinear PDEs derived from mass and energy balances. The main application considered in this work is the catalytic cracking reactor. Indeed, the cracking reactor is widely used to convert high-boiling, high-molecular-weight hydrocarbon fractions of petroleum crude oils into more valuable gasoline, olefinic gases, and others. On the other hand, control of PDE systems is an important and rich area of research. One of the main control techniques is feedback control. This type of control utilizes information coming from the system to correct its trajectories and drive it to a desired state. Moreover, feedback control rejects disturbances and reduces the effects of variation in the plant parameters. Linear-quadratic control is a feedback control since the developed optimal input is expressed as feedback on the system state to exponentially stabilize and drive a linear plant to the steady state while minimizing a cost criterion. The integral reinforcement learning policy iteration technique is a strong method that solves the linear quadratic regulator problem for continuous-time systems online in real time, using only partial information about the system dynamics (i.e., the drift dynamics A of the system need not be known), and without requiring measurements of the state derivative. This is, in effect, a direct (i.e., no system identification procedure is employed) adaptive control scheme for partially unknown linear systems that converges to the optimal control solution. Contribution—The goal of this research is to develop a characteristics-based optimal controller for a class of hyperbolic PDEs and apply the developed controller to a catalytic cracking reactor model. In the first part, an algorithm to control a class of hyperbolic PDE systems is developed. The method of characteristics is employed to convert the PDE system into a system of ODEs. Then, the control problem is solved along the characteristic curves. The reinforcement technique is implemented to find the state-feedback matrix. In the second part, the developed algorithm is applied to the important application of a catalytic cracking reactor. The main objective is to use the inlet fraction of gas oil as a manipulated variable to drive the process state towards desired trajectories. The outcome of this challenging research would yield the potential to provide a significant technological innovation for the gas industries, since the catalytic cracking reactor is one of the most important conversion processes in petroleum refineries.
Keywords: PDEs, reinforcement iteration, method of characteristics, Riccati equation, cracking reactor
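To make the LQR/Riccati connection concrete, the sketch below runs a model-based policy iteration (Kleinman's algorithm) that converges to the same Riccati solution the integral reinforcement learning scheme targets; the IRL variant would estimate the value matrix P from trajectory data by least squares instead of solving a Lyapunov equation with a known A. The matrices here are arbitrary placeholders, not the cracking reactor model.

```python
# Illustrative sketch (not the authors' code): Kleinman policy iteration for the
# continuous-time LQR problem that the integral-reinforcement scheme approximates.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # placeholder dynamics (stable, so K=0 is admissible)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))                        # initial stabilizing gain
for _ in range(20):                         # policy iteration
    Ak = A - B @ K
    # Policy evaluation: Ak' P + P Ak + Q + K' R K = 0
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K = R^{-1} B' P
    K = np.linalg.solve(R, B.T @ P)

P_are = solve_continuous_are(A, B, Q, R)    # direct Riccati solution for comparison
print("policy-iteration P:\n", P)
print("Riccati P:\n", P_are)
```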
Procedia PDF Downloads 91
830 Is Materiality Determination the Key to Integrating Corporate Sustainability and Maximising Value?
Authors: Ruth Hegarty, Noel Connaughton
Abstract:
Sustainability reporting has become a priority for many global multinational companies. This is associated with ever-increasing expectations from key stakeholders for companies to be transparent about their strategies, activities and management with regard to sustainability issues. The Global Reporting Initiative (GRI) encourages reporters to only provide information on the issues that are really critical in order to achieve the organisation’s goals for sustainability and manage its impact on the environment and society. A key challenge for most reporting organisations is how to identify relevant issues for sustainability reporting and prioritise those material issues in accordance with company and stakeholder needs. A recent study indicates that most of the largest companies listed on the world’s stock exchanges are failing to provide data on key sustainability indicators such as employee turnover, energy, greenhouse gas emissions (GHGs), injury rate, pay equity, waste and water. This paper takes an in-depth look at the approaches used by a select number of international sustainability leader corporates to identify key sustainability issues. The research methodology involves performing a detailed analysis of the sustainability report content of up to 50 companies listed on the 2014 Dow Jones Sustainability Indices (DJSI). The most recent sustainability report content found on the GRI Sustainability Disclosure Database is then compared with 91 GRI Specific Standard Disclosures and a small number of GRI Standard Disclosures. Preliminary research indicates significant gaps in the information disclosed in corporate sustainability reports versus the indicator content specified in the GRI Content Index. The following outlines some of the key findings to date: Most companies made a partial disclosure with regard to the Economic indicators of climate change risks and infrastructure investments, but did not focus on the associated negative impacts. The top Environmental indicators disclosed were energy consumption and reductions, GHG emissions, water withdrawals, waste and compliance. The lowest rates of indicator disclosure included biodiversity, water discharge, mitigation of environmental impacts of products and services, transport, environmental investments, screening of new suppliers and supply chain impacts. The top Social indicators disclosed were new employee hires, rates of injury, freedom of association in operations, child labour and forced labour. Lower disclosure rates were reported for employee training, composition of governance bodies and employees, political contributions, corruption and fines for non-compliance. The reporting on most other Social indicators was found to be poor. In addition, most companies give only a brief explanation of how material issues are defined, identified and ranked. Data on the identification of key stakeholders and the degree and nature of engagement for determining issues and their weightings are also lacking. Generally, little to no data is provided on the algorithms used to score an issue. Research indicates that most companies lack a rigorous and thorough methodology to systematically determine the material issues of sustainability reporting in accordance with company and stakeholder needs.
Keywords: identification of key stakeholders, material issues, sustainability reporting, transparency
Procedia PDF Downloads 306
829 A Study on the Magnetic and Submarine Geology Structure of TA22 Seamount in Lau Basin, Tonga
Authors: Soon Young Choi, Chan Hwan Kim, Chan Hong Park, Hyung Rae Kim, Myoung Hoon Lee, Hyeon-Yeong Park
Abstract:
We performed a marine magnetic, bathymetry and seismic survey at the TA22 seamount (in the Lau Basin, SW Pacific) in October 2009 to search for submarine hydrothermal deposits. We acquired magnetic and bathymetry data sets by using an Overhauser proton magnetometer, SeaSPY (Marine Magnetics Co.), and a multi-beam echo sounder, EM120 (Kongsberg Co.). We conducted data processing to obtain detailed seabed topography, magnetic anomaly, reduction to the pole (RTP) and magnetization. Based on the magnetic property results, we analyzed the submarine geological structure of the TA22 seamount together with the post-processed seismic profile. The detailed bathymetry of the TA22 seamount showed left and right crest parts that have caldera features in each crest's central part. The magnetic anomaly distribution of the TA22 seamount regionally displayed high magnetic anomalies in the northern part and low magnetic anomalies in the southern part around the caldera features. The RTP magnetic anomaly distribution of the TA22 seamount presented commonly high magnetic anomalies in each caldera's central part. Also, it showed strong anomalies inside the calderas rather than on the outside flanks of the calderas. The magnetization distribution of the TA22 seamount showed a low magnetization zone in the center of each caldera and a high magnetization zone in the southern and northern east part. From the analysis of the seismic profile map, small mounds are inferred inside the central part of each caldera in the TA22 seamount area, and the possibility of sills formed by magma is assumed in the case of the right caldera. Taking into account all the results of this study (bathymetry, magnetic anomaly, RTP, magnetization, seismic profile) together with the rock samples collected at the left caldera area in the 2009 survey, we suppose the possibility of hydrothermal deposits at the mounds in each caldera's central part and on the outside flanks of the calderas representing the low magnetization zone. We expect to obtain better results by combined modeling of this study's data with other geological data (e.g., detailed gravity, 3D seismic, petrological study results, etc.).
Keywords: detailed bathymetry, magnetic anomaly, seamounts, seismic profile, SW Pacific
Procedia PDF Downloads 403
828 Implementing a Comprehensive Emergency Care and Life Support Course in a Low- and Middle-Income Country Setting: A Survey of Learners in India
Authors: Vijayabhaskar Reddy Kandula, Peter Provost Taillac, Balasubramanya M. A., Ram Krishnan Nair, Gokul Toshnival, Vibhu Dhawan, Vijaya Karanam, Buffy Cramer
Abstract:
Introduction: The lack of Emergency Care Services (ECS) is a cause of extensive and serious public health problems in low- and middle-income countries (LMICs). Many LMICs have ambulance services that allow timely transfer of ill patients, but due to poor care during the ‘Golden Hour’ many deaths occur which are otherwise preventable. Lack of adequate training, as evidenced by a study in India, is a major reason for poor care during the ‘Golden Hour’. Adapting developed-country models, which include staffing specialty-trained doctors in emergency care, is neither feasible nor guarantees cost-effective ECS. Methods: Based on our assessment and the needs felt by first-line doctors providing emergency care in 2014, Rajiv Gandhi Health Sciences University’s JeevaRaksha Trust, in partnership with the University of Utah, USA, designed, piloted and successfully implemented a 4-day Comprehensive Emergency Care and Life Support course (C-ECLS) for allopathic doctors. 1730 doctors completed the 4-day course between June 2014 and December 2020. Subsequently, we conducted a survey to investigate the utilization rates and usefulness of the training. 1662 were contacted, but only 309 completed the survey. The respondents had the following designations: senior faculty (33%), junior faculty (25%), resident (16%), private practitioner (8%), medical officer (16%) and not working (11%). 51% were generalists and the rest were specialists (>30 specialties). Results: 97% (271/280) felt they are better doctors because of C-ECLS. 79% (244/309) reported that the training helped to save a life - specialists more likely than generalists (91% vs. 68%, p<0.05). 64% agreed that they were confident of managing COVID-19 symptomatic patients better because of C-ECLS; 27% (77) were neutral and 9% (24) disagreed. 66% agreed that the training helps them to be confident in managing critically ill COVID-19 patients; 26% (72) were neutral and 8% (23) disagreed. Frequency of use of C-ECLS skills: hemorrhage control (70%), airway (67%), circulation skills (62%), safe transport and communication (60%), managing critically ill patients (58%), cardiac arrest (51%), trauma (49%), poisoning/animal bites/stings (44%), neonatal resuscitation (39%), breathing (36%), post-partum hemorrhage and eclampsia (35%). Among those who used the skills, the majority (ranging from 88% to 94%) reported that they were able to apply the skill more effectively because of C-ECLS training. Conclusion: JeevaRaksha’s C-ECLS is the world’s first comprehensive training of this kind. It improves the confidence of front-line doctors and enables them to provide quality care during the ‘Golden Hour’ of an emergency. It also prepares doctors to manage unknown emergencies (e.g., COVID-19). C-ECLS was piloted in Morocco and Uzbekistan and implemented countrywide in Bhutan. C-ECLS is relevant to most settings and offers a replicable model across LMICs.
Keywords: comprehensive emergency care and life support, training, capacity building, low- and middle-income countries, developing countries
Procedia PDF Downloads 68
827 Reconstruction of the Paleogeomorphological Map of the Nile River in Upper Egypt by Using Some Geomorphological and Geoarchaeological Indicators
Authors: Magdy Torab
Abstract:
Ancient Egyptians built their temples purposefully close to the River Nile in order to transport construction stones in river boats from far-away quarries to building sites. Most temples, therefore, have river harbors associated with their geometric designs. The paleo-river channel was remapped by using this idea, besides other geomorphological and geoarchaeological indicators/evidence located between the cities of Aswan and Luxor. In this sense, this paper defines the characteristics of this ancient course and its associated landforms using paleochannel morphology, paleomeandering, and ancient river dynamics during historic and prehistoric times. Both geomorphological and geoarchaeological approaches were used to reconstruct the paleomorphology of the river course. The ancient river morphology was investigated using the following techniques: comparison and interpretation of multi-date satellite images and historical maps between 1943 and 2004. The results are illustrated on maps using GIS (ArcGIS v.10 software) and field data collected from the western bank of the Nile River in the Luxor area and the Karnak, Edfu, Esna and Kom Ombo temples. Both current and paleogeomorphological maps were created based on the results of geoarchaeological surveying and soil analysis and dating, with surface and subsurface soil sampling by hand auger and laser diffraction analysis of 7 soil samples collected from some mounds and the Malkata channel on the western bank of the Nile River near Luxor. Paleo-current directions were determined using a standard Brunton compass as an indicator of the direction of flow of the Nile River during the deposition of some accumulated mounds on the western part of the floodplain near Luxor city. C-14 dating was used for two samples collected from these mounds, and geographical information system (GIS) techniques were used for mapping. The geomorphological and geoarchaeological evidence shows that the Nile River course in the Luxor area was around 4.5 km wide and contained many islands and sandbars separated inside the river channel, now appearing as scattered mounds inside the floodplain. The river course in Upper Egypt has migrated during historic times up to five kilometers to the east and has become far away from the ancient temples, quarries, and harbors. It has also become more meandering and narrower than before.
Keywords: Nile River, ancient harbours, Luxor, paleogeomorphology, geoarchaeology
Procedia PDF Downloads 153
826 Geographic Information System and Ecotourism Sites Identification of Jamui District, Bihar, India
Authors: Anshu Anshu
Abstract:
In the red corridor famed for Left Wing Extremism lies the small district of Jamui in Bihar, India. The district lies at 24º20´ N latitude and 86º13´ E longitude, covering an area of 3,122.8 km². The undulating topography, with widespread forests, provides a pristine environment for an invigorating experience for tourists. The natural landscape in the form of forests, wildlife and rivers, and the cultural landscape dotted with historical and religious places, are highly suitable for tourism. The study is primarily related to the identification of potential ecotourism sites using a Geographic Information System. Data preparation, analysis and finally the identification of ecotourism sites were carried out. The secondary data used are Survey of India topographical sheets at R.F. 1:50,000 covering the area of Jamui district and the District Census Handbook, Census of India, 2011; ERDAS Imagine and ArcView were used for digitization and the creation of DEMs (Digital Elevation Models) of the district, depicting the relief and topography, and to generate thematic maps. The thematic maps have been refined using geo-processing tools. The buffer technique has been used for the accessibility analysis. Finally, all the maps, including the buffer maps, were overlaid to find out the areas which have potential for the development of ecotourism sites in Jamui district. Spatial data - relief, slopes, settlements, transport network and forests of Jamui district - were marked and identified, followed by buffer analysis, which was used to find out the accessibility of features like roads and railway stations to the sites available for the development of ecotourism destinations. Buffer analysis was also carried out to get the spatial proximity of major river banks, lakes, and dam sites to be selected for promoting sustainable ecotourism. Overlay analysis was conducted using geo-processing tools. A Digital Terrain Model (DEM) was generated, and relevant themes like roads, forest areas and settlements were draped on the DEM to make an assessment of the topography and other land uses of the district and to delineate potential zones of ecotourism development. The development of ecotourism in Jamui faces several challenges. The district lies in the portion of Bihar that is part of the ‘red corridor’ of India. The hills and dense forests are prominent hideouts and training grounds for the extremists. It is well known that any kind of political instability, war or act of violence directly influences travel propensity and hinders all kinds of non-essential travel to these areas. The development of ecotourism in the district can bring change and overall growth to this area, with communities getting more involved in economically sustainable activities. It is a known fact that poverty and social exclusion are the main forces that push people towards violence. All over the world, tourism has been used as a tool to eradicate poverty and generate goodwill among people. Tourism, in a sustainable form, should be promoted in the district to integrate local communities into the development process and to distribute the fruits of development with equity.
Keywords: buffer analysis, digital elevation model, ecotourism, red corridor
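The buffer and overlay steps described above can be sketched with open-source tools as follows; the layer file names, buffer distances, projection, and selection rule are hypothetical stand-ins for the ERDAS Imagine/ArcView workflow, not the study's actual data.

```python
# Illustrative sketch (not the study's actual workflow): buffer and overlay
# analysis with GeoPandas. File names, distances, and the scoring rule are
# hypothetical placeholders.
import geopandas as gpd

roads = gpd.read_file("jamui_roads.shp").to_crs(epsg=32645)      # UTM zone 45N
rivers = gpd.read_file("jamui_rivers.shp").to_crs(epsg=32645)
forests = gpd.read_file("jamui_forests.shp").to_crs(epsg=32645)

# Accessibility buffers (distances in metres)
road_buf = gpd.GeoDataFrame(geometry=roads.buffer(2000))
river_buf = gpd.GeoDataFrame(geometry=rivers.buffer(1000))

# Overlay: forest patches that are near both roads and rivers
near_roads = gpd.overlay(forests, road_buf, how="intersection")
candidates = gpd.overlay(near_roads, river_buf, how="intersection")

candidates["area_km2"] = candidates.geometry.area / 1e6
candidates.to_file("potential_ecotourism_sites.shp")
print(candidates[["area_km2"]].describe())
```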
Procedia PDF Downloads 259
825 Land Degradation Vulnerability Modeling: A Study on Selected Micro Watersheds of West Khasi Hills Meghalaya, India
Authors: Amritee Bora, B. S. Mipun
Abstract:
Land degradation is a term often used to describe the environmental phenomena that reduce land’s original productivity both qualitatively and quantitatively. The study of land degradation vulnerability primarily deals with “Environmentally Sensitive Areas” (ESA) and the amount of topsoil loss due to erosion. In many studies, it is observed that an assessment of the existing status of land degradation is used to represent the vulnerability. Moreover, it is also noticed that in most studies the primary emphasis of land degradation vulnerability is to assess its sensitivity to soil erosion only. However, the concept of land degradation vulnerability can have different objectives depending upon the perspective of the study. It shows the extent to which changes in land use and land cover can imprint their effect on the land. In other words, it represents the susceptibility of a piece of land to degrade in its productive quality permanently or in the long run. It is also important to mention that land degradation vulnerability is not a single-factor outcome. It is a probability assessment to evaluate the status of land degradation and needs to consider both biophysical and human-induced parameters. To avoid the complexity of previous models in this regard, the present study emphasizes generating a simplified model to assess land degradation vulnerability in terms of current human population pressure, land use practices, and existing biophysical conditions. It is a “mixed method” termed the land degradation vulnerability index (LDVi). It was originally inspired by the MEDALUS model (Mediterranean Desertification and Land Use), 1999, and Farazadeh’s 2007 revised version of it. It follows the guidelines of the Space Application Center, Ahmedabad / Indian Space Research Organization for land degradation vulnerability. The model integrates the climatic index (Ci), vegetation index (Vi), erosion index (Ei), land utilization index (Li), population pressure index (Pi), and cover management index (CMi), giving equal weightage to each parameter. The final result shows that the very high vulnerability zone primarily indicates three (3) prominent circumstances: land under continuous population pressure, a high concentration of human settlement, and a high amount of topsoil loss due to surface runoff within the study sites. As all the parameters of the model are amalgamated with equal weightage and further examined with the help of regression analysis, the LDVi model also provides a strong grasp of each parameter and of how far each is competent to trigger the land degradation process.
Keywords: population pressure, land utilization, soil erosion, land degradation vulnerability
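A minimal sketch of the equal-weightage combination of the six indices is given below, assuming each index raster has already been normalized to a common scale; the arithmetic mean is an assumption on our part, and a MEDALUS-style geometric mean is shown alongside it for comparison. The arrays and class thresholds are placeholders, not the study's data.

```python
# Illustrative sketch (an assumption, not the authors' exact formulation):
# equal-weightage combination of six normalized index rasters into LDVi.
import numpy as np

# Each index assumed already normalized to a common scale (e.g. 1 = best, 2 = worst)
Ci, Vi, Ei, Li, Pi, CMi = (np.random.uniform(1.0, 2.0, size=(100, 100)) for _ in range(6))

indices = np.stack([Ci, Vi, Ei, Li, Pi, CMi])

ldvi_mean = indices.mean(axis=0)                 # equal-weight arithmetic mean
ldvi_geom = np.prod(indices, axis=0) ** (1 / 6)  # MEDALUS-style geometric mean

# Classify into vulnerability zones with hypothetical thresholds
zones = np.digitize(ldvi_mean, bins=[1.2, 1.4, 1.6, 1.8])
print("zone counts (low to very high):", np.bincount(zones.ravel()))
```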
Procedia PDF Downloads 167
824 Adaptive Power Control of the City Bus Integrated Photovoltaic System
Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker
Abstract:
This paper presents an adaptive controller to track the maximum power point of photovoltaic (PV) modules under fast irradiation changes on the city-bus roof. Photovoltaic systems have been a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on its buses' roofs. The solar panels turn solar energy into electric energy and are used to supply the buses' electric equipment. This decreases the load on the buses' alternators, leading to lower fuel consumption and bringing both economic and ecological profits. A DC-DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. In addition to the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a main role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must keep tracking accuracy high in order to compensate for rapid irradiation changes due to the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. There are several different control algorithms in the literature developed for maximum power point tracking. However, the energy performance of MPPT algorithms has not been clarified for vehicle applications, which cause rapid changes in environmental factors. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle. Some problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel accordingly. The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the object of control and its outer environment. The presented algorithm was capable of reaching the aim of control. The structure of the adaptive controller was simplified on purpose; since such a simple controller, armed only with an ability to learn, reaches the aim, a more complex algorithm structure can only improve the result. The presented adaptive control system of the PV system is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of the algorithms in a motion loop are presented and discussed. Experimental results are presented for fast changes in irradiation and partial shading conditions. The results obtained clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency, proving the superiority of the proposed method. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013.
Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter
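A minimal sketch of a scan-then-track duty-cycle MPPT loop of the kind described above is given below; the toy PV/converter model, constants, and re-scan trigger are assumptions, not the authors' controller or its learning mechanism.

```python
# Sketch of a scan-then-track duty-cycle MPPT loop. The PV/converter model,
# constants, and re-scan trigger are hypothetical placeholders.
def pv_power(duty, irradiance):
    """Toy PV + boost-converter model: output power as a function of duty cycle."""
    mpp_duty = 0.35 + 0.1 * (1.0 - irradiance)       # MPP shifts with irradiance
    return max(0.0, irradiance * 300.0 * (1.0 - 8.0 * (duty - mpp_duty) ** 2))

def scan_for_mpp(irradiance, steps=20):
    """Coarse scan of the duty range to locate the maximum-power region."""
    return max((i / steps for i in range(1, steps)),
               key=lambda d: pv_power(d, irradiance))

def perturb_and_observe(duty, direction, last_power, irradiance, step=0.01):
    """Keep perturbing in the same direction while power rises, else reverse."""
    power = pv_power(duty, irradiance)
    if power < last_power:
        direction = -direction
    duty = min(0.95, max(0.05, duty + direction * step))
    return duty, direction, power

irradiance, direction, last_power = 1.0, 1.0, 0.0
duty = scan_for_mpp(irradiance)
for t in range(60):
    if t == 30:                            # sudden shading event on the moving bus
        irradiance = 0.6
        duty = scan_for_mpp(irradiance)    # re-scan after a large change
    duty, direction, last_power = perturb_and_observe(duty, direction,
                                                      last_power, irradiance)
print(f"final duty = {duty:.2f}, power = {last_power:.1f} W")
```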
Procedia PDF Downloads 211
823 Entrepreneurship Development for Socio-Economic Prosperity of Pineapple Growers in Nagaland
Authors: Kaushal Jha
Abstract:
India is one of the major producers of pineapple, contributing a significant part of total world pineapple production. Pineapple has spread throughout tropical and subtropical regions as a commercial fruit crop. In India, its cultivation is confined to the high-rainfall, humid coastal regions of peninsular India and the hilly areas of the North-Eastern region of India. Nagaland, one of the potential states of North-East India, is basically an agrarian state endowed with favourable agro-climatic conditions and a rich biodiversity of flora and fauna. Agriculture contributes significantly to the state’s economy. Pineapple is an important fruit crop grown in Nagaland and has very high potential for doubling the income of farmers in comparison to the traditional practice of rice cultivation. This requires improved farm management practices as well as entrepreneurial intentions and capabilities. The present study aimed at analysing the dimensions of entrepreneurial skill development among the pineapple growers of Nagaland. Medziphema block under Dimapur district is considered the pineapple valley of Nagaland. Pineapple grown in this area is considered one of the best in Nagaland in terms of its sweetness as well as quality. Multistage sampling was undertaken for the present study. Medziphema rural development block was selected purposively for this purpose. The sample was drawn from three leading pineapple-producing villages under Medziphema block. The respondents were selected based on a random sampling procedure. Data were collected from the respondents using a pre-tested structured schedule. Major findings revealed that entrepreneurial skill development was one of the important factors for augmenting a sustained flow of income among the target farmers. Development of farm leadership, improving self-esteem, innovativeness, economic motivation, orientation towards the management of farm resources and value addition were identified as important dimensions for promoting entrepreneurial skill development and bringing prosperity to the farmers.
Keywords: skill development, entrepreneurial attributes, pineapple growers, Nagaland
Procedia PDF Downloads 161
822 Development of Internet of Things (IoT) with Mobile Voice Picking and Cargo Tracing Systems in Warehouse Operations of Third-Party Logistics
Authors: Eugene Y. C. Wong
Abstract:
Increased market competition, customer expectations, and warehouse operating costs in third-party logistics have motivated the continuous exploration of ways to improve operational efficiency in warehouse logistics. Cargo tracing in the order picking process consumes excessive time for warehouse operators when handling the enormous quantities of goods flowing through the warehouse each day. Internet of Things (IoT) with mobile cargo tracing apps and database management systems are developed in this research to facilitate and reduce the cargo tracing time in the order picking process of a third-party logistics firm. An operation review was carried out in the firm, with opportunities for improvement being identified, including inaccurate inventory records in the warehouse management system, excessive tracing time for stored products, and product misdelivery. The facility layout has been improved by modifying the designated locations of various types of products. The relationships among the pick and pack processing time, cargo tracing time, delivery accuracy, inventory turnover, and inventory count operation time in the warehouse are evaluated. The correlation of the factors affecting the overall cycle time is analysed. A mobile app is developed with the use of MIT App Inventor and the Access management database to facilitate cargo tracking anytime, anywhere. The information flow framework from the warehouse database system to cloud-computing document sharing, and further to the mobile app device, is developed. The improved performance of cargo tracing in the order processing cycle time of warehouse operators has been collected and evaluated. The developed mobile voice picking and tracking systems bring significant benefits to the third-party logistics firm, including eliminating unnecessary cargo tracing time in the order picking process and reducing warehouse operators' overtime cost. As future development, the mobile tracking device is further planned to enhance the picking time and cycle count of warehouse operators with a voice picking system in the developed mobile apps.
Keywords: warehouse, order picking process, cargo tracing, mobile app, third-party logistics
Procedia PDF Downloads 374
821 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method
Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota
Abstract:
The Schlieren method, which has been conventionally used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of refraction of light. The Background Oriented Schlieren (BOS) method proposed by Meier is one of the measurement methods that solve the problems mentioned above. The refraction of light is used in the BOS method in the same way as in the Schlieren method. The BOS method is characterized by using a digital camera to capture images of the background behind the observation area. The images are later analyzed by a computer to quantitatively detect the amount of shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters which are necessary in the conventional Schlieren method, thus simplifying the experimental setup. However, defocusing of the observation results occurs when using the BOS method, since focusing the camera on the background image leads to defocusing of the observed object. The defocusing of the object becomes greater as the distance between the background and the object increases; on the other hand, higher sensitivity can be obtained. Therefore, it is necessary to adjust the distance between the background and the object to be appropriate for the experiment, considering the relation between the defocus and the sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, a visualization experiment of an underexpanded jet was performed using the BOS measurement system that we constructed, with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the axis of the jet were fixed, and the distance between the background and the axis of the jet was changed as the parameter. The images were later analyzed using a personal computer to quantitatively detect the amount of shift of the background image from the comparison between the background pattern and the captured image of the underexpanded jet. The quantitatively measured amount of shift was then reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred, and noise decreases, as the distance between the background and the axis of the underexpanded jet increases. Consequently, it is clarified that the sensitivity constant should be greater than 20, and the circle of confusion diameter should be less than 2.7 mm, at least in this experimental setup.
Keywords: BOS method, underexpanded jet, Abel transformation, density field visualization
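The last reconstruction step (projected refractive-index data to radial density profile) can be sketched for an axisymmetric jet as below; the synthetic test profile, discretization, and Gladstone-Dale constant for air are assumptions, and the simple trapezoidal inverse Abel transform stands in for whatever inversion scheme the authors actually used.

```python
# Illustrative sketch (not the authors' code): axisymmetric density profile from
# a projected refractive-index field via an inverse Abel transform and the
# Gladstone-Dale relation n - 1 = K * rho. Test profile and constants are assumed.
import numpy as np

N, R = 200, 1.0
r = np.linspace(0, R, N)
dr = r[1] - r[0]

K_gd = 2.26e-4                                   # Gladstone-Dale constant for air, m^3/kg (approx.)
rho_true = 1.2 * np.exp(-(r / 0.3) ** 2)         # toy axisymmetric density, kg/m^3
n_minus_1 = K_gd * rho_true

# Forward Abel projection: F(y) = 2 * integral_y^R f(r) r / sqrt(r^2 - y^2) dr
F = np.zeros(N)
for i, y in enumerate(r[:-1]):
    rr = r[i + 1:]
    F[i] = 2.0 * np.trapz(n_minus_1[i + 1:] * rr / np.sqrt(rr**2 - y**2), rr)

# Inverse Abel transform: f(r) = -(1/pi) * integral_r^R (dF/dy) / sqrt(y^2 - r^2) dy
dFdy = np.gradient(F, dr)
f_rec = np.zeros(N)
for i, ri in enumerate(r[:-1]):
    yy = r[i + 1:]
    f_rec[i] = -np.trapz(dFdy[i + 1:] / np.sqrt(yy**2 - ri**2), yy) / np.pi

rho_rec = f_rec / K_gd                            # Gladstone-Dale inversion
print("max density error (kg/m^3):", np.abs(rho_rec - rho_true)[:N - 10].max())
```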
Procedia PDF Downloads 78
820 Cancer Burden and Policy Needs in the Democratic Republic of the Congo: A Descriptive Study
Authors: Jean Paul Muambangu Milambo, Peter Nyasulu, John Akudugu, Leonidas Ndayisaba, Joyce Tsoka-Gwegweni, Lebwaze Massamba Bienvenu, Mitshindo Mwambangu Chiro
Abstract:
In 2018, non-communicable diseases (NCDs) were responsible for 48% of deaths in the Democratic Republic of Congo (DRC), with cancer contributing to 5% of these deaths. There is a notable absence of cancer registries, capacity-building activities, budgets, and treatment roadmaps in the DRC. Current cancer estimates are primarily based on mathematical modeling with limited data from neighboring countries. This study aimed to assess cancer subtype prevalence in Kinshasa hospitals and compare these findings with WHO model estimates. Methods: A retrospective observational study was conducted from 2018 to 2020 at HJ Hospitals in Kinshasa. Data were collected using American Cancer Society (ACS) questionnaires and physician logs. Descriptive analysis was performed using STATA version 16 to estimate the cancer burden and provide evidence-based recommendations. Results: The results from the chart review at HJ Hospitals in Kinshasa (2018-2020) indicate that out of 6,852 samples, approximately 11.16% were diagnosed with cancer. The distribution of cancer subtypes in this cohort was as follows: breast cancer (33.6%), prostate cancer (21.8%), colorectal cancer (9.6%), lymphoma (4.6%), and cervical cancer (4.4%). These figures are based on histopathological confirmation at the facility and may not fully represent the broader population due to potential selection biases related to geographic and financial accessibility to the hospital. In contrast, the World Health Organization (WHO) model estimates for cancer prevalence in the DRC show different proportions. According to WHO data, the distribution of cancer types is as follows: cervical cancer (15.9%), prostate cancer (15.3%), breast cancer (14.9%), liver cancer (6.8%), colorectal cancer (5.9%), and other cancers (41.2%) (WHO, 2020). Conclusion: The data indicate a rising cancer prevalence in the DRC but highlight significant gaps in clinical, biomedical, and genetic cancer data. The establishment of a population-based cancer registry (PBCR) and a defined cancer management pathway is crucial. The current estimates are limited due to data scarcity and inconsistencies in clinical practices. There is an urgent need for multidisciplinary cancer management, integration of palliative care, and improvement in care quality based on evidence-based measures.
Keywords: cancer, risk factors, DRC, gene-environment interactions, survivors
Procedia PDF Downloads 21
819 Application of Geotube® Method for Sludge Handling in Adaro Coal Mine
Authors: Ezman Fitriansyah, Lestari Diah Restu, Wawan
Abstract:
The Adaro coal mine in South Kalimantan, Indonesia, maintains a catchment area of approximately 15,000 ha for its mine operation. As an open-pit surface coal mine with a high erosion rate, the mine water in Adaro contains high TSS that needs to be treated before being released to rivers. For the treatment process, Adaro operates 21 settling ponds equipped with a combination of physical and chemical systems to separate solids and water, ensuring the discharged water complies with regional environmental quality standards. However, the sludge created by the sedimentation process gradually reduces the settling ponds' capacity. Therefore, regular maintenance activities are required to recover and maintain the ponds' capacity. The trucking system and direct dredging had been the most common methods for handling sludge in Adaro, but the main problem in applying these two methods is the excessive area required for drying pond construction. To solve this problem, Adaro implements an alternative method called Geotube®. The principle of the Geotube® method is that the sludge contained in the settling ponds is pumped into Geotube® containers, which have been designed to release water and retain mud flocs. During the pumping process, flocculant chemicals are injected into the sludge to form bigger mud flocs. Due to the difference in particle size, the mud flocs settle in the container whilst the water continues to flow out through the container’s pores. Compared to the trucking system and direct dredging method, this method provides three advantages: reduced space required to operate, increased overburden waste dump volume, and increased speed and quality of the water treatment process. Based on the evaluation results, the Geotube® method needs only 1:8 of the space required by the other methods. From the geotechnical assessment conducted by Adaro, the potential loss of waste dump volume capacity prior to the implementation of the Geotube® method was 26.7%. The TSS treatment process in well-maintained ponds is 16% more efficient.
Keywords: geotube, mine water, settling pond, sludge handling, wastewater treatment
Procedia PDF Downloads 200
818 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers
Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta
Abstract:
The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS) where satellite observations intersect with water. The ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins where there are many tributaries and meanders, often adjacent to each other. One challenge is to split the photon segments along a beam so as to accurately partition them and extract only the true representative water height for individual elements. As far as we can establish, there is no automated procedure to make this distinction. Earlier studies have relied on human intervention or river masks. Both approaches are unsatisfactory solutions where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called “auto-segmentation”. The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R² value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESAT-2 ATL13 altimetry to rivers compared to previously reported studies. Therefore, the findings of our study will make a significant contribution towards the retrieval of hydraulic parameters, such as water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation
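A minimal sketch of the segment-splitting and outlier-filtered mean WSE computation described above is given below; the gap threshold, the sigma filter, and the toy data are assumptions, not the authors' auto-segmentation algorithm.

```python
# Illustrative sketch (not the authors' code): split photon-segment heights into
# virtual-station clusters by along-track gaps, then compute an outlier-filtered
# mean WSE per cluster. Threshold and filter settings are hypothetical.
import numpy as np

def virtual_station_wse(along_track_m, height_m, gap_threshold_m=500.0, z_cut=3.0):
    """Split segments wherever the along-track gap exceeds the threshold,
    then return a robust mean WSE for each resulting virtual station."""
    order = np.argsort(along_track_m)
    x, h = np.asarray(along_track_m)[order], np.asarray(height_m)[order]
    breaks = np.where(np.diff(x) > gap_threshold_m)[0] + 1
    results = []
    for cluster in np.split(np.arange(len(x)), breaks):
        hc = h[cluster]
        keep = np.abs(hc - np.median(hc)) < z_cut * (np.std(hc) + 1e-9)  # sigma filter
        results.append((x[cluster].mean(), hc[keep].mean()))
    return results

# Toy example: two water bodies separated by a 2 km dry gap
x = np.concatenate([np.linspace(0, 800, 40), np.linspace(3000, 3600, 30)])
h = np.concatenate([np.full(40, 12.1), np.full(30, 9.8)]) + np.random.normal(0, 0.05, 70)
h[5] += 3.0                                      # an outlier from a vegetated bank
for centre, wse in virtual_station_wse(x, h):
    print(f"VS at {centre:7.1f} m: mean WSE = {wse:.2f} m")
```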
Procedia PDF Downloads 62
817 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study
Authors: Mohamed H. Khalil
Abstract:
Water is one of the most crucial elements that influence human lives and development. Notably, over the last few years, GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced ‘GIS-Web Based System’. This system is efficiently designed to tangibly assist in and optimize the integration of data between the departments of the Call Center, Operation and Maintenance, and the Laboratory. The core of this system is a unified ‘Data Model’ for all the spatial and tabular data of the corresponding departments. The system is professionally built to provide advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing capabilities, enhanced data retrieval, integrated workflow, different access levels, and correlative information record/track. Notably, this cost-effective system contributes significantly not only to the completeness of the base map (93%) and the water network (87%) in a highly detailed GIS format and to the enhancement of customer service performance, but also to reducing the operating costs of day-to-day operations (~5-10%). In addition, the proposed system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and Laboratory), which allows better understanding/analysis of complex situations. Furthermore, this system reflected tangibly on: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.), (ii) improved effectiveness of the different water departments, (iii) efficient, deep, advanced analysis, (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annually), (v) tangible planning synthesizing spatial and tabular data; and finally, (vi) a scalable decision support system. It is worth highlighting that the proposed future plan (second phase) of this system encompasses scalability that will extend to include integration with the departments of Billing and SCADA. This scalability will comprise advanced functionalities in association with the existing ones to allow further sustainable contributions.
Keywords: GIS Web-Based, base-map, water network, decision support system
Procedia PDF Downloads 96
816 The Characteristics of the Operating Parameters of the Vertical Axis Wind Turbine for the Selected Wind Speed
Authors: Zdzislaw Kaminski, Zbigniew Czyz
Abstract:
The paper discusses the results of research into a wind turbine with a vertical axis of rotation, which was performed with the open-return wind tunnel, Gunt HM 170, at the laboratory of the Department of Thermodynamics, Fluid Mechanics and Propulsion Aviation Systems of Lublin University of Technology. Wind tunnel experiments are a necessary step in constructing any new type of wind turbine, to validate design assumptions and numerical results. This research focused on a rotor with blades capable of modifying their working surfaces, i.e., the surfaces absorbing wind kinetic energy. The operation of this rotor is based on adjusting the angular aperture α of the top and bottom parts of the blades mounted on an axis. If this angle α increases, the working surface which absorbs wind kinetic energy also increases. The study was performed on scaled and geometrically similar models, with the criteria of similarity relevant for this type of research preserved. The rotors with varied angular apertures of their blades were printed for the research with a powder 3D printer, ZPrinter® 450. This paper presents the research results for the selected flow speed of 6.5 m/s for three angular apertures of the rotor blades, i.e., 30°, 60° and 90°, at varied rotational speeds. The test stand enables the turbine rotor to be braked to achieve the required speed, and the airflow speed and torque to be recorded. Accordingly, the torque and power as a function of airflow were plotted. The rotor with its adjustable blades enables turbine power to be adjusted within a wide range of wind speeds. A variable angular aperture of the blade working surfaces α in a wind turbine enables us to control the speed of the turbine and consequently its output power. Reducing the angular aperture of the working surfaces results in reduced speed and, if a special current generator is applied, reduced electrical output power too. The speed, adjusted by changing angle α, enables the maximum load acting on the rotor blades to be controlled. The solution under study is a kind of safeguard against damage to the turbine due to possibly high wind speeds.
Keywords: drive torque, renewable energy, power, wind turbine, wind tunnel
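Turning the recorded torque and rotational speed into mechanical power (and a power coefficient at the 6.5 m/s flow speed) follows directly from P = T·ω; the sketch below uses hypothetical rotor dimensions and sample readings, not the measured data.

```python
# Illustrative sketch (assumed rotor dimensions and sample readings, not the
# measured data): mechanical power and power coefficient from torque and rpm.
import math

RHO = 1.225          # air density, kg/m^3
V = 6.5              # wind-tunnel flow speed, m/s
SWEPT_AREA = 0.02    # assumed rotor swept area (height x diameter), m^2

# (rotational speed in rpm, torque in N*m) -- placeholder measurements
measurements = [(150, 0.010), (300, 0.012), (450, 0.009)]

wind_power = 0.5 * RHO * SWEPT_AREA * V**3       # kinetic power in the stream, W
for rpm, torque in measurements:
    omega = rpm * 2 * math.pi / 60               # angular speed, rad/s
    p_mech = torque * omega                      # mechanical power, W
    cp = p_mech / wind_power                     # power coefficient
    print(f"{rpm:4d} rpm: P = {p_mech:5.2f} W, Cp = {cp:.3f}")
```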
Procedia PDF Downloads 258
815 Virtual Metrology for Copper Clad Laminate Manufacturing
Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho
Abstract:
In semiconductor manufacturing, virtual metrology (VM) refers to methods for predicting the properties of a wafer from machine parameters and sensor data of the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting process quality, which allows the process to be improved in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of printed circuit boards (PCBs), which are used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, applies resin to glass cloth, heats it in a drying oven, and produces prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy costs in time and money, which makes the process a good candidate for VM. We developed prediction models for the three quality factors T/W, M/V, and G/T, respectively, using process variables, raw-material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with sufficiently high accuracy. They also provided information on "important" predictor variables, some of which the process engineers were already aware of and the rest of which they were not. The engineers were keen to investigate the new insights the models revealed in order to derive process-control implications. T/W, in contrast, could not be predicted with reasonable accuracy from the given factors. This very fact indicates that the factors currently monitored may not affect T/W, so an effort has to be made to find other, currently unmonitored factors in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction models reduce the cost associated with actual metrology and reveal insights into the factors affecting the important quality characteristics, as well as into the limits of the current understanding of the treating process.Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology
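The abstract does not name the variable selection methods or learning algorithms that were compared. Purely as an illustration of the general workflow, the following Python sketch builds one candidate pipeline (Lasso-based variable selection followed by a random forest regressor) on synthetic stand-in data and reports cross-validated accuracy and the selected predictors; none of these choices or values come from the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for treating-process data: rows = batches,
# columns = process / raw-material / environment variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y_gel_time = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=200)  # e.g. G/T

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.05)),                        # variable selection step
    RandomForestRegressor(n_estimators=200, random_state=0),   # learning algorithm
)
scores = cross_val_score(model, X, y_gel_time, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())

# Feature importances of the fitted selector indicate the "important" predictors
model.fit(X, y_gel_time)
selected = model.named_steps["selectfrommodel"].get_support(indices=True)
print("selected variable indices:", selected)
```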
Procedia PDF Downloads 350
814 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With this increased availability, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC to evaluate or solve limited but meaningful problem instances. The article also discusses solutions to optimization problems and the benefits Big Data brings to computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data.Keywords: high performance, big data, parallel computation, molecular data, computational biology
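As a toy illustration of why all-to-all comparisons become computationally demanding and how naturally they parallelize, the following Python sketch distributes O(n²) pairwise sequence comparisons across CPU cores; the identity score is a stand-in for a real alignment algorithm, and the sequences are made up.

```python
from itertools import combinations
from multiprocessing import Pool

def pairwise_identity(pair):
    """Toy similarity score standing in for a real alignment (e.g. Smith-Waterman)."""
    (i, a), (j, b) = pair
    matches = sum(x == y for x, y in zip(a, b))
    return i, j, matches / max(len(a), len(b))

if __name__ == "__main__":
    sequences = ["ACGTACGT", "ACGTTCGT", "TTGTACGA", "ACGAACGT"]
    indexed = list(enumerate(sequences))
    pairs = list(combinations(indexed, 2))      # O(n^2) all-to-all comparisons
    with Pool() as pool:                        # distribute pairs across cores
        for i, j, score in pool.map(pairwise_identity, pairs):
            print(f"seq{i} vs seq{j}: identity = {score:.2f}")
```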
Procedia PDF Downloads 363
813 CFD Simulation for Urban Environment for Evaluation of a Wind Energy Potential of a Building or a New Urban Planning
Authors: David Serero, Loic Couton, Jean-Denis Parisse, Robert Leroy
Abstract:
This paper presents a method for analyzing airflow at the periphery of several typologies of architectural volumes. To understand the influence of the complex urban environment on airflows in the city, we compared three sites at different architectural scales. The research establishes a method to identify the optimal locations for installing wind turbines on the edges of a building and to improve the energy extracted through the precise placement of an accelerating wing called an "aerofoil". The objective is to define principles for the installation of wind turbines and for the natural ventilation design of buildings. Instead of a theoretical wind analysis, we combined numerical aeraulic simulations using STAR-CCM+ software with wind data collected over long periods of time (greater than one year). While computational fluid dynamics (CFD) simulation of airflow around buildings is common practice, we calibrated a virtual wind tunnel against in-situ anemometer data to establish localized cartography of urban winds. We can then develop a complete volumetric model of the behavior of the wind over a roof area or an entire urban block. With this method, we can characterize: the different types of wind in urban areas and the minimum and maximum wind spectrum; the type of harvesting devices to select; their fixing to the roof of a building; the altimetry of the device in relation to the roof levels; and the potential nuisances around it. This study is carried out by retrieving a geolocated data flow and connecting this information with the technical specifications of wind turbines, their energy performance, and their engagement (cut-in) speed. Thanks to this method, we can define the characteristics of wind turbines that maximize their performance on urban sites and in turbulent airflow regimes. We also study the installation of a wind accelerator associated with buildings. The integrated aerofoils improve control of the air speed, orient the flow onto the wind turbine, accelerate it, and, thanks to their profile, conceal the device on the roof of the building.Keywords: wind energy harvesting, wind turbine selection, urban wind potential analysis, CFD simulation for architectural design
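A minimal sketch of the yield-estimation step is given below, assuming a CFD-derived local speed-up factor at a candidate rooftop location, a long-term wind speed record, and a generic turbine power curve; every numerical value is illustrative and none is taken from the study.

```python
import numpy as np

def turbine_power(v, cut_in=2.5, rated_v=11.0, rated_p=1500.0, cut_out=25.0):
    """Simplified power curve (W): cubic ramp between cut-in and rated speed."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_v:
        return rated_p
    return rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3

# Hourly wind speeds at the reference mast over one year (synthetic stand-in
# for an in-situ anemometer record), roughly Weibull-distributed.
rng = np.random.default_rng(1)
v_ref = rng.weibull(2.0, size=8760) * 5.0

# CFD-derived speed-up factor at the candidate rooftop/aerofoil location
# relative to the reference mast (illustrative value).
speed_up = 1.25
v_site = v_ref * speed_up

annual_energy_kwh = sum(turbine_power(v) for v in v_site) / 1000.0
print(f"Estimated annual yield: {annual_energy_kwh:.0f} kWh")
```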
Procedia PDF Downloads 150
812 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper develops a data-driven model to address the causal relationship between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution, and factory emissions. The observation data include wind speed, wind direction, relative humidity, temperature, and others. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS-equipped industrial factories. This study completed a causal inference engine that provides an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures used to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (total suspension particulate, TSP) sensors and the forecasted air pollution map, this study also discloses the conversion relationship between TSP and PM2.5/PM10 for different regions and industrial characteristics, based on long-term data observation and calibration. These different time-series qualitative and quantitative data sources were successfully combined into a cloud-based causal inference engine that is practicable for factory management control. Once the forecasted air quality for a region is flagged as harmful, the correlated factories are notified and asked to curtail their operation and reduce emissions in advance.Keywords: continuous emission monitoring system, total suspension particulates, causal inference, air pollution forecast, IoT
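The particle tracking and random walk step can be sketched as follows: particles released at a factory stack are advected by the mean wind and dispersed by a Gaussian random walk, then binned onto the 1 km x 1 km forecast grid. The wind, diffusivity, and release values in this Python sketch are placeholders, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform wind field (illustrative): 3 m/s eastward, 1 m/s northward
u, v = 3.0, 1.0                 # m/s
K = 50.0                        # eddy diffusivity, m^2/s (assumption)
dt, n_steps = 60.0, 120         # 60 s steps, 2 hours total
n_particles = 20000

# All particles released at a factory stack located at the origin
x = np.zeros(n_particles)
y = np.zeros(n_particles)

for _ in range(n_steps):
    # advection by the mean wind + random walk representing turbulent diffusion
    x += u * dt + rng.normal(scale=np.sqrt(2 * K * dt), size=n_particles)
    y += v * dt + rng.normal(scale=np.sqrt(2 * K * dt), size=n_particles)

# Bin particle positions onto a 1 km x 1 km grid as a relative concentration map
grid, xedges, yedges = np.histogram2d(x, y, bins=40, range=[[-5e3, 35e3], [-5e3, 35e3]])
print("peak cell count:", grid.max(), "at grid resolution 1 km x 1 km")
```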
Procedia PDF Downloads 87
811 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures
Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.
Abstract:
Previously, the phrase "hydrogen safety" was used mostly in the context of NPP safety. Due to the rise of interest in "green" and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, industrial hydrogen production is planned to be performed by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden combustible gas outburst into open space with subsequent ignition is less dangerous by itself than ignition of the combustible mixture in the presence of numerous pipelines, reactor vessels, and fitting frames. Even ignition of 2100 cubic meters of hydrogen-air mixture in open space gives velocities and pressures much lower than those of the Chapman-Jouguet condition, not exceeding 80 m/s and 6 kPa, respectively. However, blockage of the space, significant changes of channel diameter along the flame propagation path, and the presence of gas suspension lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only be conventional and qualitative. Conducting experimental deflagration and detonation investigations for each specific industrial facility project, in order to determine safe placement of infrastructure units, does not seem feasible due to the high cost and hazard, while conducting numerical experiments is significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry is promising. The basis for this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows it to be used in Eulerian coordinates. The current work contains the results of this development. In addition, a comparison of numerical simulation results with experimental series on flame propagation in shock tubes with orifice plates is presented.Keywords: CFD, reacting flow, DDT, gas explosion
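To put the quoted open-space figures in perspective, the following back-of-the-envelope comparison relates them to nominal Chapman-Jouguet parameters for a stoichiometric hydrogen-air mixture; the CJ values are approximate literature figures used only for illustration and are not taken from the abstract.

```python
# Reported open-space deflagration values from the study
flame_speed = 80.0       # m/s
overpressure = 6.0e3     # Pa

# Nominal Chapman-Jouguet detonation parameters for a stoichiometric
# hydrogen-air mixture (approximate literature values, assumption)
cj_velocity = 1.97e3     # m/s
cj_overpressure = 1.5e6  # Pa (~15 atm above ambient)

print(f"flame speed is {flame_speed / cj_velocity:.1%} of the CJ velocity")
print(f"overpressure is {overpressure / cj_overpressure:.2%} of the CJ overpressure")
```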
Procedia PDF Downloads 90
810 Effect of Ramp Rate on the Preparation of Activated Carbon from Saudi Date Tree Fronds (Agro Waste) by Physical Activation Method
Authors: Muhammad Shoaib, Hassan M Al-Swaidan
Abstract:
Saudi Arabia is the major date producer in the world. In order to maximize production, date trees must be pruned annually. A large amount of this agricultural waste material (palm tree fronds) is therefore available in Saudi Arabia and is considered an ideal precursor for the production of activated carbon (AC). A single-step procedure for the preparation of microporous activated carbon from Saudi date tree fronds using a mixture of gases (N2 and CO2) was carried out at a carbonization/activation temperature of 850°C and at ramp rates of 10, 20, and 30 degrees per minute. An Alloy 330 horizontal reactor was used as the tube furnace. The flow rates of nitrogen and carbon dioxide were kept at 150 ml/min and 50 ml/min, respectively, during the preparation. Characterization results reveal that the BET surface area, pore volume, and average pore diameter of the resulting activated carbon generally decrease with increasing ramp rate. The activated carbon prepared at a ramp rate of 10 degrees/minute attains the largest surface area and offers the greatest potential for producing activated carbon of high adsorption capacity from agricultural wastes such as date fronds. The BET surface areas of the activated carbons prepared at ramp rates of 10, 20, and 30 degrees/minute after 30 minutes of activation are 1094, 1020, and 515 m2/g, respectively. Scanning electron microscopy (SEM) of the surface morphology and FTIR analysis of the functional groups were carried out and verified the same trend. Moreover, on increasing the ramp rate from 10 to 20 degrees/min, the yield remains the same, i.e., 18%, whereas at a ramp rate of 30 degrees/min the yield increases from 18 to 20%. Thus, it is feasible to produce high-quality microporous activated carbon from date frond agro waste using N2 carbonization followed by physical activation with a CO2 and N2 mixture. This microporous activated carbon can be used as an adsorbent of heavy metals from wastewater, for NOx and SOx adsorption from ambient air and power plant emissions, for the purification of gases, in sewage treatment, and in many other applications.Keywords: activated carbon, date tree fronds, agricultural waste, applied chemistry
Procedia PDF Downloads 278
809 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System
Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu
Abstract:
Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), and state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage window and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never be a prognostic model that predicts battery state-of-health and avoids a safety risk before it occurs. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single-particle modeling approach is that it forces the use of the average current density in the calculation. The SPM is appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here. The use of multiple particles, combined with either linear or nonlinear charge-transfer reaction kinetics, makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model
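As background, the kind of low-cost RC equivalent-circuit model the abstract contrasts with physics-based models can be sketched as a first-order Thevenin model: coulomb counting for SOC, one RC pair for polarization, and a series resistance. All parameter values and the OCV map in this Python sketch are illustrative, not fitted to any real cell.

```python
import numpy as np

# First-order Thevenin (R0 + one RC pair) cell model -- the kind of low-cost
# equivalent-circuit model a BMS typically runs. Illustrative parameters only.
Q_nom = 5.0 * 3600                  # capacity in As (5 Ah)
R0, R1, C1 = 0.015, 0.010, 2000.0   # ohm, ohm, F

def ocv(soc):
    """Very rough open-circuit-voltage vs SOC map (hypothetical)."""
    return 3.0 + 1.2 * soc

def simulate(current, dt=1.0, soc0=0.8):
    """current[k] > 0 means discharge; returns terminal voltage per step."""
    soc, v1, out = soc0, 0.0, []
    for i in current:
        soc -= i * dt / Q_nom                   # coulomb counting
        v1 += dt * (-v1 / (R1 * C1) + i / C1)   # RC-pair polarization voltage
        out.append(ocv(soc) - i * R0 - v1)      # terminal voltage
    return np.array(out)

# 10 A discharge pulse for 5 min followed by a 5 min rest
profile = np.concatenate([np.full(300, 10.0), np.zeros(300)])
v = simulate(profile)
print(f"voltage at end of pulse: {v[299]:.3f} V, after rest: {v[-1]:.3f} V")
```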
Procedia PDF Downloads 111
808 Biomimicked Nano-Structured Coating Elaboration by Soft Chemistry Route for Self-Cleaning and Antibacterial Uses
Authors: Elodie Niemiec, Philippe Champagne, Jean-Francois Blach, Philippe Moreau, Anthony Thuault, Arnaud Tricoteaux
Abstract:
Hygiene of equipment in contact with users is an important issue in the railroad industry. The numerous cleanings required to eliminate bacteria and dirt are costly, and mechanical stresses on contact parts are observed daily. It is therefore of interest to develop a self-cleaning and antibacterial coating with sufficient adhesion and good resistance against mechanical and chemical stresses. To this end, a Ph.D. thesis co-financed by the Hauts-de-France region and the Maubeuge Val-de-Sambre conurbation authority has been under way since October 2017, building on earlier studies carried out by the Laboratory of Ceramic Materials and Processing. A soft chemical route has been implemented to impart a lotus effect to metallic substrates. It involves the synthesis of nanometric zinc oxide in solution below 100°C. The originality here consists in varying the surface texturing by modifying the synthesis time of the species in solution, which helps to adjust wettability. Nanostructured zinc oxide has been chosen because of its inherent photocatalytic effect, which can activate the degradation of organic substances. Two heating methods have been compared: conventional heating and microwave assistance. The tested substrates are made of stainless steel to conform to transport uses. Substrate preparation was the first step of the protocol: a meticulous cleaning of the samples is applied. The main goal of the elaboration protocol is to fix enough zinc-based seeds so that they grow into the desired nanorod shape in the next step. To improve adhesion, a silica gel has been formulated and optimized to ensure chemical bonding between the substrate and the zinc seeds. The last step consists of depositing a long-chain carbonated organosilane to enhance the superhydrophobic property of the coating. The quasi-proportionality between the reaction time and the nanorod length will be demonstrated. Water contact angles (above 150°) and roll-off angles at different steps of the process will be presented. The antibacterial effect has been proved with Escherichia coli, Staphylococcus aureus, and Bacillus subtilis: the bacterial mortality rate is found to be four times higher than on a non-treated substrate. Photocatalytic experiments were carried out with different dye solutions in contact with treated samples under UV irradiation; spectroscopic measurements allow the degradation times to be determined according to the zinc quantity available on the surface. The final coating obtained is therefore not a monolayer but rather a stack of amorphous/crystalline/amorphous layers, which have been characterized by spectroscopic ellipsometry. We show that the thickness of the nanostructured oxide layer depends essentially on the synthesis time set in the hydrothermal growth step. A green, easy-to-process-and-control coating with self-cleaning and antibacterial properties has been synthesized with satisfactory surface structuring.Keywords: antibacterial, biomimetism, soft-chemistry, zinc oxide
Procedia PDF Downloads 142
807 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge
Authors: K. Eryilmaz, G. Mercanoglu
Abstract:
Aim: Purification of the final product, the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. The two most commonly used are C18 solid phase extraction (SPE) and elution from a weak cation exchanger cartridge. The SPE C18 method yields a final product of high purity, while elution from the weak cation exchanger cartridge is pH dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds for cases where the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of pH problems or colloidal impurities. Material and Methods: To form a colloidal impurity, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped on an SPE C18 cartridge. The cartridge was washed with 10 mL of saline to remove impurities into the waste vial. The product trapped in the cartridge was eluted with 2 mL of 50% ethanol and collected into the final product vial via a 0.22 μm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by HPLC (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: The UV and radioactivity detector traces of the HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method. Conclusion: The improved purification method can be used as an additional step to remove impurities that may result from lanthanide-peptide syntheses in which the weak cation exchange purification technique is used as the last step. The purity of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are two major advantages.Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis
Procedia PDF Downloads 162