Search results for: accelerator neutron source
250 Energy Efficiency of Secondary Refrigeration with Phase Change Materials and Impact on Greenhouse Gases Emissions
Authors: Michel Pons, Anthony Delahaye, Laurence Fournaison
Abstract:
Secondary refrigeration consists of splitting large-size direct-cooling units into volume-limited primary cooling units complemented by secondary loops for transporting and distributing cold. Such a design reduces refrigerant leaks, which represent a source of greenhouse gases emitted into the atmosphere. However, inserting the secondary circuit between the primary unit and the users’ heat exchangers (UHX) increases the energy consumption of the whole process, which induces an indirect emission of greenhouse gases. It is thus important to check whether that efficiency loss is sufficiently limited for the change to be globally beneficial to the environment. Among the likely secondary fluids, phase change slurries offer several advantages: they transport latent heat, they stabilize the heat exchange temperature, and the former evaporators can still be used as UHX. The temperature level can also be adapted to the desired cooling application. Herein, the slurry {ice in mono-propylene-glycol solution} (melting temperature Tₘ of −6°C) is considered for food preservation, and the slurry {mixed hydrate of CO₂ + tetra-n-butyl-phosphonium-bromide in aqueous solution of this salt + CO₂} (melting temperature Tₘ of 13°C) is considered for air conditioning. For the sake of thermodynamic consistency, the analysis encompasses the whole process, primary cooling unit plus secondary slurry loop, and the various properties of the slurries, including their non-Newtonian viscosity. The design of the whole process is optimized according to the properties of the chosen slurry and under explicit constraints. As a first constraint, all the units must deliver the same cooling power to the user. The other constraints concern the heat exchange areas, which are prescribed, and the flow conditions, which must prevent deposition and agglomeration of the solid particles transported in the slurry. Minimization of the total energy consumption leads to the optimal design.
In addition, the results are analyzed in terms of exergy losses, which makes it possible to highlight the couplings between the primary unit and the secondary loop. One important difference between the ice slurry and the mixed-hydrate one is the presence of gaseous carbon dioxide in the latter case. When the mixed-hydrate crystals melt in the UHX, CO₂ vapor is generated at a rate that depends on the phase change kinetics. The flow in the UHX, and its heat and mass transfer properties, are significantly modified. This effect has never been investigated before. Lastly, inserting the secondary loop between the primary unit and the users increases the temperature difference between the refrigerated space and the evaporator. This results in a loss of global energy efficiency, and therefore in an increased energy consumption. The analysis shows that this loss of efficiency is not critical in the first case (Tₘ = −6°C), while the second case leads to more ambiguous results, partially because of the higher melting temperature. The consequences in terms of greenhouse gas emissions are also analyzed.
Keywords: exergy, hydrates, optimization, phase change material, thermodynamics
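The efficiency penalty the abstract describes can be illustrated with a minimal sketch (the temperatures below are illustrative assumptions, not values from the study): inserting a secondary loop forces the evaporator to run colder, which reduces the ideal (Carnot) coefficient of performance of the primary unit.

```python
def carnot_cop(t_evap_c: float, t_cond_c: float) -> float:
    """Ideal refrigeration COP from evaporator/condenser temperatures (deg C)."""
    t_evap = t_evap_c + 273.15
    t_cond = t_cond_c + 273.15
    return t_evap / (t_cond - t_evap)

# Direct cooling: evaporator temperature close to the user temperature.
cop_direct = carnot_cop(t_evap_c=2.0, t_cond_c=35.0)

# With a secondary slurry loop, the evaporator must run colder to cover the
# extra temperature difference across the secondary circuit (assumed values).
cop_secondary = carnot_cop(t_evap_c=-4.0, t_cond_c=35.0)

print(f"direct:    COP = {cop_direct:.2f}")
print(f"secondary: COP = {cop_secondary:.2f}")
```

The sketch only captures the ideal-cycle trend; the paper's optimization accounts for the full process, including slurry rheology and exchanger constraints.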
Procedia PDF Downloads 131
249 Characterisation, Extraction of Secondary Metabolite from Perilla frutescens for Therapeutic Additives: A Phytogenic Approach
Authors: B. M. Vishal, Monamie Basu, Gopinath M., Rose Havilah Pulla
Abstract:
Though there are several methods of synthesizing silver nanoparticles, green synthesis has a distinct appeal: from cost-effectiveness to ease of synthesis, the process is simplified in the best possible way, and it remains one of the most explored topics. This study of extracting secondary metabolites from Perilla frutescens and using them for therapeutic additives has its own significance. Unlike other studies conducted so far, this study aims to synthesize silver nanoparticles from Perilla frutescens using three available forms of the plant: leaves, seeds, and commercial leaf extract powder. Perilla frutescens, commonly known as the 'Beefsteak Plant', is a perennial plant and belongs to the mint family. The plant comprises two varieties: frutescens crispa and frutescens frutescens. The variety frutescens crispa (commonly known as 'Shisho' in Japanese) is generally used for edible purposes. Its leaves occur in two forms, varying in color: red with purple streaks, and green with a crinkly pattern. This variety is aromatic due to the presence of two major compounds: polyphenols and perillaldehyde. The red (purple-streaked) form of this plant owes its color to a pigment, perilla anthocyanin. The variety frutescens frutescens (commonly known as 'Egoma' in Japanese) is the main source of perilla oil. This variety is also aromatic, but in this case, the major compound that gives the aroma is perilla ketone, or egoma ketone. Shisho grows short compared with wild sesame, and both produce seeds. The seeds of wild sesame are large and soft, whereas those of Shisho are small and hard. The seeds have a large proportion of lipids, about 38-45 percent. In addition, the seeds contain a large quantity of omega-3 fatty acids and linoleic acid, an omega-6 fatty acid. Perilla leaf extract has also been used to obtain gold and silver nanoparticles.
The yield comparison in all cases has been carried out, and the optimal process conditions were adjusted with efficiency in mind. The characterization of the secondary metabolites includes GC-MS and FTIR, which can be used to identify the components that actually help in synthesizing silver nanoparticles. The analysis of silver was done through a series of characterization tests, including XRD, UV-Vis, EDAX, and SEM. After the synthesis, for use as therapeutic additives, toxin analysis was done, and the results were tabulated. The synthesis of silver nanoparticles was done in a series of multiple cycles of extraction from leaves, seeds, and commercially purchased leaf extract. The yield and efficiency comparison was done to bring out the best and cheapest possible way of synthesizing silver nanoparticles using Perilla frutescens. The synthesized nanoparticles can be used in therapeutic drugs, which have a wide range of applications, from burn treatment to cancer treatment. This will, in turn, replace the traditional processes of synthesizing nanoparticles, as this method will prove effective in terms of cost and environmental implications.
Keywords: nanoparticles, green synthesis, Perilla frutescens, characterisation, toxin analysis
Procedia PDF Downloads 233
248 Geographic Information Systems and a Breath of Opportunities for Supply Chain Management: Results from a Systematic Literature Review
Authors: Anastasia Tsakiridi
Abstract:
Geographic information systems (GIS) have been utilized in numerous spatial problems, such as site research, land suitability, and demographic analysis. In addition, GIS has been applied in scientific fields like geography, health, and economics. In business studies, GIS has been used to provide insights and spatial perspectives on demographic trends, spending indicators, and network analysis. To date, information regarding the available usages of GIS in supply chain management (SCM) and how these analyses can benefit businesses is limited. A systematic literature review (SLR) of the peer-reviewed academic literature of the last five years was conducted, aiming to explore the existing usages of GIS in SCM. The searches were performed in 3 databases (Web of Science, ProQuest, and Business Source Premier) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. The analysis resulted in 79 papers. The results indicate that the existing GIS applications used in SCM were in the following domains: a) network/transportation analysis (in 53 of the papers), b) location-allocation site search/selection (multiple-criteria decision analysis) (in 45 papers), c) spatial analysis (demographic or physical) (in 34 papers), d) combination of GIS and supply chain/network optimization tools (in 32 papers), and e) visualization/monitoring or building information modeling applications (in 8 papers). An additional categorization of the literature was conducted by examining the usage of GIS in the supply chain (SC) by business sector, as indicated by the volume of the papers. The results showed that GIS is mainly being applied in the SC of the biomass biofuel/wood industry (33 papers).
Other industries currently utilizing GIS in their SC were the logistics industry (22 papers), the humanitarian/emergency/health care sector (10 papers), the food/agro-industry sector (5 papers), the petroleum/coal/shale gas sector (3 papers), the faecal sludge sector (2 papers), the recycling and product footprint industry (2 papers), and the construction sector (2 papers). The results were also presented by the geography of the included studies and the GIS software used, to provide critical business insights and suggestions for future research. The results showed that research case studies of GIS in SCM were conducted in 26 countries (mainly in the USA) and that the most prominent GIS software provider was the Environmental Systems Research Institute’s ArcGIS (in 51 of the papers). This study is a systematic literature review of the usage of GIS in SCM. The results showed that GIS capabilities can offer substantial benefits in SCM decision-making by providing key insights into cost minimization, supplier selection, facility location, SC network configuration, and asset management. However, as presented in the results, only eight industries/sectors currently use GIS in their SCM activities. These findings may offer essential tools to SC managers who seek to optimize SC activities and/or minimize logistics costs, and to consultants and business owners who want to make strategic SC decisions. Furthermore, the findings may be of interest to researchers aiming to investigate unexplored research areas where GIS may improve SCM.
Keywords: supply chain management, logistics, systematic literature review, GIS
Procedia PDF Downloads 142
247 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle
Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito
Abstract:
Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial solution for achieving decarbonization in the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them extremely efficient and low in direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations. Based on this monitoring, it can also manage the battery optimally to extend its life. Among the parameters monitored by the BMS, the main ones are State of Charge (SoC), State of Health (SoH), and State of Power (SoP). The evaluation of these parameters can be carried out in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable. Moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect performance and battery life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations.
In this article, a hybrid approach based on a neural network and a statistical method for real-time estimation of the SoC, SoH, and SoP parameters of interest is proposed. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% - SoC 50%), (ii) partial charge (SoC 50% - SoC 80%), (iii) deep discharge (SoC 80% - SoC 30%), and (iv) full charge (SoC 30% - SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and comparing it with the reference method found in the literature.
Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks
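As a minimal sketch of the two evaluation metrics named above (the SoC traces are hypothetical, not data from the study), RMSE and MAPE can be computed as follows:

```python
import math

def rmse(estimates, references):
    """Root mean square error between estimated and reference values."""
    return math.sqrt(
        sum((e - r) ** 2 for e, r in zip(estimates, references)) / len(references)
    )

def mape(estimates, references):
    """Mean absolute percentage error (reference values must be non-zero)."""
    return 100.0 * sum(
        abs((e - r) / r) for e, r in zip(estimates, references)
    ) / len(references)

# Hypothetical SoC traces (%) over a short window of the driving profile.
soc_est = [99.0, 80.5, 61.0, 52.0]
soc_ref = [100.0, 80.0, 60.0, 50.0]

print(f"RMSE = {rmse(soc_est, soc_ref):.3f}")   # same unit as SoC (%)
print(f"MAPE = {mape(soc_est, soc_ref):.3f} %")
```

RMSE penalizes large deviations more heavily, while MAPE normalizes each error by the reference value, which is why both are commonly reported together.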
Procedia PDF Downloads 67
246 Mobile Genetic Elements in Trematode Himasthla Elongata Clonal Polymorphism
Authors: Anna Solovyeva, Ivan Levakin, Nickolai Galaktionov, Olga Podgornaya
Abstract:
For a long time, animals that reproduce asexually were thought to have the same genotypes across generations. However, some refuting examples were found, and mobile genetic elements (MGEs), or transposons, are considered the most probable source of genetic instability. Their dispersed nature and ability to change their genomic localization enable MGEs to be efficient mutators. Hence, the study of the genomic impact of MGEs requires an appropriate object, one that both harbours representative amounts of various MGEs and offers options to evaluate their genomic influence. Animals that reproduce asexually seem to be a decent model for studying the impact of MGEs on genomic variability. We found the small marine trematode Himasthla elongata (Himasthlidae) to be a good model for such investigation, as it has a small genome, diverse MGEs, and parthenogenetic stages in its lifecycle. In the current work, the clonal diversity of cercariae was traced with the AFLP (amplified fragment length polymorphism) method, diverse zones from the electrophoretic patterns were cloned, and the nature of the fragments was explored. Polymorphic patterns of individual cercariae AFLP-based fingerprints are enriched with retrotransposons of different families. The bulk of those sequences are represented by open reading frames of non-long-terminal-repeat elements (non-LTR), while long-terminal-repeat elements (LTR) occur to a lesser extent in the variable fragments of the AFLP array. The CR1 elements, exposed in both polymorphic and conservative patterns, are remarkably more frequent than the other non-LTR retrotransposons. These data were confirmed by shotgun sequencing based on the Illumina HiSeq 2500 platform. Individual cercariae of the same clone (i.e., originating from a single miracidium and inhabiting one host) show a varied distribution of the MGE families detected in the sequenced AFLP patterns. The most numerous are the CR1 and RTE-Bov retrotransposons, typical for trematode genomes.
We also identified LTR retrotransposons of the Pao and Gypsy families, along with DNA transposons of the CMC-EnSpm, Tc1/Mariner, MuLE-MuDR, and Merlin families. We detected many of them in the H. elongata transcriptome. Such an uneven distribution of MGEs across the AFLP sequence sets reflects the different patterns of transposon spreading in cercarial genomes, as transposons affect the genome in many ways (ectopic recombination, gene structure interruption, epigenetic silencing). It is considered that they play a key role in the origins of trematode clonal polymorphism. The authors greatly appreciate the help received at the Kartesh White Sea Biological Station of the Russian Academy of Sciences Zoological Institute. This work is funded by RSF 19-74-20102 and RFBR 17-04-02161 grants and the research program of the Zoological Institute of the Russian Academy of Sciences (project number AAAA-A19-119020690109-2).
Keywords: AFLP, clonal polymorphism, Himasthla elongata, mobile genetic elements, NGS
Procedia PDF Downloads 124
245 Carbon Footprint Assessment and Application in Urban Planning and Geography
Authors: Hyunjoo Park, Taehyun Kim, Taehyun Kim
Abstract:
Human life, activity, and culture depend on the wider environment. Cities offer economic opportunities for goods and services, but cannot exist in environments without food, energy, and water supply. Technological innovation in energy supply and transport speeds up the expansion of urban areas and their physical separation from agricultural land. As a result, the division of urban and agricultural areas causes more energy demand for transporting food and goods between regions. As energy resources are being depleted all over the world, the environmental impact crossing the boundaries of cities is also growing. While advances in energy and other technologies can reduce the environmental impact of consumption, there is still a gap between energy supply and demand with current technology, even in technically advanced countries. Therefore, reducing energy demand is more realistic than relying solely on the development of technology for sustainable development. The purpose of this study is to introduce the application of carbon footprint assessment in the fields of urban planning and geography. In urban studies, carbon footprints have been assessed at different geographical scales, such as nation, city, region, household, and individual. Carbon footprint assessment for a nation or a city is possible by using national or city-level statistics on energy consumption categories. By means of carbon footprint calculation, it is possible to compare the ecological capacity and deficit among nations and cities. The carbon footprint also offers great insight into the geographical distribution of carbon intensity at the regional level in the agricultural field. The study presents the background of carbon footprint applications in urban planning and geography through case studies, such as identifying sustainable land-use measures. At the micro level, a footprint quiz or survey can be adapted to measure household and individual carbon footprints.
For example, the first case study collected carbon footprint data from a survey measuring the home energy use and travel behavior of 2,064 households in eight cities in Gyeonggi-do, Korea. The second case study analyzed the effects of net and gross population densities on the carbon footprint of residents at an intra-urban scale in the capital city of Seoul, Korea. In this study, the individual carbon footprint of residents was calculated by converting the carbon intensities of respondents' home and travel fossil fuel use to the unit of metric tons of carbon dioxide (tCO₂), multiplying by the conversion factors equivalent to the carbon intensities of each energy source, such as electricity, natural gas, and gasoline. The carbon footprint is an important concept not only for mitigating climate change but also for sustainable development. As seen in the case studies, carbon footprints may be measured and applied in various spatial units, including but not limited to countries and regions. These examples may provide new perspectives on carbon footprint application in planning and geography. In addition, further concerns for the consumption of food, goods, and services can be included in carbon footprint calculation in the areas of urban planning and geography.
Keywords: carbon footprint, case study, geography, urban planning
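The conversion step described above (energy use per source multiplied by a source-specific carbon-intensity factor, summed to tCO₂) can be sketched as follows; the emission factors and the survey response are illustrative assumptions, not values from the study:

```python
# Illustrative emission factors (tCO2 per unit of consumption); real factors
# are country-specific and should come from official emissions inventories.
FACTORS = {
    "electricity_kwh": 0.0004,   # tCO2 per kWh (assumed)
    "natural_gas_m3": 0.0019,    # tCO2 per m3 (assumed)
    "gasoline_l": 0.0023,        # tCO2 per litre (assumed)
}

def household_footprint(consumption: dict) -> float:
    """Sum each energy use weighted by its emission factor (tCO2 per year)."""
    return sum(FACTORS[source] * amount for source, amount in consumption.items())

# Hypothetical annual survey response for one household.
survey_response = {"electricity_kwh": 3600, "natural_gas_m3": 800, "gasoline_l": 900}
print(f"{household_footprint(survey_response):.2f} tCO2/year")
```

The same pattern scales from a single survey response to a city-level dataset by summing footprints over all respondents.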
Procedia PDF Downloads 288
244 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators
Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy
Abstract:
Once-Through-Steam-Generators (OTSG) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner and the radiant and convective sections. Natural gas is burned through staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed to the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSG in operation has increased, as have the associated emissions of environmental pollutants, especially the nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emission. With the increase of computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation processes. Moreover, to optimize the burner operating conditions regarding NOₓ emission, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation-schedule constraints. Therefore, the application of CFD seems more adequate to provide guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM.
RANS (Reynolds-Averaged Navier-Stokes) equations, closed by the k-epsilon turbulence model and combined with the Eddy Dissipation Concept to model the combustion, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields to perform this validation. Results show fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are pinpointed and correlated to the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSG. Sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.
Keywords: combustion, computational fluid dynamics, nitrous oxides emission, once-through-steam-generators
Procedia PDF Downloads 113
243 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance
Authors: Aleksandra Czubek
Abstract:
As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. 
AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.
Keywords: ip, technology, copyright, data, infringement, comparative analysis
Procedia PDF Downloads 18
242 Persistent Organic Pollutant Level in Challawa River Basin of Kano State, Nigeria
Authors: Abdulkadir Sarauta
Abstract:
Almost every type of industrial process involves the release of trace quantities of toxic organic and inorganic compounds that end up in receiving water bodies. This study was aimed at assessing the persistent organic pollutant level in the Challawa River Basin of Kano State, Nigeria. The research was designed to identify the presence of PCBs and PAHs in receiving water bodies in the study area, assess the PCB and PAH concentrations in the receiving water body of the Challawa system, evaluate the concentration levels of PCBs and PAHs in fish in the study area, determine the concentration levels of PCBs and PAHs in crops irrigated in the study area, and compare the concentrations of PCBs and PAHs with the acceptable limits set by the Nigerian, EU, U.S., and WHO standards. Data were collected using a reconnaissance survey, site inspection, field survey, and laboratory experiments, as well as secondary data sources. A total of 78 samples were collected through stratified systematic random sampling (i.e., 26 samples each of water, crops, and fish); three sampling points were chosen and designated A, B, and C along the stretch of the river (i.e., upstream, midstream, and downstream) from Yan Danko Bridge to Tambirawa Bridge. The results show that polychlorinated biphenyls (PCBs) were not detected, while polycyclic aromatic hydrocarbons (PAHs) were detected in all the samples analysed along the stretch of the Challawa River Basin, in order to assess the contribution of human activities to global environmental pollution. The total ΣPAH concentrations in water samples ranged between 0.001 and 0.087 mg/l (ΣPCB: 0.00 mg/l throughout), while crop samples ranged from 2.0 to 8.1 ppb and fish samples from 2.0 to 6.7 ppb. The samples are polluted, because most of the parameters analysed exceed the threshold limits set by the WHO, Nigerian, U.S., and EU standards.
The analytical results revealed that the chemical concentrations in water, crops, and fish are significantly high at Zamawa village, which lies very close to the Challawa industrial estate and the main effluent discharge point; drinking water around the study area is not potable. Analysis of variance, with Bartlett's test, showed a significant difference only in water (P < 0.05); crop concentrations showed no difference, and likewise the fish samples. This is a health concern that may increase the incidence of tumour-related diseases such as skin, lung, bladder, and gastrointestinal cancers, and it indicates a substantial failure of pollution abatement measures in the area. In conclusion, it can be said that industrial activities and effluent have an impact on the Challawa River Basin and its environs, especially on those living in the immediate surroundings. Based on these findings, it is recommended that the industries treat their liquid effluent properly by installing modern treatment plants.
Keywords: Challawa River Basin, organic, persistent, pollutant
Procedia PDF Downloads 575
241 Adapting an Accurate Reverse-time Migration Method to USCT Imaging
Authors: Brayden Mi
Abstract:
Reverse time migration (RTM) has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model, through interpretive model building, seismic tomography, or full waveform inversion, and the reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D imaging of simple media to handling present-day full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the “bent-ray” method. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray-tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration, on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms in the seismic industry are typically implemented from a flat datum. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided.
Such flexibility of RTM can be conveniently exploited for application in USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data is collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piece-wise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images will produce a full image of the organ, with a much-reduced noise level compared with individual partial images.
Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation
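The per-shot back-propagation, imaging-condition, and stacking steps described above can be sketched numerically. The following Python/NumPy fragment is an illustrative toy (synthetic wavefields, not PSPI extrapolation): it applies a zero-lag cross-correlation imaging condition per shot and stacks the partial images, showing how stacking suppresses aperture-limited noise.

```python
import numpy as np

def imaging_condition(src_wf, rec_wf):
    # Zero-lag cross-correlation: sum over time of the forward-propagated
    # source wavefield times the back-propagated receiver wavefield.
    return np.sum(src_wf * rec_wf, axis=0)

def stack_partial_images(partials):
    # Stacking per-shot partial images reduces aperture-limited noise.
    return np.mean(partials, axis=0)

rng = np.random.default_rng(0)
nt, nz, nx = 64, 16, 16
reflector = np.zeros((nz, nx))
reflector[8, :] = 1.0                      # toy reflector at depth index 8

partials = []
for shot in range(4):
    t = rng.standard_normal((nt, 1, 1))    # shared source signature
    src = t * reflector + 0.1 * rng.standard_normal((nt, nz, nx))
    rec = t * reflector + 0.1 * rng.standard_normal((nt, nz, nx))
    partials.append(imaging_condition(src, rec))

image = stack_partial_images(partials)     # bright where wavefields correlate
```

In the stacked image, only the reflector row, where the two wavefields are correlated, stands out above the incoherent background.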
Procedia PDF Downloads 74
240 4D Monitoring of Subsurface Conditions in Concrete Infrastructure Prior to Failure Using Ground Penetrating Radar
Authors: Lee Tasker, Ali Karrech, Jeffrey Shragge, Matthew Josh
Abstract:
Monitoring the deterioration of concrete infrastructure is an important assessment tool for engineers, yet detecting deterioration within a structure can be difficult. If a failure crack, or fluid seepage through such a crack, is observed from the surface, the source location of the deterioration is often not known. Geophysical methods are used to assist engineers with assessing the subsurface conditions of materials. Techniques such as Ground Penetrating Radar (GPR) provide information on the location of buried infrastructure such as pipes and conduits, positions of reinforcements within concrete blocks, and regions of voids/cavities behind tunnel lining. This experiment underlines the application of GPR as an infrastructure-monitoring tool to highlight and monitor regions of possible deterioration within a concrete test wall due to an increase in the generation of fractures; in particular, during a period of applied load up to and including structural failure. A three-point load was applied to a concrete test wall of dimensions 1700 x 600 x 300 mm in increments of 10 kN, until the wall structurally failed at 107.6 kN. At each increment of applied load, the load was kept constant and the wall was scanned using GPR along profile lines across the wall surface. The measured radar amplitude responses of the GPR profiles, at each applied load interval, were reconstructed into depth-slice grids and presented at fixed depth-slice intervals. The corresponding depth slices were subtracted between data sets to compare and monitor changes in the radar amplitude response. At lower values of applied load (i.e., 0-60 kN), few changes were observed in the differences of radar amplitude responses between data sets.
At higher values of applied load (i.e., 100 kN), closer to structural failure, larger differences in radar amplitude response between data sets were highlighted in the GPR data; up to a 300% increase in radar amplitude response at some locations between the 0 kN and 100 kN radar datasets. Distinct regions were observed in the 100 kN difference dataset (i.e., 100 kN-0 kN) close to the location of the final failure crack. The key regions observed were a conical feature located between approximately 3.0-12.0 cm depth from the surface and a vertical linear feature located approximately 12.1-21.0 cm depth from the surface. These key regions have been interpreted as locations exhibiting an increased change in pore space due to increased mechanical loading, an increase in the volume of micro-cracks, or the development of a larger macro-crack. The experiment showed that GPR is a useful geophysical monitoring tool to assist engineers with highlighting and monitoring regions of large changes in radar amplitude response that may be associated with locations of significant internal structural change (e.g., crack development). GPR is a non-destructive technique that is fast to deploy in a production setting. GPR can assist with reducing risk and costs in future infrastructure maintenance programs by highlighting and monitoring locations within the structure exhibiting large changes in radar amplitude over calendar time.
Keywords: 4D GPR, engineering geophysics, ground penetrating radar, infrastructure monitoring
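The depth-slice subtraction used to track amplitude changes between load increments can be sketched as follows; the grids and amplitude values here are hypothetical placeholders, not the experimental data.

```python
import numpy as np

def amplitude_change(baseline, loaded, eps=1e-12):
    # Per-cell difference and percent change in radar amplitude between a
    # baseline (0 kN) depth-slice grid and a grid acquired under load.
    diff = loaded - baseline
    pct = 100.0 * diff / (np.abs(baseline) + eps)
    return diff, pct

baseline = np.full((5, 5), 10.0)   # 0 kN depth slice (arbitrary units)
loaded = baseline.copy()
loaded[2, 2] = 40.0                # localized response near the future failure crack
diff, pct = amplitude_change(baseline, loaded)
```

A cell whose amplitude rises from 10 to 40 shows the kind of 300% increase reported above, while unchanged cells difference to zero.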
Procedia PDF Downloads 179
239 Population Diversity of Dalmatian Pyrethrum Based on Pyrethrin Content and Composition
Authors: Filip Varga, Nina Jeran, Martina Biosic, Zlatko Satovic, Martina Grdisa
Abstract:
Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.), a species endemic to the eastern Adriatic coastline, is the source of the natural insecticide pyrethrin. Pyrethrin is a mixture of six compounds (pyrethrin I and II, cinerin I and II, jasmolin I and II) that exhibits high insecticidal activity with no detrimental effects on the environment. A recently optimized matrix solid-phase dispersion (MSPD) method, using florisil as the sorbent, acetone-ethyl acetate (1:1, v/v) as the elution solvent, and anhydrous sodium sulfate as the drying agent, was utilized to extract the pyrethrins from 10 wild populations (20 individuals per population) distributed along the Croatian coast. All six components in the extracts were qualitatively and quantitatively determined by high-performance liquid chromatography with a diode array detector (HPLC-DAD). Pearson’s correlation index was calculated between pyrethrin compounds, and differences between the populations were tested using analysis of variance. Additionally, the correlation of each pyrethrin component with spatio-ecological variables (bioclimate, soil properties, elevation, solar radiation, and distance from the coastline) was calculated. Total pyrethrin content ranged from 0.10% to 1.35% of dry flower weight, averaging 0.58% across all individuals. Analysis of variance revealed significant differences between populations based on all six pyrethrin compounds and total pyrethrin content. On average, the lowest total pyrethrin content was found in the population from the Pelješac peninsula (0.22% of dry flower weight), in which a total pyrethrin content lower than 0.18% was detected in 55% of the individuals. The highest average total pyrethrin content was observed in the population from the island of Zlarin (0.87% of dry flower weight), in which a total pyrethrin content higher than 1.00% was recorded in only 30% of the individuals.
The pyrethrin I/pyrethrin II ratio, as a measure of extract quality, ranged from 0.21 (population from the island of Čiovo) to 5.88 (population from the island of Mali Lošinj), with an average of 1.77 across all individuals. By far the lowest quality of extracts was found in the population from Mt. Biokovo (pyrethrin I/II ratio lower than 0.72 in 40% of individuals) due to the high pyrethrin II content typical of this population. Pearson’s correlation index revealed a highly significant positive correlation between pyrethrin I content and total pyrethrin content, and a strong negative correlation between pyrethrin I and pyrethrin II. The results of this research clearly indicate high intra- and interpopulation diversity of Dalmatian pyrethrum with regard to pyrethrin content and composition. The information obtained has potential use in plant genetic resources conservation and biodiversity monitoring. Possibly the largest potential lies in designing breeding programs aimed at increasing pyrethrin content in commercial breeding lines and reintroduction into agriculture in Croatia. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project ‘Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential’ (PyrDiv) (IP-06-2016-9034).
Keywords: Dalmatian pyrethrum, HPLC, MSPD, pyrethrin
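The quantities reported above (total pyrethrin content, the pyrethrin I/II quality ratio, and the Pearson correlation between components) can be computed as in this minimal sketch; the per-individual contents below are invented for illustration, not measured HPLC-DAD values.

```python
import numpy as np

# Hypothetical per-individual contents (% of dry flower weight); column order:
# pyrethrin I, pyrethrin II, cinerin I, cinerin II, jasmolin I, jasmolin II
samples = np.array([
    [0.40, 0.20, 0.05, 0.04, 0.02, 0.01],
    [0.10, 0.30, 0.03, 0.05, 0.01, 0.02],
    [0.55, 0.15, 0.06, 0.03, 0.02, 0.01],
])

total = samples.sum(axis=1)              # total pyrethrin content per individual
ratio = samples[:, 0] / samples[:, 1]    # pyrethrin I / pyrethrin II quality ratio
# Pearson correlation between pyrethrin I content and total content
r = np.corrcoef(samples[:, 0], total)[0, 1]
```

With these toy numbers the pyrethrin I content dominates the total, so the correlation comes out strongly positive, as the abstract reports for the real populations.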
Procedia PDF Downloads 142
238 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá
Authors: Dayron Camilo Bermudez Mendoza
Abstract:
Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. 
The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing analyses on the per-square-meter property values in each city block. These results are expected to be presented and published at the upcoming conference, embodying multidisciplinary knowledge integration and culminating in a master's thesis.
Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility
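A hedonic price analysis of the kind described can be sketched as an ordinary least squares regression of log property value per square meter on an emissions proxy plus controls. The data below are simulated and the coefficients are illustrative only, not estimates from the Bogotá database.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
emissions = rng.uniform(0, 10, n)      # hypothetical local PM2.5 proxy per block
area = rng.uniform(40, 120, n)         # hypothetical size control (m^2)
# Simulated log price-per-m2 with a known emissions discount of -0.03
log_price = 8.0 - 0.03 * emissions + 0.002 * area + rng.normal(0, 0.01, n)

# OLS via least squares: columns are intercept, emissions, area
X = np.column_stack([np.ones(n), emissions, area])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
# beta[1] estimates the marginal (log) price discount per unit of emissions
```

Recovering a negative, statistically meaningful coefficient on the emissions proxy is what would indicate that buyers price in local air quality.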
Procedia PDF Downloads 58
237 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports
Authors: A. Falenski, A. Kaesbohrer, M. Filter
Abstract:
Introduction: Risk assessments can be performed in various ways and at different degrees of complexity. In order to assess risks associated with imported foods, additional information needs to be taken into account compared to a risk assessment of regional products. The present review is an overview of currently available best-practice approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review has been performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string “(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk”). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search resulted in 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross-references were identified as being of interest for the present study. These included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) or the Codex Alimentarius Commission was referenced in each of the IRAs, for imports of animals or imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as the source of food. Methods ranged from full quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs in which decision trees or severity tables were set up using parameter estimations based on expert opinions. Calculations were done using @Risk, R or Excel.
Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handystatus II, FAOSTAT, Eurostat, TRACES), accessible on a national level (e.g., herd information), or only open to a small group of people (flight passenger import data at national airport customs offices). In the IRAs, an uncertainty analysis was mentioned in some cases, but calculations were performed in only a few. Conclusion: The current state of the art in the assessment of risks of imported foods is characterized by great heterogeneity in general methodology and data used. Often information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g., in a crisis situation.
Keywords: import risk assessment, review, tools, food import
Procedia PDF Downloads 302
236 The Portrayal of Journalists in K-dramas Leaves an Impression on Viewers
Authors: Susan Grantham, Emily S. Kinsky
Abstract:
As the popularity of K-drama viewership increases, the depiction of journalists’ news gathering and distribution behavior in these series can have an impact on viewers’ perceptions of journalism practices in Korea. Studies have shown that viewers are impacted both by their impressions of actual journalists delivering news and by fictional portrayals of journalists they have seen. As mistrust in the media grows internationally, it is important to understand how journalists are viewed. K-dramas are an increasingly popular export and are consumed across the globe. In 2021, Netflix had 74 million subscribers in the US/Canadian market, about 36% of its overall subscriber base, with an increase of about 16 million new subscribers during the pandemic. A Statista November 2023 survey found that K-dramas are moderately popular (27%) or very popular (41%). While Hallyu has grown steadily over the past decade, between 2019 and 2021 viewership numbers for TV series produced in South Korea went up a staggering 200% in the U.S. Additionally, a 2023 KOCCA report about K-drama viewership in the U.S. found that, within the past year, male viewership became nearly equal to female viewership. This study evaluated how viewers perceive journalists and journalistic practices in South Korea as portrayed in eight K-drama series. Six in-depth interviews and two focus groups were conducted to evaluate viewer perceptions of journalism practices as portrayed in K-dramas. This study builds upon two previous research projects: a content analysis of the same eight K-dramas featuring journalists in a primary role and whose journalistic work is pivotal to the plot, followed by subsequent in-depth interviews with South Korean journalists. The K-dramas in the sample featured both print and broadcast journalists.
Using clips from these K-drama series that featured journalistic practices, as well as pressure faced by journalists, participants were asked a series of questions about their impressions of journalists and journalism in South Korea and how realistic they perceived these portrayals to be. The participants comprised viewers who frequently watched K-dramas and those who occasionally or seldom watched them. The initial findings show that regardless of how frequently the participants watched K-dramas, they indicated that the presentation of the journalists seemed fairly realistic and that the journalists behaved ethically. Participants felt the portrayal was comparable to their impression of how journalists behave in the United States. This was also true of the internal pressure shown in the clips, in which journalists’ supervisors pushed them to support the media company’s political and business positions. The amount of negative feedback toward the journalists from the general public, as shown in the clips, seemed less realistic to the participants. The idea of ‘fake news’ as a function of the news consumer’s own personal beliefs, versus actual misinformation, resonated with the participants. Additional research is being conducted. Because Korea is an important source of news and information in East Asia, it is important to understand the potential perceptions of consumers and how they view journalistic practices in Korea.
Keywords: ethical journalism, K-drama, Korean journalists, viewer perceptions
Procedia PDF Downloads 20
235 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach
Authors: Jianli Jiang, Bai-Chen Xie
Abstract:
The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions. Evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating energy and environmental pressures. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interaction between DMUs. Ignoring these inter-regional links may introduce a systematic bias into the efficiency analysis: for instance, renewable power generated in a certain region may benefit adjacent regions, while its SO2 and CO2 emissions act oppositely. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs when measuring efficiency. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables show a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experienced a visible surge from 2015 and then a sharp downtrend from 2019, mirroring the trend of the power transmission department.
This phenomenon benefits from the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows a declining trend overall, which is reasonable once RE power is taken into consideration. The installed capacity of RE power in 2020 was 4.40 times that in 2014, while power generation was only 3.97 times higher; in other words, power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power increases rapidly with the growth of RE power generation. These two aspects give the EE of the power generation department its declining trend. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework, which sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry, country, and so on.
Keywords: spatial network DEA, environmental efficiency, sustainable development, power system
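The global Moran's I statistic used above to test for spatial dependence can be computed directly from a value vector and a spatial weight matrix. The weights and values below are a minimal hypothetical example (four regions on a line, rook contiguity), not the Chinese provincial data.

```python
import numpy as np

def global_morans_i(x, W):
    # Global Moran's I: n * z'Wz / (S0 * z'z), where z are deviations
    # from the mean and S0 is the sum of all spatial weights.
    n = x.size
    z = x - x.mean()
    num = n * (z @ W @ z)
    den = W.sum() * (z @ z)
    return num / den

# 4 regions on a line; neighbors share an edge (rook contiguity), zero diagonal
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
clustered = np.array([1.0, 1.0, 5.0, 5.0])  # similar values sit next to each other
I = global_morans_i(clustered, W)
```

A positive I, as here, signals spatial clustering of like values; values near zero indicate spatial randomness.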
Procedia PDF Downloads 108
234 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution
Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko
Abstract:
Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which results in the mapping of the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: a slow and unreliable mechanical spatial scan, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on the self-injection locking of semiconductor lasers to high-quality integrated Si3N4 microresonators. We obtained robust 1400-1700 nm comb generation with 150 GHz or 1 THz line spacing and measured sub-1-kHz Lorentzian linewidths of stable, MHz-spaced beat notes in a GHz band, using two separate chips, each pumped by its own self-injection-locked laser.
A deep investigation of the SIL dynamics allowed us to identify a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as pumps. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. In order to test the advantages of the proposed techniques, we experimentally measured the minimum detectable speed of a reflective object. It has been shown that the narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with velocity resolution down to 16 nm/s, while the free-running (non-SIL) diode laser only allowed 160 nm/s with good accuracy. The results obtained are in agreement with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking
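The Allan deviation used above to characterize ranging stability (e.g., the 12 nm figure at 13 µs averaging) can be estimated from a residual time series as follows. This is a plain non-overlapping estimator applied to synthetic white noise, not the experimental data.

```python
import numpy as np

def allan_deviation(y, taus):
    # Non-overlapping Allan deviation of a sampled series y for averaging
    # windows of `m` samples: sigma(m) = sqrt(0.5 * mean(diff(block means)^2)).
    out = []
    for m in taus:
        k = len(y) // m
        means = y[:k * m].reshape(k, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)
        out.append(np.sqrt(avar))
    return np.array(out)

rng = np.random.default_rng(1)
y = rng.normal(0, 1e-6, 10000)     # synthetic white-noise ranging residuals (m)
adev = allan_deviation(y, taus=[1, 10, 100])
# for white noise the Allan deviation falls off as 1/sqrt(tau)
```

On white noise the curve drops as 1/sqrt(tau); a flattening or rise at long tau would instead indicate drift in the ranging system.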
Procedia PDF Downloads 73
233 Archaeoseismological Evidence for a Possible Destructive Earthquake in the 7th Century AD at the Ancient Sites of Bulla Regia and Chemtou (NW Tunisia): Seismotectonic and Structural Implications
Authors: Abdelkader Soumaya, Noureddine Ben Ayed, Ali Kadri, Said Maouche, Hayet Khayati Ammar, Ahmed Braham
Abstract:
The historic sites of Bulla Regia and Chemtou are among the most important archaeological monuments in northwestern Tunisia, which flourished as large, wealthy settlements during the Roman and Byzantine periods (2nd to 7th centuries AD). An archaeoseismological study provides the first indications of the role a possible ancient strong earthquake played in the destruction of these cities. Based on previous archaeological excavation results, including numismatic evidence, pottery, economic meltdown and urban transformation, the abrupt ruin and destruction of the cities of Bulla Regia and Chemtou can be bracketed between 613 and 647 AD. In this study, we made the first attempt to analyse the earthquake archaeological effects (EAEs) observed during our field investigations in these two historic cities. The damage includes different types of EAEs: folds on regular pavements, displaced and deformed vaults, folded walls, tilted walls, collapsed keystones in arches, dipping broken corners, displaced and fallen columns, block extrusions in walls, penetrative fractures in brick-made walls, and open fractures on regular pavements. These deformations are spread over 10 different sectors or buildings and comprise 56 measured EAEs. The structural analysis of the identified EAEs points to an ancient destructive earthquake that probably destroyed the Bulla Regia and Chemtou archaeological sites. We then analysed these measurements using structural geological analysis to obtain the maximum horizontal strain of the ground (Sₕₘₐₓ) for each oriented building damage. After collecting and analysing these strain datasets, we plotted the orientation of the Sₕₘₐₓ trajectories on the map of the archaeological site (Bulla Regia).
We concluded that the obtained Sₕₘₐₓ trajectories within this site can be related to the mean direction of ground motion (oscillatory movement of the ground) triggered by a seismic event, as documented for some historical earthquakes across the world. These Sₕₘₐₓ orientations closely match the current active stress field, as highlighted by some instrumental events in northern Tunisia. In terms of the seismic source, we strongly suggest that the reactivation of a neotectonic strike-slip fault trending N50E is responsible for this probable historic earthquake and for the recent instrumental seismicity in this area. This fault segment, affecting the folded Quaternary deposits south of Jebel Rebia, passes through the monument of Bulla Regia. Stress inversion of the observed and measured data along this fault shows an N150-160 trend of Sₕₘₐₓ under a transpressional tectonic regime, which is quite consistent with the GPS data and the state of the current stress field in this region.
Keywords: NW Tunisia, archaeoseismology, earthquake archaeological effect, Bulla Regia - Chemtou, seismotectonic, neotectonic fault
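Because Sₕₘₐₓ azimuths are axial data (defined modulo 180°), a mean trajectory orientation is usually obtained by doubling the angles before vector averaging and halving the result. A minimal sketch, using hypothetical building-by-building azimuths (not the measured dataset):

```python
import numpy as np

def mean_shmax_azimuth(azimuths_deg):
    # Axial mean: double the angles (so 10° and 190° coincide), average the
    # unit vectors, take the resultant angle, then halve it back to 0-180°.
    a = np.radians(2.0 * np.asarray(azimuths_deg, dtype=float))
    mean_angle = np.arctan2(np.sin(a).mean(), np.cos(a).mean())
    return (np.degrees(mean_angle) / 2.0) % 180.0

# hypothetical per-building Shmax azimuths (degrees east of north)
obs = [150, 155, 160, 148, 158]
mean_az = mean_shmax_azimuth(obs)
```

For tightly clustered azimuths like these the axial mean sits near the arithmetic mean (around N154), consistent with the N150-160 trend quoted above.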
Procedia PDF Downloads 49
232 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy
Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş
Abstract:
Table olive is a valuable product, especially in Mediterranean countries. It is usually consumed after some fermentation process. Defects that occur naturally, or as a result of impacts while olives are still fresh, may become more distinct after the processing period. Defective olives are undesirable in both the table olive and olive oil industries, as they affect final product quality and reduce market prices considerably. It is therefore critical to sort table olives, before or even after processing, according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness and inconsistency. Quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes and rotting. In this study, it was aimed to classify fresh table olives using different classifiers and NIR spectroscopy readings, and also to compare the classifiers. For this purpose, green (Ayvalik variety) olives were classified based on their surface feature properties, namely defect-free, with bruise defect and with fly defect, using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (TE-InGaAs internal for reflectance and RT-InGaAs external for transmittance) and a 20-watt high-intensity tungsten–halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780–2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. Resolution was 8 cm⁻¹ for both spectral measurement modes.
Instrument control was done using OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification applications were performed using three classifiers: backpropagation neural networks, and the ident and cluster classification algorithms. For these classification applications, the Neural Network toolbox in Matlab and the ident and cluster modules in OPUS software were used. Classifications were performed considering different scenarios: two quality conditions at once (good vs bruised, good vs fly defect) and three quality conditions at once (good, bruised and fly defect). Two spectrometer readings were used in the classification applications: reflectance and transmittance. Classification results obtained using the artificial neural networks algorithm in discriminating good olives from bruised olives, from olives with fly defect, and from the olive group including both bruised and fly-defected olives had success rates ranging between 97 and 99%, 61 and 94%, and 58.67 and 92%, respectively. On the other hand, classification results obtained for discriminating good olives from bruised ones and for discriminating good olives from fly-defected olives using the ident method ranged between 75-97.5% and 32.5-57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5-97.5% and 22.5-57.5%.
Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance
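As a loose analogue of ident-style library matching, a nearest-centroid spectral classifier can be sketched as below. The spectra are synthetic Gaussian absorption bands, not the FT-NIR measurements, and the rule is a simplified stand-in for the proprietary OPUS algorithms.

```python
import numpy as np

rng = np.random.default_rng(7)
wl = np.linspace(800, 1725, 50)   # toy transmittance wavelength grid (nm)

def make_class(center, n):
    # Synthetic spectra: a Gaussian band at `center` plus measurement noise.
    band = np.exp(-((wl - center) / 80.0) ** 2)
    return band + 0.05 * rng.standard_normal((n, wl.size))

# hypothetical training spectra for two quality classes
train = {"good": make_class(1200, 20), "bruised": make_class(1400, 20)}
centroids = {c: s.mean(axis=0) for c, s in train.items()}

def classify(spectrum):
    # Nearest-centroid rule: assign the class whose mean spectrum is
    # closest in Euclidean distance.
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))

test_good = make_class(1200, 10)
acc = np.mean([classify(s) == "good" for s in test_good])
```

When class bands are well separated relative to the noise, as in this toy setup, the nearest-centroid rule classifies essentially all held-out spectra correctly; the real difficulty reported above comes from overlapping spectral signatures of defect types.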
Procedia PDF Downloads 246
231 Assessment of Environmental Impact for Rice Mills in Burdwan District: Special Emphasis on Groundwater, Surface Water, Soil, Vegetation and Human Health
Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay
Abstract:
Rice milling is an important activity in the agricultural economy of India, particularly the Burdwan district. However, the environmental impact of rice mills is frequently underestimated. The environmental impact of rice mills in the Burdwan district is a major source of concern, given the importance of rice milling in the local economy and food supply. In the Burdwan district, more than fifty rice mills are in operation. The goal of this study is to investigate the effects of rice mills on several environmental components, with a particular emphasis on groundwater, surface water, soil, and vegetation. The research comprises a thorough review of numerous rice mills located around the district, utilising both qualitative and quantitative approaches. Water samples taken from wells near rice mills will be tested for groundwater quality, with an emphasis on factors such as heavy metal pollution and pollutant concentrations. Monitoring rice mill discharge into neighbouring bodies of water and studying the potential impact on aquatic ecosystems will be part of the surface water evaluations. Furthermore, soil samples from the surrounding areas will be taken to examine changes in soil characteristics, nutrient content, and potential contamination from milling waste disposal. Vegetation studies will be conducted to investigate the effects of emissions and effluents on plant health and biodiversity in the region. The findings will shed light on the extent of environmental degradation caused by rice mills in the Burdwan district, and provide valuable insight into the effects of such operations on water, soil, and vegetation. The findings will aid in the development of appropriate legislation and regulations to reduce negative environmental repercussions and promote sustainable practices in the rice milling business. In some cases, heavy metals have been related to health problems.
Heavy metals (As, Cd, Cu, Pb, Cr, Hg) are linked to skin, lung, brain, kidney, liver, metabolic, spleen, cardiovascular, haematological, immunological, gastrointestinal, testicular, pancreatic, and bone problems. As a result, this study contributes to a better knowledge of industrial environmental impacts and establishes the framework for future studies aimed at developing a more ecologically balanced and resilient Burdwan district. The following recommendations are offered for reducing the environmental impact of rice mills. Adequate waste management systems must be established to keep untreated effluents out of bodies of water. Environmentally friendly rice milling processes should be used to reduce pollution. To avoid soil pollution, rice mill by-products should be used as fertiliser in a controlled and appropriate manner. Groundwater, surface water, soil, and vegetation should all be regularly monitored in order to study and adapt to environmental changes. By adhering to these principles, the rice milling industry of Burdwan district may achieve long-term growth while lowering its environmental effect and safeguarding the environment for future generations.
Keywords: groundwater, environmental analysis, biodiversity, rice mill, waste management, diseases, industrial impact
Procedia PDF Downloads 95
230 Students Awareness on Reproductive Health Education in Sri Lanka
Authors: Ayomi Indika Irugalbandara
Abstract:
Reproductive health (RH) education among Sri Lankan adolescents (who comprise one fifth of the population) remains unsatisfactory despite 91.8% of them completing primary education and 56.2% receiving post-secondary education. The main reason this large population does not receive satisfactory RH education is traditional values and longstanding taboos surrounding sexuality. The current study was undertaken with three objectives; achieving them is relevant to formulating RH educational policies and programs that address a sizable and sensitive segment of the population, thereby pursuing the goal of mental and social well-being and not merely the absence of reproductive disease or infirmity. This research was a descriptive study using a random sampling technique, the sample consisting of 160 adolescents in the age group of 16-19 studying in government schools in Sri Lanka. A questionnaire was the main instrument of data collection, and qualitative and quantitative techniques were used in data analysis. The data revealed that a majority had some idea about RH education. While this awareness had been provided by the school, the source of information had been Health and Physical Education. The entire sample mentioned that more RH information than was provided should be given, and everybody wanted further knowledge regarding sexuality; in-depth information on it was considered essential. About 96 adolescents were of the opinion that their behavior was respectful to elders, and 64 felt embarrassed while communicating with elders regarding RH issues. Regarding their preferred sources of information, both genders named health providers as their first choice, followed by family members and friends. The internet was cited by a few boys; less than 5 percent cited religious figures. More than 50% of respondents had no knowledge about abortion and were unaware of its dangers. No respondent reported the practice of abortion.
Although not every member of the sample possessed knowledge of the scientific process involved in abortion, all of them totally rejected the idea of destroying a foetus. Adolescence is a critical period in the life of girls and boys, and sexuality education empowers young people to protect their health and well-being. Schools have the proper staff and environment for learning, and the greater segment of individuals entering and going through adolescence are still in school. This is why the school must be geared to handle this critical stage of the students' lives. Adolescents, or those approaching adolescence, are best educated by their parents, but this being quite a sensitive issue in the socio-cultural context, it is doubtful whether all parents are prepared to handle it candidly, due either to lack of knowledge or absence of the appropriate state of mind. As such, it is best that seminars and workshops be conducted to enlighten parents on handling RH issues related to their adolescent children. Apart from the awareness of RH provided through the school curriculum, a greater impact can be brought about through street dramas, exhibitions, and similar activities specific to RH. Finally, the researcher suggests that Sunday schools be harnessed for the provision of RH education linked with cultural values, ethics, and social well-being.
Keywords: reproductive health, awareness, perception, school curriculum
Procedia PDF Downloads 545
229 Density Determination of Liquid Niobium by Means of Ohmic Pulse-Heating for Critical Point Estimation
Authors: Matthias Leitner, Gernot Pottlacher
Abstract:
Experimental determination of critical point data such as the critical temperature, critical pressure, critical volume and critical compressibility of high-melting metals such as niobium is very rare due to the outstanding experimental difficulties in reaching the necessary extreme temperature and pressure regimes. Experimental techniques to achieve such extreme conditions include diamond anvil devices, two-stage gas guns or metal samples hit by explosively accelerated flyers. Electrical pulse-heating under increased pressure is another choice. This technique heats thin wire samples of 0.5 mm diameter and 40 mm length from room temperature to melting and then further to the end of the stable phase, the spinodal line, within several microseconds. When crossing the spinodal line, the sample explodes and reaches the gaseous phase. In our laboratory, pulse-heating experiments can be performed under variation of the ambient pressure from 1 to 5000 bar and allow a direct determination of critical point data for low-melting, but not for high-melting, metals. However, the critical point can also be estimated by extrapolating the liquid phase density according to theoretical models. A reasonable prerequisite for the extrapolation is the existence of data that cover as much of the liquid phase as possible and at the same time exhibit small uncertainties. Ohmic pulse-heating was therefore applied to determine thermal volume expansion, and from that the density, of niobium over the entire liquid phase. As a first step, experiments under ambient pressure were performed; the second step will be experiments under high-pressure conditions. During the heating process, shadow images of the expanding sample wire were captured at a frame rate of 4 × 10⁵ fps to monitor the radial expansion as a function of time. Simultaneously, the sample radiance was measured with a pyrometer operating at a mean effective wavelength of 652 nm.
To increase the accuracy of temperature deduction, spectral emittance in the liquid phase is also taken into account. Due to the high heating rates of about 2 × 10⁸ K/s, longitudinal expansion of the wire is inhibited, which implies an increased radial expansion. As a consequence, measuring the temperature-dependent radial expansion is sufficient to deduce density as a function of temperature. This is accomplished by evaluating the full widths at half maximum of the cup-shaped intensity profiles that are calculated from each shadow image of the expanding wire. Relating these diameters to the diameter obtained before the start of pulse-heating, the temperature-dependent volume expansion is calculated. With the help of the known room-temperature density, volume expansion is then converted into density data. The liquid density behavior obtained in this way is compared to existing literature data and provides another independent source of experimental data. In this work, the newly determined off-critical liquid phase density was in a second step utilized as input data for the estimation of niobium’s critical point. The approach used heuristically takes into account the crossover from mean field to Ising behavior, as well as the non-linearity of the phase diagram’s diameter.
Keywords: critical point data, density, liquid metals, niobium, ohmic pulse-heating, volume expansion
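Since longitudinal expansion is inhibited, the volume ratio follows from the squared diameter ratio alone. A minimal sketch of that conversion, with a naive FWHM estimate of the wire diameter from a shadow-image intensity profile (the profile values, pixel scale, and the room-temperature density of niobium used below are illustrative, not data from the paper):

```python
def fwhm_pixels(profile):
    """Width (in pixels) of the cup-shaped dip in a shadow-image intensity
    profile, counted at half depth between baseline and minimum."""
    half = (max(profile) + min(profile)) / 2.0
    return sum(1 for v in profile if v < half)

def density_from_diameter(rho0, d0, d):
    """Liquid density from radial expansion only: with longitudinal
    expansion inhibited, V(T)/V0 = (d(T)/d0)**2, so rho = rho0*(d0/d)**2."""
    return rho0 * (d0 / d) ** 2

# Illustrative values: synthetic 1-D shadow profiles (the wire blocks the
# backlight) and niobium's room-temperature density (~8570 kg/m^3).
profile_cold = [10, 10, 10, 2, 2, 2, 2, 10, 10, 10]   # wire shadows 4 px
profile_hot  = [10, 10, 2, 2, 2, 2, 2, 2, 10, 10]     # wire shadows 6 px
pixel_size_mm = 0.125                                  # hypothetical scale

d0 = fwhm_pixels(profile_cold) * pixel_size_mm         # 0.5 mm before heating
d = fwhm_pixels(profile_hot) * pixel_size_mm           # expanded diameter
rho = density_from_diameter(8570.0, d0, d)             # density drops as the wire expands
```

The squared ratio reflects the purely radial expansion; for an isotropically expanding sample the exponent would be 3 instead of 2.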
Procedia PDF Downloads 219
228 An Interdisciplinary Approach to Investigating Style: A Case Study of a Chinese Translation of Gilbert’s (2006) Eat Pray Love
Authors: Elaine Y. L. Ng
Abstract:
Elizabeth Gilbert’s (2006) memoir Eat, Pray, Love describes her travels to Italy, India, and Indonesia after a painful divorce. The author’s experiences with love, loss, and the search for happiness and meaning have resonated with a huge readership. Eat, Pray, Love was first translated into Chinese by the Taiwanese translator He Pei-Hua and published in Taiwan in 2007 by Make Boluo Wenhua Chubanshe under the fairly catchy title “Enjoy! Traveling Alone.” The same translation was then taken to mainland China, republished in simplified Chinese characters by Shanxi Shifan Daxue Chubanshe in 2008 and retitled “To Be a Girl for the Whole Life.” Later, the same translation in simplified Chinese characters was reprinted by Hunan Wenyi Chubanshe in 2013. This study employs Munday’s (2002) systemic model for descriptive translation studies to investigate this Chinese translation. It takes an interdisciplinary approach, combining systemic functional linguistics and corpus stylistics with sociohistorical research within a descriptive framework to study the translator’s discursive presence in the text. The research consists of three phases. The first phase locates the target text within its socio-cultural context, exploring the para-texts, readers’ responses, and the publishers’ orientation. The second phase compares the source text and the target text to categorize translation shifts, using the methodological tools of systemic functional linguistics and corpus stylistics; the investigation concerns the rendering of mental clauses and speech and thought presentation. The final phase explains the causes of the translation shifts.
The linguistic findings are related to the extra-textual information collected in an effort to ascertain the motivations behind the translator’s choices. There exist sets of possible factors that may have contributed to shaping the textual features of the given translation within a specific socio-cultural context. The study finds that the translator generally reproduces the mental clauses and speech and thought presentation closely according to the original. Nevertheless, the language of the translation has been widely criticized as unidiomatic and stiff, losing the elegance of the original. In addition, the several Chinese translations of the given text produced by one Taiwanese and two Chinese publishers are basically the same; they are repackaged slightly differently, mainly through changes to the book cover and its captions in each version. By relating the textual findings to the extra-textual data of the study, it is argued that the popularity of the Chinese translation of Gilbert’s (2006) Eat, Pray, Love may not be attributable to the quality of the translation. Instead, it may have to do with the way the work is promoted strategically on social media by the four e-bookstores promoting and selling the book online in China.
Keywords: Chinese translation of Eat Pray Love, corpus stylistics, motivations for translation shifts, systemic approach to translation studies
Procedia PDF Downloads 175
227 Identification of ω-3 Fatty Acids Using GC-MS Analysis in Extruded Spelt Product
Authors: Jelena Filipovic, Marija Bodroza-Solarov, Milenko Kosutic, Nebojsa Novkovic, Vladimir Filipovic, Vesna Vucurovic
Abstract:
Spelt wheat is a suitable raw material for extruded products such as pasta, special types of bread and other products with altered nutritional characteristics compared to conventional wheat products. During extrusion, spelt is exposed to high temperature and high pressure, while the raw material is also mechanically treated by shear forces. Spelt wheat grows without the use of pesticides in harsh ecological conditions and in marginal areas of cultivation, so it can be used for organic and health-safe food. Pasta is a highly popular foodstuff whose consumption continues to rise. Pasta quality depends mainly on the properties of the flour raw materials, especially protein content and quality, while starch properties are of lesser importance. Pasta is characterized by significant amounts of complex carbohydrates, low sodium and total fat, and by fiber, minerals, and essential fatty acids, and its nutritional value can be improved with additional functional components. Over the past few decades, wheat pasta has been successfully formulated using different ingredients to cater to health-conscious consumers who prefer a product rich in protein, healthy lipids and other health benefits. Flaxseed flour is used in the production of bakery and pasta products that have the properties of functional foods. However, food products should retain their technological and sensory quality despite the added flaxseed. Flaxseed contains important substances such as vitamins and mineral elements, and it is also an excellent source of fiber and one of the best sources of ω-3 fatty acids and lignans. In this paper, the quality of a spelt extruded product with the addition of flaxseed, which contributes positively to the nutritional and technological properties of the product, is investigated, together with the identification of its ω-3 fatty acids.
ω-3 fatty acids are polyunsaturated essential fatty acids and must be taken with food to satisfy the recommended daily intake. Flaxseed flour was added in quantities of 10 g/100 g and 20 g/100 g of farina. It is shown that the presence of ω-3 fatty acids in pasta can be clearly distinguished from other fatty acids by gas chromatography with mass spectrometry. The addition of flaxseed flour influences the chemical content of the pasta. Adding flaxseed flour to spelt pasta at 20 g/100 g significantly increases the share of ω-3 fatty acids, which results in an improved ω-6/ω-3 ratio of 1:2.4 and completely satisfies the minimum daily needs of ω-3 essential fatty acids (3.8 g/100 g) recommended by the FDA. Flax flour influenced the pasta quality by increasing the hardness (2377.8 ± 13.3; 2874.5 ± 7.4; 3076.3 ± 5.9) and work of shear (102.6 ± 11.4; 150.8 ± 11.3; 165.0 ± 18.9) and decreasing the adhesiveness (11.8 ± 20.6; 9.98 ± 0.12; 7.1 ± 12.5) of the final product. The presented data are good indicators of the technological quality of spelt pasta with flaxseed, and GC-MS analysis can be used in quality control for flaxseed identification.
Acknowledgment: The research was financed by the Ministry of Education and Science of the Republic of Serbia (Project No. III 46005).
Keywords: GC-MS analysis, ω-3 fatty acids, flax seed, spelt wheat, daily needs
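The ω-6/ω-3 ratio reported above is a simple quotient of the two fatty-acid amounts quantified from the GC-MS peaks. A minimal sketch of that arithmetic (the gram values below are illustrative, chosen only to reproduce the reported 1:2.4 ratio; they are not measurements from the paper):

```python
def omega_ratio(omega6_g, omega3_g):
    """ω-6:ω-3 ratio expressed as 1:x (x > 1 means more ω-3 than ω-6)."""
    return omega3_g / omega6_g

# Illustrative amounts per 100 g of pasta, not measured values.
ratio = omega_ratio(omega6_g=1.8, omega3_g=4.32)   # 1:2.4
meets_fda_minimum = 4.32 >= 3.8                    # minimum daily need cited above
```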
Procedia PDF Downloads 162
226 Fly-Ash/Borosilicate Glass Based Geopolymers: A Mechanical and Microstructural Investigation
Authors: Gianmarco Taveri, Ivo Dlouhy
Abstract:
Geopolymers are well-suited materials to abate the CO₂ emissions coming from Portland cement production and could replace Portland cement, in the near future, in building and other applications. The cost of production of geopolymers may be seen as their only weakness, but the use of wastes as raw materials could provide a valid solution to this problem, as demonstrated by the successful incorporation of fly-ash, a by-product of thermal power plants, and waste glasses. Recycled glass in waste-derived geopolymers was lately employed as a further silica source. In this work we present, for the first time, the introduction of recycled borosilicate glass (BSG). BSG is effectively a waste glass, since it derives from dismantled pharmaceutical vials and cannot be reused in the manufacturing of the original articles. Owing to its specific chemical composition (BSG is an ‘alumino-boro-silicate’), it was conceived to provide the key components of zeolitic networks, such as amorphous silica and alumina, as well as boria (B₂O₃), which may replace Al₂O₃ and contribute to the polycondensation process. Solid-state MAS NMR spectroscopy was used to assess the extent of boron oxide incorporation in the structure of the geopolymers and to define the degree of networking. FTIR spectroscopy was utilized to define the degree of polymerization and to detect boron bond vibrations in the structure. Mechanical performance was tested by means of three-point bending (flexural strength), the chevron notch test (fracture toughness), compression testing (compressive strength), and micro-indentation (Vickers hardness). Microscopy (SEM and confocal microscopy) was performed on the specimens loaded to failure. FTIR showed a characteristic absorption band attributed to the stretching modes of tetrahedral boron ions, whose tetrahedral configuration is compatible with a reaction product of geopolymerization. ²⁷Al NMR and ²⁹Si NMR spectra were instrumental in understanding the extent of the reaction.
¹¹B NMR spectroscopy evidenced a change of the trigonal boron (BO₃) inside the BSG in favor of a quasi-total tetrahedral boron configuration (BO₄). From these results it was inferred that boron is part of the geopolymeric structure, replacing Si in the network similarly to aluminum, and therefore improving the quality of the microstructure in favor of a more cross-linked network. As expected, the material gained as much as 25% in compressive strength (45 MPa) compared to the literature, whereas no improvements were detected in flexural strength (~5 MPa) or surface hardness (~78 HV). The material also exhibited a low fracture toughness (0.35 MPa·m^(1/2)), with tangible brittleness. SEM micrographs corroborated this behavior, showing a ragged surface along with several cracks, due to the high presence of porosity and impurities acting as preferential points for crack initiation. The 3D pattern of the fracture surface, obtained by confocal microscopy, evidenced an irregular crack propagation that tended mainly, but not always, to follow the porosity. Hence, crack initiation and propagation are largely unpredictable.
Keywords: borosilicate glass, characterization, fly-ash, geopolymerization
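The Vickers micro-indentation hardness quoted above (~78 HV) follows from the standard relation HV = 1.8544·F/d², with load F in kgf and the mean indent diagonal d in mm. A minimal sketch (the load and diagonal below are hypothetical values chosen to land near 78 HV, not measurements from the paper):

```python
def vickers_hardness(load_kgf, diagonal_mm):
    """Vickers hardness number from indentation load (kgf) and the mean
    length of the indent diagonals (mm): HV = 1.8544 * F / d**2."""
    return 1.8544 * load_kgf / diagonal_mm ** 2

# Hypothetical indentation: 0.5 kgf load, 0.109 mm mean diagonal -> ~78 HV.
hv = vickers_hardness(load_kgf=0.5, diagonal_mm=0.109)
```

A harder material leaves a smaller indent at the same load, so HV rises as the measured diagonal shrinks.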
Procedia PDF Downloads 208
225 Ethical Decision-Making by Healthcare Professionals during Disasters: Izmir Province Case
Authors: Gulhan Sen
Abstract:
Disasters can result in many deaths and injuries. In such difficult times, accessible resources are limited, the demand-supply balance is distorted, and urgent interventions are needed. The disproportion between accessible resources and intervention capacity makes triage a necessity in every stage of disaster response. Healthcare professionals in charge of triage have to evaluate swiftly and make ethical decisions about which patients need priority and urgent intervention given the limited available resources. For such critical times in disaster triage, 'doing the greatest good for the greatest number of casualties' is adopted as a code of practice. However, there is no guide for healthcare professionals on ethical decision-making during disasters, and this study is expected to serve as a source in the preparation of such a guide. This study aimed to examine whether the disaster-triage-related qualities of healthcare professionals in Izmir were adequate and whether these qualities influence their capacity to make ethical decisions. The researcher used a survey developed for data collection. The survey included two parts. In part one, 14 questions solicited information about socio-demographic characteristics and the respondents' knowledge levels of the ethical principles of disaster triage and allocation of scarce resources. Part two included four disaster scenarios adopted from the existing literature, and respondents were asked to make ethical triage decisions based on the provided scenarios. The survey was completed by 215 healthcare professionals working in Emergency-Medical Stations, National Medical Rescue Teams and Search-Rescue-Health Teams in Izmir. The data were analyzed with SPSS software. The Chi-Square Test, Mann-Whitney U Test, Kruskal-Wallis Test and Linear Regression Analysis were utilized.
According to the results, 51.2% of the participants had an inadequate knowledge level of the ethical principles of disaster triage and allocation of scarce resources. It was also found that participants did not tend to make ethical decisions on the four disaster scenarios, which included ethical dilemmas; they remained caught in dilemmas about performing cardio-pulmonary resuscitation, managing limited resources, and making end-of-life decisions. Results also showed that participants with more experience in disaster triage teams were more likely to make ethical decisions on disaster triage than those with little or no such experience (p < 0.01). Moreover, as their knowledge level of the ethical principles of disaster triage and allocation of scarce resources increased, their tendency to make ethical decisions also increased (p < 0.001). In conclusion, an inadequate knowledge level of ethical principles and inexperience affect ethical decision-making during disasters. The results of this study therefore suggest that more training on disaster triage should be provided in the pre-impact phase of disasters. In addition, the ethical dimension of disaster triage should be included in the syllabi of ethics classes in vocational training for healthcare professionals. Drills, simulations, and tabletop exercises can be used to improve the ethical decision-making abilities of healthcare professionals. Disaster scenarios in which ethical dilemmas are faced should be prepared for such applied training programs.
Keywords: disaster triage, medical ethics, ethical principles of disaster triage, ethical decision-making
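The chi-square test used above compares observed category counts against the counts expected under independence. A minimal stdlib sketch of the statistic, applied to a hypothetical 2×2 table (experience vs. ethical decision; the counts are invented for illustration and are not the study's data):

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table given as a
    list of rows of observed counts: sum of (O - E)**2 / E over cells."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = experienced / inexperienced responders,
# columns = made an ethical decision / did not.
stat = chi_square_statistic([[40, 20], [25, 45]])
```

The statistic would then be compared against a chi-square distribution with (rows-1)×(cols-1) degrees of freedom to obtain the p-value the abstract reports.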
Procedia PDF Downloads 245
224 Optimization of Biogas Production Using Co-Digestion Feedstocks via Anaerobic Technology
Authors: E Tolufase
Abstract:
The demand for, high costs of, and health implications of using energy derived from hydrocarbon compounds have necessitated the continuous search for alternative sources of energy. The world energy market faces several challenges: depletion of fossil fuel reserves, population explosion, lack of energy security, and economic and urbanization growth; in Nigeria, some rural areas still depend largely on wood, charcoal, kerosene, and petrol, among others, as their energy sources. Overcoming these shortfalls in energy supply and demand, as well as the risks from global climate change due to greenhouse gas emissions and other pollutants from fossil fuel combustion, has brought much attention to efficiently harnessing renewable energy sources. Among the renewable resources, biogas is very promising as a clean energy technology for power production and for vehicular and domestic use. Therefore, optimization of biogas yield and quality is imperative. Hence, this study investigated the yield and quality of biogas using low-cost bio-digesters and combinations of various feedstocks, referred to as co-digestion. A batch (discontinuous) bio-digester type was used because it was cheap, easy, and appropriate for the different substrates used to obtain the desired results. Three substrates were used, cow dung, chicken droppings and lemon grass, digested in five separate 21-litre digesters, A, B, C, D, and E, and the gas collection system was designed using locally available materials. For single digestion, cow dung, chicken droppings, and lemon grass were placed in bio-digesters A, B, and C respectively; the three substrates were co-digested in a mixed ratio of 7:1:2 in digester D and 5:3:2 in digester E. The respective feedstock materials were collected locally, digested and analyzed in accordance with standard procedures. They were pre-fermented for a period of 10 days before being introduced into the digesters.
They were digested for a retention period of 28 days, and the physiochemical parameters, namely pressure, temperature, pH, volume of the gas collector system and volume of biogas produced, were all closely monitored and recorded daily. The values of pH and temperature ranged from 6.0 to 8.0 and from 22°C to 35°C, respectively. For the single substrates, bio-digester A (cow dung only) produced biogas of total volume 0.1607 m³ (average of 0.0054 m³ daily), while B (chicken droppings) produced 0.1722 m³ (average of 0.0057 m³ daily) and C (lemon grass) produced 0.1035 m³ (average of 0.0035 m³ daily). For the co-digested substrates, bio-digester D produced a total of 0.2007 m³ (average of 0.0067 m³ daily) and bio-digester E produced 0.1991 m³ (average of 0.0066 m³ daily). It is evident from the results that combining different substrates gave higher yields than any single feedstock, and that the mixing ratio played a role in the yield improvement: bio-digesters D and E contained the same substrates, but a higher yield was observed in D with a mixing ratio of 7:1:2 than in E with a ratio of 5:3:2. Therefore, co-digestion of substrates and mixing proportions are important factors for biogas production optimization.
Keywords: anaerobic, batch, biogas, biodigester, digestion, fermentation, optimization
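The relative gain from co-digestion can be read directly off the totals above: the best single substrate (bio-digester B) yielded 0.1722 m³, so digester D's 0.2007 m³ is roughly a 16.6% improvement. A minimal sketch of that comparison using the reported totals:

```python
# Total biogas volumes (m^3) reported for each bio-digester.
totals = {"A": 0.1607, "B": 0.1722, "C": 0.1035, "D": 0.2007, "E": 0.1991}

best_single = max(totals[k] for k in ("A", "B", "C"))   # 0.1722 (digester B)

def improvement_pct(co_digested, baseline):
    """Percentage yield gain of a co-digested mix over a baseline yield."""
    return (co_digested - baseline) / baseline * 100.0

gain_d = improvement_pct(totals["D"], best_single)  # ~16.6% for the 7:1:2 mix
gain_e = improvement_pct(totals["E"], best_single)  # ~15.6% for the 5:3:2 mix
```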
Procedia PDF Downloads 27
223 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method
Authors: Jiahui You, Kyung Jae Lee
Abstract:
Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed based on OpenFOAM, an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity while accounting for fluid-solid mass transfer, and an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid-liquid interface is updated by solving the level set distance function and re-initializing it to a signed distance function. A smoothed Heaviside function then gives a smoothed solid-liquid interface over a narrow band with a fixed thickness. The stated equations are discretized by the finite volume method, while the re-initialization equation is discretized by the central difference method. A Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with a 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. Sensitivity analysis is conducted with various Damkohler numbers (DaII) and Peclet numbers (Pe). We categorize the Fe(III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth.
Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII < 1, precipitation occurs uniformly on the solid surface in both the upstream and downstream directions. When DaII > 1, precipitation occurs mainly on the solid surface in the upstream direction. When Pe > 1, Fe(II) is transported deep into the pores and precipitation occurs inside them. When Pe < 1, the precipitation of Fe(III) occurs mainly on the solid surface in the upstream direction, and it easily precipitates inside the small pore structures. The porosity-permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe(II) dissolution, transport, and Fe(III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs so as to avoid clogging and wellbore pollution. Understanding Fe(III) precipitation and Fe(II) release and transport behaviors makes for a highly efficient hydraulic fracturing project.
Keywords: reactive transport, shale, kerogen, precipitation
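The smoothed Heaviside used to blend the solid-liquid interface over a narrow band is commonly written, for a signed-distance value φ and half-band width ε, as H(φ) = 0 for φ < -ε, 1 for φ > ε, and ½(1 + φ/ε + sin(πφ/ε)/π) in between. A minimal sketch of that standard level-set form (the abstract does not spell out which variant the authors use, so this is the usual textbook choice, not necessarily theirs):

```python
import math

def smoothed_heaviside(phi, eps):
    """Smoothed Heaviside over a band of half-width eps around the
    interface phi = 0 (phi is a signed distance: < 0 solid, > 0 fluid)."""
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)

def signed_distance_circle(x, y, cx, cy, r):
    """Signed distance to a circular grain: negative inside the solid."""
    return math.hypot(x - cx, y - cy) - r

# Fluid fraction at a point 0.05 units outside a grain of radius 0.2,
# smoothed over a band of half-width 0.1 (all lengths are dimensionless).
frac = smoothed_heaviside(signed_distance_circle(0.25, 0.0, 0.0, 0.0, 0.2), 0.1)
```

The fluid fraction varies smoothly from 0 to 1 across the band, which is what lets a fixed-grid solver treat the moving fluid-solid boundary without remeshing.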
Procedia PDF Downloads 163
222 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated with nucleos(t)ide analogs (NA), for example, which inhibit HBV replication. However, they have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NA therapy needs to be taken life-long and is not available to all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data, including viral markers such as hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA-typing) rather than only one. However, the values of these variables are collected independently and are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently across these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling human immune systems.
The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA-typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets in the defined modeling of the data integration system, which will be evaluated in a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, the analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate the factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which comprises a multidisciplinary team of computer scientists, infection biologists, and immunologists.
Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
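A knowledge graph of the kind described stores each fact as a subject-predicate-object triple. A minimal, library-free sketch of that data model with a pattern-matching query (all identifiers, predicates, and values below are hypothetical, not drawn from the 87-patient cohort):

```python
# Hypothetical facts about one patient, as subject-predicate-object triples.
triples = {
    ("patient:001", "receivedTreatment", "nucleos(t)ide analog"),
    ("patient:001", "hasViralMarker", "HBsAg loss"),
    ("patient:001", "hasHLAType", "HLA-A*02:01"),
}

def match(triples, s=None, p=None, o=None):
    """Return all triples matching the given pattern; None is a wildcard."""
    return {
        (ts, tp, to) for (ts, tp, to) in triples
        if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o)
    }

markers = match(triples, p="hasViralMarker")  # all viral-marker facts
```

A production KG would use an RDF store and ontology-defined predicates instead of ad hoc strings, but the query-by-pattern idea is the same.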
Procedia PDF Downloads 108
221 Ascribing Identities and Othering: A Multimodal Discourse Analysis of a BBC Documentary on YouTube
Authors: Shomaila Sadaf, Margarethe Olbertz-Siitonen
Abstract:
This study looks at identity and othering in discourses around sensitive issues in social media. More specifically, the study explores the multimodal resources and narratives through which the ‘other’ is formed and identities are ascribed in online spaces. As an integral part of social life, media spaces have become an important site for negotiating and ascribing identities. In line with recent research, identity is seen here as a construction of belonging that goes hand in hand with processes of in- and out-group formation, which in some cases may lead to othering. Previous findings underline that identities are neither fixed nor limited but rather contextual, intersectional, and interactively achieved. The goal of this study is to explore and develop an understanding of how people co-construct the ‘other’ and ascribe certain identities in social media using multiple modes. At the beginning of 2018, the British government decided to include relationships, sexual orientation, and sex education in the curriculum of state-funded primary schools. However, the addition of information related to LGBTQ+ to the curriculum has been met with resistance, particularly from religious parents. For example, the British Muslim community has voiced its concerns and protested against the actions taken by the British government. YouTube has been used by news companies to air video stories covering the protests and the narratives of the protestors, along with the position of school officials. The analysis centers on a YouTube video dealing with the protest of a local group of parents against the addition of information about LGBTQ+ to the curriculum in the UK. The video was posted in 2019. By the time of this study, the video had approximately 169,000 views and around 6,000 comments. Given the multimodal nature of YouTube videos, this study uses multimodal discourse analysis as its method of choice. The study is still ongoing and therefore has not yet yielded any final results.
However, the initial analysis indicates a hierarchy of ascribed identities in the data. Drawing on multimodal resources, the media works with social categorizations throughout the documentary, presenting and classifying the conflicting parties involved in the light of their own visible and audible identifications. The protesters can be seen to construct a strong group identity as Muslim parents (e.g., through clothing and references to shared values). While the video appears to be designed as a documentary that puts forward facts, the media does not seem to succeed in taking a neutral position consistently throughout the video. At times, the use of images, sounds, and language contributes to the formation of “us” vs. “them”, where the audience is implicitly encouraged to pick a side. Only towards the end of the documentary is this problematic opposition addressed and critically reflected upon, through an expert interview that is – interestingly – visually located outside the previously presented ‘battlefield’. This study contributes to the growing understanding of the discursive construction of the ‘other’ in social media. Videos available online are a rich source for examining how different social actors ascribe multiple identities and form the other.
Keywords: identity, multimodal discourse analysis, othering, youtube
Procedia PDF Downloads 113