Search results for: computer processing of large databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12734

1004 Disentangling the Sources and Context of Daily Work Stress: Study Protocol of a Comprehensive Real-Time Modelling Study Using Portable Devices

Authors: Larissa Bolliger, Junoš Lukan, Mitja Lustrek, Dirk De Bacquer, Els Clays

Abstract:

Introduction and Aim: Chronic workplace stress and its health-related consequences, such as mental and cardiovascular diseases, have been widely investigated. This project focuses on the sources and context of psychosocial daily workplace stress in a real-world setting. The main objective is to analyze and model real-time relationships between (1) psychosocial stress experiences within the natural work environment, (2) micro-level work activities and events, and (3) physiological signals and behaviors in office workers. Methods: An Ecological Momentary Assessment (EMA) protocol has been developed, partly building on machine learning techniques. Empatica® wristbands will be used for real-life detection of stress from physiological signals; micro-level activities and events at work will be captured via smartphone registrations and further processed by an automated computer algorithm. A field study including 100 office-based workers with high-level problem-solving tasks, such as managers and researchers, will be implemented in Slovenia and Belgium (50 in each country). Data mining and state-of-the-art statistical methods – mainly multilevel statistical modelling for repeated data – will be used. Expected Results and Impact: The project findings will provide novel contributions to the field of occupational health research. While traditional assessments provide information about the globally perceived state of chronic stress exposure, the EMA approach is expected to bring new insights into daily fluctuating work stress experiences, especially the micro-level events and activities at work that induce acute physiological stress responses. The project is therefore likely to generate further evidence on relevant stressors in a real-time working environment and hence make it possible to advise on workplace procedures and policies for reducing stress.
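The multilevel flavour of the planned analysis can be illustrated with a small simulation (this is not the authors' pipeline; all numbers are invented): repeated EMA stress ratings are nested within workers, and the intraclass correlation (ICC) quantifies how much variance is due to stable between-person differences, which is the quantity that motivates multilevel modelling of repeated data.

```python
import random

# Illustrative sketch only: simulate repeated EMA stress ratings nested
# within workers (invented parameters, not study data).
random.seed(42)
n_workers, n_obs = 50, 20
data = []
for w in range(n_workers):
    person_mean = random.gauss(5.0, 1.0)        # stable between-person level
    for _ in range(n_obs):
        # daily within-person fluctuation around that level
        data.append((w, person_mean + random.gauss(0.0, 1.5)))

def icc_oneway(data, n_obs):
    """One-way ANOVA estimator of the intraclass correlation."""
    groups = {}
    for w, y in data:
        groups.setdefault(w, []).append(y)
    grand = sum(y for _, y in data) / len(data)
    k = len(groups)
    msb = sum(len(g) * (sum(g) / len(g) - grand) ** 2
              for g in groups.values()) / (k - 1)
    msw = sum(sum((y - sum(g) / len(g)) ** 2 for y in g)
              for g in groups.values()) / (len(data) - k)
    return (msb - msw) / (msb + (n_obs - 1) * msw)

# Share of variance attributable to stable person differences.
print(round(icc_oneway(data, n_obs), 2))
```

In a full analysis, a mixed-effects model would additionally regress the ratings on the micro-level activity and event predictors; the ICC above is only the simplest multilevel quantity.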

Keywords: ecological momentary assessment, real-time, stress, work

Procedia PDF Downloads 161
1003 Micro-Analytical Data of Au Mineralization at Atud Gold Deposit, Eastern Desert, Egypt

Authors: A. Abdelnasser, M. Kumral, B. Zoheir, P. Weihed, M. Budakoglu, L. Gumus

Abstract:

The Atud gold deposit is located in the central part of the Eastern Desert of Egypt. It represents vein-type gold mineralization in the Arabian-Nubian Shield of North Africa. This Au mineralization is closely associated with intense hydrothermal alteration haloes along the NW-SE brittle-ductile shear zone in the mined area. This study reports new data on the mineral chemistry of the hydrothermal and metamorphic minerals as well as the geothermobarometry of the metamorphism, and determines the paragenetic interrelationship between Au-bearing sulfides and gangue minerals in the Atud gold mine using electron microprobe analysis (EMPA). These analyses revealed that the ore minerals associated with gold mineralization are arsenopyrite, pyrite, chalcopyrite, sphalerite, pyrrhotite, tetrahedrite, and gersdorffite-cobaltite. Gold is closely associated with arsenopyrite and As-bearing pyrite as well as sphalerite, with an average of ~70 wt.% Au (+26 wt.% Ag), and occurs either as disseminated grains or along microfractures of arsenopyrite and pyrite in altered wallrocks and mineralized quartz veins. Arsenopyrite occurs as individual rhombic or prismatic zoned grains disseminated in the quartz veins and wallrock and is intergrown with euhedral arsenian pyrite (with ~2 atom % As). Pyrite is As-bearing and occurs as disseminated subhedral or anhedral zoned grains replaced by chalcopyrite in some samples. Inclusions of sphalerite and pyrrhotite are common in the large pyrite grains. Secondary minerals such as sericite, calcite, chlorite, and albite are disseminated either in altered wallrocks or in quartz veins. Sericite is the main secondary and alteration mineral associated with Au-bearing sulfides and calcite. Electron microprobe data show that the muscovite component of the sericite is high in all analyzed flakes (XMs = 0.89 on average), and the phengite content (Mg+Fe a.p.f.u.) varies from 0.10 to 0.55 in wallrocks and from 0.13 to 0.29 in mineralized veins. Carbonate occurs either as thin veinlets or as disseminated grains in the mineralized quartz veins and/or the wallrocks; it has a higher calcite (CaCO3) content and lower MgCO3 and FeCO3 contents in the wallrocks relative to the quartz veins. Chlorite flakes are associated with arsenopyrite; electron probe data revealed that they are generally Fe-rich clinochlore (FeOt = 20.10–20.64 wt.%), classified as pycnochlorite or ripidolite (Al(iv) = 2.30–2.36 a.p.f.u. and 2.41–2.51 a.p.f.u., respectively), with narrow ranges of estimated formation temperatures of 289–295°C for pycnochlorite and 301–312°C for ripidolite. Albite accompanies chlorite, with a high Ab content in all analyzed samples (Ab = 95.08–99.20).

Keywords: micro-analytical data, mineral chemistry, EMPA, Atud gold deposit, Egypt

Procedia PDF Downloads 326
1002 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world's data is generated by the financial industry, with global non-cash transactions recently estimated at 708.5 billion. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos, centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data, and businesses looking to consume data, are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. This research provides a solution framework for a secure, decentralised marketplace that allows (1) data providers to list their transactional data, (2) data consumers to find and access that data, and (3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the data product available to build a comprehensive view of a data subject's spending habits. A robust and sustainable data market can be developed by giving data producers a more accessible mechanism to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features that pertain to user interactions on the platform. One of the platform's key features is enabling individuals to participate in, and manage, the personal data being generated about them. A proof-of-concept was developed on the Ethereum blockchain in which an individual can securely manage access to their own personal data and to their identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
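The data-subject consent rule at the heart of the marketplace can be sketched in a few lines. This is a hypothetical, off-chain Python illustration of the access-control logic only; the platform itself implements it as an Ethereum smart contract, and all names below are invented.

```python
# Hypothetical sketch of the consent rule: a data subject grants or revokes
# a data consumer's access to their transaction data. On-chain this would
# live in a smart contract; plain Python is used here for illustration.
class ConsentRegistry:
    def __init__(self):
        self._grants = {}  # (subject, consumer) -> access granted?

    def grant(self, subject, consumer):
        self._grants[(subject, consumer)] = True

    def revoke(self, subject, consumer):
        self._grants[(subject, consumer)] = False

    def can_access(self, subject, consumer):
        # Access defaults to denied until the subject explicitly grants it.
        return self._grants.get((subject, consumer), False)

registry = ConsentRegistry()
registry.grant("alice", "fintech_a")
print(registry.can_access("alice", "fintech_a"))  # True
registry.revoke("alice", "fintech_a")
print(registry.can_access("alice", "fintech_a"))  # False
```

The default-deny lookup mirrors the consensual-sharing principle described above: no consumer sees a subject's data without an explicit grant.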

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 73
1001 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets

Authors: Debjit Ray

Abstract:

Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, defining the phylogenetic range of HGT, and finding potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete, and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or “long read” sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and annotate genome features that inform drug resistance and pathogenicity, we are analyzing the genome-assembly performance of data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week) and, compared to the Illumina MiSeq, shorter reads (150bp paired-end versus 300bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M). Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade can, in a few hours, convert mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds, generally due to repeats (e.g., rRNA genes), are addressed by our gap-closure software, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes. Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether a particular isolate is likely to have originated from the environment or evolved from previous isolates. This can be useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs, multidrug resistance evolution, and pathogen emergence.
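As a hedged illustration of how genome sequences can be fed to machine learning algorithms (the abstract does not specify its feature encoding), k-mer frequency vectors are one standard way to turn a variable-length sequence into a fixed-length numeric feature vector:

```python
from collections import Counter
from itertools import product

# Illustrative only: k-mer frequency encoding of a DNA sequence, one common
# input representation for genome classifiers. Not the authors' pipeline.
def kmer_vector(seq, k=2):
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]   # 4^k features
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts[m] for m in kmers), 1)              # guard empty input
    return [counts[m] / total for m in kmers]                  # normalized frequencies

vec = kmer_vector("ACGTACGTAA", k=2)
print(len(vec))  # 16 features for k=2
```

In practice larger k (and whole assembled genomes rather than toy strings) would be used, and the resulting vectors passed to any standard classifier alongside the recombination, HGT, and diagnostic features listed above.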

Keywords: genomics, pathogens, genome assembly, superbugs

Procedia PDF Downloads 197
1000 Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis

Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli

Abstract:

This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through Pulsed-Field Gel Electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in north-western Italy. A further goal of this work was to create a regional database to facilitate foodborne-outbreak investigation and to detect outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates were previously identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kauffmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE; analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using BioNumerics version 7.6 software with the Dice coefficient, with 2% band tolerance and 2% optimization. For each serotype, the PFGE data were compared according to geographical origin and year of isolation. Salmonella strains were identified as follows: S. Derby, n. 34; S. Infantis, n. 38; S. Napoli, n. 40. All isolates had appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged across the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among strains of S. Derby (n. 30; 88%), S. Infantis (n. 36; 95%), and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in north-western Italy and allowed the creation of a database to detect outbreaks at an early stage. The results confirm that PFGE is a powerful and discriminatory tool for investigating the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. Pulsed-field gel electrophoresis still represents one of the most suitable approaches to characterize strains, in particular for laboratories where NGS techniques are not available.
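The Dice comparison with band tolerance used in the gel analysis can be sketched as follows. This is a simplified illustration with invented fragment sizes, not the PulseNet/BioNumerics implementation (which also applies position optimization along the lane):

```python
# Simplified sketch of Dice band matching between two PFGE lanes.
# Bands match if their fragment sizes agree within a relative tolerance
# (2% here, mirroring the band tolerance quoted in the abstract).
def dice_similarity(bands_a, bands_b, tolerance=0.02):
    """Dice coefficient: 2*matches / (len(a) + len(b))."""
    unmatched_b = list(bands_b)
    matches = 0
    for a in bands_a:
        for b in unmatched_b:
            if abs(a - b) <= tolerance * max(a, b):
                matches += 1
                unmatched_b.remove(b)   # each band matches at most once
                break
    return 2 * matches / (len(bands_a) + len(bands_b))

lane1 = [48.5, 97.0, 145.5, 291.0, 582.0]   # fragment sizes in kb (invented)
lane2 = [48.5, 97.0, 146.0, 300.0, 582.0]
print(round(dice_similarity(lane1, lane2), 2))  # 0.8
```

With this metric, the ≥ 83% similarity threshold reported above corresponds to lanes sharing most of their bands within tolerance.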

Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE

Procedia PDF Downloads 105
999 Study of the Association between Salivary Microbiological Data, Oral Health Indicators, Behavioral Factors, and Social Determinants among Post-COVID Patients Aged 7 to 12 Years in Tbilisi City

Authors: Lia Mania, Ketevan Nanobashvili

Abstract:

Background: The coronavirus disease COVID-19 became the cause of a global health crisis during the recent pandemic. This study aims to fill the paucity of epidemiological studies on the impact of COVID-19 on the oral health of pediatric populations. Methods: An observational, cross-sectional study was conducted in Tbilisi (the capital of Georgia) among 7- to 12-year-old children with PCR- or rapid-test-confirmed COVID-19, covering all 10 districts of the city. 332 beneficiaries who had been infected with COVID-19 within the preceding year were included in the study. The population was selected in schools of Tbilisi according to the principle of cluster selection, with simple random selection within the selected clusters; according to this principle, an equal number of beneficiaries was selected in each district of Tbilisi. By July 1, 2022, according to National Center for Disease Control and Public Health data (NCDC.Ge), the number of test-confirmed cases in the population aged 0-18 in Tbilisi was 115,137 children (17.7% of all confirmed cases). The number of patients to be examined was determined by the sample size. Oral screening, microbiological examination of saliva, and administration of oral health questionnaires to guardians were performed. Statistical processing of data was done with SPSS-23. Risk factors were estimated by odds ratio and logistic regression with 95% confidence intervals. Results: Statistically significant differences between the means of oral health indicators in the asymptomatic and symptomatic COVID-infected groups were found: for caries intensity (DMF+def), t=4.468, p<0.001; for the modified gingival index (MGI), t=3.048, p=0.002; for the simplified oral hygiene index (S-OHI), t=4.853, p<0.001. Symptomatic COVID-19 infection has a reliable effect on the oral microbiome (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis) (77.3% vs 58.0%; OR=2.46, 95% CI: 1.318-4.617). According to the logistic regression, the severity of the COVID-19 infection has a significant effect on the frequency of pathogenic and conditionally pathogenic bacteria in the oral cavity (B=0.903, AOR=2.467, 95% CI: 1.318-4.617). Symptomatic COVID-19 infection affects oral health indicators regardless of the presence of other risk factors, such as parental employment status, tooth-brushing behaviors, carbohydrate intake, and fruit consumption (p<0.05). Conclusion: Risk factors (parental employment status, tooth-brushing behaviors, carbohydrate consumption) were associated with poorer oral health status in a post-COVID population of 7- to 12-year-old children. However, symptomatic COVID-19 infection affected the oral microbiome in terms of the abundant growth of pathogenic and conditionally pathogenic bacteria (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis) and further worsened oral health indicators. Thus, a close association was established between symptomatic COVID-19 infection and microbiome changes in the post-COVID period, and also between oral health indicators and the symptomatic course of infection.
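The reported odds ratio can be illustrated with the standard 2x2-table computation. The study's exact cell counts are not given in the abstract, so the counts below are invented for illustration only:

```python
import math

# Standard odds ratio and Wald 95% CI from a 2x2 table:
#   a = exposed with outcome,   b = exposed without outcome
#   c = unexposed with outcome, d = unexposed without outcome
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts, not the study's data.
or_, lo, hi = odds_ratio_ci(90, 30, 60, 50)
print(round(or_, 2))  # 2.5
```

The same formula, applied to the study's actual cell counts, yields the OR = 2.46 (95% CI 1.318-4.617) reported above.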

Keywords: oral microbiome, COVID-19, population based research, oral health indicators

Procedia PDF Downloads 69
998 The Epidemiology of Dengue in Taiwan during 2014-15: A Descriptive Analysis of the Severe Outbreaks of Central Surveillance System Data

Authors: Chu-Tzu Chen, Angela S. Huang, Yu-Min Chou, Chin-Hui Yang

Abstract:

Dengue is a major public health concern throughout tropical and subtropical regions. Taiwan is located in the Pacific Ocean and lies across the tropical and subtropical zones. The island remains humid throughout the year, receives abundant rainfall, and is very hot in summer in southern Taiwan: conditions that are ideal for the growth of dengue vectors and increase the risk of dengue outbreaks. During the first half of the 20th century, there were three island-wide dengue outbreaks (1915, 1931, and 1942). After almost forty years of dormancy, a DEN-2 outbreak occurred in Liuchiu Township, Pingtung County in 1981. Thereafter, further dengue outbreaks of varying scale occurred in southern Taiwan. In 2014 and 2015, however, there were more than ten thousand dengue cases per year, which not only affected human health but also caused widespread social disruption and economic losses. The study aims to describe the epidemiology of dengue in Taiwan, especially the severe outbreak in 2015, and to identify effective interventions for dengue control, including dengue vaccine development for the elderly. Methods: The study used the Notifiable Diseases Surveillance System database of the Taiwan Centers for Disease Control as its data source. All cases were reported with the uniform case definition and confirmed by NS1 rapid diagnosis or laboratory diagnosis. Results: In 2014, Taiwan experienced a serious DEN-1 outbreak with 15,492 locally acquired cases, including 136 cases of dengue hemorrhagic fever (DHF), which caused 21 deaths. A more serious DEN-2 outbreak followed, with 43,419 locally acquired cases in 2015. The epidemic occurred mainly in Tainan City (22,760 cases) and Kaohsiung City (19,723 cases) in southern Taiwan. Cases were mainly adults. There were 228 deaths due to dengue infection, a case fatality rate of 5.25‰. The average age of the deceased was 73.66 years (range 29-96), and 86.84% of them were older than 60 years. Most of them had comorbidities. Reviewing the clinical manifestations of the 228 death cases, 38.16% (N=87) were reported as dengue with warning signs, while 51.75% (N=118) were reported without warning signs. Among the 87 death cases reported as dengue with warning signs, 89.53% were diagnosed with severe dengue and 84% required intensive care. Conclusion: The year 2015 was characterized by large dengue outbreaks worldwide. The risk of serious dengue outbreaks may increase significantly in the future, and the elderly are the vulnerable group in Taiwan. A dengue vaccine was licensed at the end of 2015 for use in people 9-45 years of age living in endemic settings. In addition to carrying out research to find new interventions for dengue control, developing a dengue vaccine for the elderly is very important to prevent severe dengue and deaths.
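The reported case fatality rate follows directly from the case and death counts above and can be reproduced exactly:

```python
# Reproduce the 2015 case fatality rate from the abstract's own figures:
# 228 deaths among 43,419 locally acquired cases, expressed per mille.
def case_fatality_per_mille(deaths, cases):
    return deaths / cases * 1000

cfr = case_fatality_per_mille(228, 43419)
print(round(cfr, 2))  # 5.25, matching the reported 5.25 per mille
```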

Keywords: case fatality rate, dengue, dengue vaccine, the elderly

Procedia PDF Downloads 281
997 Intrinsic Contradictions in Entrepreneurship Development and Self-Development

Authors: Revaz Gvelesiani

Abstract:

The problem of compliance between state economic policy and the entrepreneurial policy of businesses is primarily manifested in contradictions related to the congruence between entrepreneurship development and self-development strategies. Among the various types (financial, monetary, social, etc.) of state economic policy aiming at the development of entrepreneurship, economic order policy is of special importance. Its goal is to set the framework for both public and private economic activities and to achieve coherence between the societal value system and the formation of the economic order framework. Economic order policy, in its turn, involves an intrinsic contradiction between the social and the competitive order. Competitive order is oriented on the principle of success, while social order is oriented on the criteria of need satisfaction, which contradicts, at least partly, the principles of success. Thus, within economic order policy, on the one hand, the state makes efforts to form the social order and expand its frontiers, while, on the other hand, the market is determined to establish a functioning competitive order and ensure its realization. Locating adequate spaces for, and setting a rational border between, state (social order) and private (competitive order) activities is of decisive importance from the standpoint of entrepreneurship development strategy. In countries where these spaces and borders are “set” correctly, entrepreneurship agents (small, medium-sized, and large businesses) achieve great success by seizing the respective segments and maintaining leading positions in internal, European, and world markets for a long time. As for the entrepreneurship self-development strategy, it involves, above all:
•market identification;
•interactions with consumers;
•continuous innovations;
•competition strategy;
•relationships with partners;
•new management philosophy, etc.
The analysis of compliance between entrepreneurship strategy and entrepreneurship culture should be the reference point for any kind of internationalization, in order to avoid shocks of a cultural nature and economic backwardness. Stabilization can be achieved only when employee actions reflect the existing culture and the new content of culture (targeted culture) has become part of the implicit consciousness of the personnel. Future leaders should learn how to manage different cultures. Entrepreneurship can be managed successfully if its strategy and culture are coherent. However, enterprises (organizations) not infrequently show various forms of violation of both personal and team actions. If personal and team non-observances come to shape the culture, this will lead to global destruction of the system and structure. This is the pathology of entrepreneurship culture, which makes it difficult to achieve compliance between entrepreneurship strategy and entrepreneurship culture. Thus, the intrinsic contradictions of entrepreneurship development and self-development strategies complicate the task of reaching compliance between state economic policy and company entrepreneurship policy: on the one hand, there is a contradiction between the social and the competitive order within economic order policy; on the other hand, a contradiction exists between entrepreneurship strategy and entrepreneurship culture within entrepreneurship policy.

Keywords: economic order policy, entrepreneurship, development contradictions, self-development contradictions

Procedia PDF Downloads 328
996 Biotechnological Methods for the Grouting of the Tunneling Space

Authors: V. Ivanov, J. Chu, V. Stabnikov

Abstract:

Different biotechnological methods for the production of construction materials and for the performance of construction processes in situ are being developed within the new scientific discipline of Construction Biotechnology. The aim of this research was to develop and test new biotechnologies and biotechnological grouts for minimizing the hydraulic conductivity of fractured rocks and porous soil. Solving this problem is essential for minimizing the flow of groundwater into construction sites, into the tunneling space before and after excavation, and inside levees, as well as for stopping water seepage from aquaculture ponds, agricultural channels, radioactive-waste or toxic-chemical storage sites, landfills, and polluted soils. Conventional fine or ultrafine cement grouts and chemical grouts have such restrictions as high cost, high viscosity, and sometimes toxicity, whereas biogrouts, which are based on microbial or enzymatic activities and inexpensive inorganic reagents, could be more suitable in many cases because of their lower cost and low or zero toxicity. Due to these advantages, the development of biotechnologies for biogrouting is growing rapidly. However, the most popular biogrout at present, which is based on the activity of urease-producing bacteria initiating crystallization of calcium carbonate from a calcium salt, has such disadvantages as the production of toxic ammonium/ammonia and the development of high pH. Therefore, the aim of our studies was the development and testing of new biogrouts that are environmentally friendly and low-cost enough for large-scale geotechnical, construction, and environmental applications. New microbial biotechnologies have been studied and tested in sand columns, fissured rock samples, a 1 m3 tank with sand, and a pack of stone sheets, which served as models of porous soil and fractured rocks. Several biotechnological methods showed positive results: 1) biogrouting using sequential desaturation of sand by injection of denitrifying bacteria and medium, followed by biocementation using urease-producing bacteria, urea, and a calcium salt, decreased the hydraulic conductivity of sand to 2×10-7 m/s after 17 days of treatment and consumed almost three times fewer reagents than conventional calcium- and urea-based biogrouting; 2) biogrouting using slime-producing bacteria decreased the hydraulic conductivity of sand to 1×10-6 m/s after 15 days of treatment; 3) biogrouting of rocks with a fissure width of 65×10-6 m using calcium bicarbonate solution, produced from CaCO3 and CO2 under 30 bar pressure, decreased the hydraulic conductivity of the fissured rocks to 2×10-7 m/s after 5 days of treatment. These bioclogging technologies could have many advantages over conventional construction materials and processes and can be used in geotechnical engineering, agriculture and aquaculture, and environmental protection.
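For context, hydraulic conductivities like those quoted above are typically obtained from a constant-head permeameter test via Darcy's law, K = Q·L / (A·h), where Q is the flow rate, L the sample length, A the cross-sectional area, and h the head difference. The sketch below uses invented measurements, not the study's data:

```python
# Darcy's law for a constant-head test: K = Q*L / (A*h).
# All values below are invented for illustration; they are chosen only to
# land on the order of magnitude reported after biogrouting.
def darcy_conductivity(flow_m3_per_s, length_m, area_m2, head_m):
    return flow_m3_per_s * length_m / (area_m2 * head_m)

K = darcy_conductivity(flow_m3_per_s=4e-9, length_m=0.1, area_m2=2e-3, head_m=1.0)
print(f"{K:.0e}")  # 2e-07 m/s
```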

Keywords: biocementation, bioclogging, biogrouting, fractured rocks, porous soil, tunneling space

Procedia PDF Downloads 208
995 Explosive Clad Metals for Geothermal Energy Recovery

Authors: Heather Mroz

Abstract:

Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3), and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers, and pressure vessels, toward higher alloys of stainless steel, nickel-based alloys, and titanium. The use of these alloys in solid form is cost-prohibitive and does not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the cladded surface metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining their mechanical, electrical, and corrosion properties. The process is commonly used to clad steel with a thin layer of a corrosion-resistant alloy, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Additionally, explosion welding can join a wide array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and the tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefit in reduced maintenance and lower capital costs compared with solid construction. The focus of this paper will be the many benefits of using explosion clad in process equipment instead of more expensive solid alloy construction. The paper will describe the method of clad-plate production by explosion welding and the methods employed to ensure sound bonding of the metals, and will also cover the origins of explosion cladding as well as recent technological developments. Traditionally, explosion-clad plate was formed into vessels, tube sheets, and heads, but recent advances include explosion-welded piping. The final portion of the paper will give examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine will be discussed, including stainless steels, nickel alloys, and titanium. Examples will include heat exchangers (tube sheets), high-pressure and horizontal separators, standard-pressure crystallizers, piping, and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high-temperature, high-pressure, and corrosive environments are typical. Applications for explosion-clad metals include vessel and heat exchanger components as well as piping.

Keywords: clad metal, explosion welding, separator material, well casing material, piping material

Procedia PDF Downloads 154
994 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks in that the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is being investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy. This learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides us with a strong foundation to test our model. The inputs to the algorithms are the sensor data provided by the simulator such as velocity, distance from side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results. When using PPO algorithm, the reward is greater, and the acceleration, steering angle and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have come up with a dataset taken from the training of the agent with DDPG and PPO algorithms. It contains all the steps of the agent during one full training in the form: (all input values, acceleration, steering angle, break, loss, reward). 
This study can serve as a basis for tackling more complex road scenarios. Furthermore, it can be extended into the field of computer vision, using camera images to find the best policy.
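As a minimal illustration of the simplest of the three algorithms, the sketch below applies tabular Q-learning to a toy 1-D lane-keeping task. The environment, state discretization, and reward used here are invented for illustration; this is not the TORCS setup or the authors' code.

```python
import random

# Toy lane-keeping task: states are discretized lateral offsets from the
# lane centre (bin 3); actions steer left, keep straight, or steer right.
N_STATES = 7
ACTIONS = [-1, 0, 1]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move one bin in the steering direction; reward closeness to centre."""
    nxt = min(N_STATES - 1, max(0, state + action))
    return nxt, -abs(nxt - 3)  # reward is 0 at the centre, negative elsewhere

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
rng = random.Random(0)
for _ in range(2000):                   # episodes with random start states
    s = rng.randrange(N_STATES)
    for _ in range(20):                 # steps per episode
        if rng.random() < EPS:          # epsilon-greedy exploration
            a = rng.randrange(3)
        else:
            a = max(range(3), key=lambda i: Q[s][i])
        s2, r = step(s, ACTIONS[a])
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy: should steer back towards the centre bin from either side.
policy = [ACTIONS[max(range(3), key=lambda i: Q[s][i])] for s in range(N_STATES)]
print(policy)
```

The same update rule is what deep variants such as DDPG replace with a neural-network function approximator when, as in TORCS, the state is continuous.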

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 147
993 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until its closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada’s largest contaminated sites. Since the closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multi-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, the longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated, including a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate their ability to detect subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery.
Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability even when using different sampling methodologies, during three years of remediation compared to baseline, except for the detection of significant increases in total PAH concentrations noted during one year of remediation monitoring. The data confirmed the effectiveness of mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.

Keywords: contaminated sediment, monitoring, recovery, remediation

Procedia PDF Downloads 236
992 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical, and other properties. Probably the most interesting are the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, known as the filling factor. At small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have largely dielectric properties. The conductivity of the films increases with increasing filling factor until, finally, a transition into the metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the help of methods of computational physics. The study consists of two parts: -Generation of simulated composite and nanocomposite films. Techniques based on hard-sphere or soft-sphere models as well as on atomic modelling are used here, followed by characterization of the prepared composite structures by image analysis of their sections or projections. However, various morphological methods must be analysed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films.
-The charge transport in the composites was studied by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of the conducting paths in them, in dependence on the technology of composite film preparation.
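A minimal sketch of the hard-sphere generation step might look as follows: random sequential addition of non-overlapping spheres in a unit cube, followed by an estimate of the filling factor. The sphere radius, counts, and function names are hypothetical, not the authors' implementation.

```python
import math
import random

def generate_film(n_target, radius=0.05, box=1.0, max_tries=50000, seed=1):
    """Random sequential addition: place spheres one by one, rejecting overlaps."""
    rng = random.Random(seed)
    centres = []
    for _ in range(max_tries):
        if len(centres) == n_target:
            break
        c = [rng.uniform(radius, box - radius) for _ in range(3)]
        # Hard-sphere constraint: centres must be at least one diameter apart.
        if all(math.dist(c, o) >= 2 * radius for o in centres):
            centres.append(c)
    return centres

centres = generate_film(200)
sphere_vol = (4.0 / 3.0) * math.pi * 0.05 ** 3
fill_factor = len(centres) * sphere_vol / 1.0 ** 3  # metal volume / film volume
print(len(centres), round(fill_factor, 3))
```

Sections or projections of such a model structure are what the image-analysis step would then characterize; a soft-sphere variant would allow a bounded overlap instead of rejecting it outright.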

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 393
991 Advancing Customer Service Management Platform: Case Study of Social Media Applications

Authors: Iseoluwa Bukunmi Kolawole, Omowunmi Precious Isreal

Abstract:

Social media has completely revolutionized the way communication took place even a decade ago. It makes use of computer-mediated technologies that help in the creation and sharing of information. Social media may be defined as the production, consumption, and exchange of information across platforms for social interaction. Social media has become a forum in which customers look for information about companies to do business with and request answers to questions about their products and services. Customer service may be defined as the process of ensuring customers’ satisfaction by meeting and exceeding their wants. In delivering excellent customer service, knowing customers’ expectations and where they are reaching out is important in meeting and exceeding their wants. Facebook is one of the most used social media platforms, among others which include Twitter, Instagram, WhatsApp, and LinkedIn. This indicates customers are spending more time on social media platforms, which calls for improvement in customer service delivery over social media pages. Millions of people channel their issues, complaints, compliments, and inquiries through social media. This study has been able to identify what social media customers want, their expectations, and how they want brands and companies to respond to them. The applied research methodology used in this paper was a mixed-methods approach. The authors of the paper used qualitative methods, gathering the critical views of experts on social media and customer relationship management through interviews, to analyse the impact of social media on customer satisfaction. The authors also used quantitative methods, such as online surveys, to address issues at different stages and to gain insight into different aspects of the platforms, i.e., customers’ and companies’ perceptions of the effects of social media.
In this way, the study explores and gains a better understanding of how brands make use of social media as a customer relationship management tool. An exploratory research strategy was applied to analyse how companies need to create good customer support using social media in order to improve customer service delivery, customer retention, and referrals. Many companies have come to prefer social media platforms as a medium for handling customers’ queries and ensuring their satisfaction, because social media tools are considered more transparent and effective when dealing with customer relationship management.

Keywords: brands, customer service, information, social media

Procedia PDF Downloads 268
990 Harvesting Value-added Products Through Anodic Electrocatalytic Upgrading Intermediate Compounds Utilizing Biomass to Accelerating Hydrogen Evolution

Authors: Mehran Nozari-Asbemarz, Italo Pisano, Simin Arshi, Edmond Magner, James J. Leahy

Abstract:

Integrating electrolytic synthesis with renewable energy makes it feasible to address urgent environmental and energy challenges. Conventional water electrolyzers produce H₂ and O₂ concurrently, demanding additional gas separation procedures to prevent contamination of H₂ with O₂. Moreover, the oxygen evolution reaction (OER), which is sluggish and has a low overall energy conversion efficiency, does not deliver a significant value product at the electrode surface. Compared to conventional water electrolysis, integrating electrolytic hydrogen generation from water with thermodynamically more advantageous aqueous organic oxidation processes can increase energy conversion efficiency and create value-added compounds instead of oxygen at the anode. One strategy is to use renewable and sustainable carbon sources from biomass, which has a large annual production capacity and presents a significant opportunity to supplement carbon sourced from fossil fuels. Numerous catalytic techniques have been researched in order to utilize biomass economically. Because of its safe operating conditions, excellent energy efficiency, and reasonable control over production rate and selectivity via electrochemical parameters, electrocatalytic upgrading stands out as an appealing choice among the numerous biomass refinery technologies. Therefore, we propose a broad framework for coupling H₂ generation from water splitting with oxidative biomass upgrading processes. Representative biomass targets were considered for oxidative upgrading using a hierarchically porous CoFe-MOF/LDH @ Graphite Paper bifunctional electrocatalyst, including glucose, ethanol, benzyl alcohol, furfural, and 5-hydroxymethylfurfural (HMF). The potential required to support 50 mA cm-2 is considerably lower (by ~380 mV) than the potential required for the OER. All of these compounds can be oxidized to yield liquid byproducts with economic benefit.
The electrocatalytic oxidation of glucose to the value-added products gluconic acid, glucuronic acid, and glucaric acid was examined in detail. The cell potential for combined H₂ production and glucose oxidation was substantially lower than for water splitting (1.44 V(RHE) vs. 1.82 V(RHE) at 50 mA cm-2). Moreover, the oxidation byproduct at the anode was significantly more valuable than O₂, taking advantage of the more favourable glucose oxidation compared to the OER. Overall, such a combination of the HER and oxidative biomass valorization using electrocatalysts prevents the production of potentially explosive H₂/O₂ mixtures and produces high-value products at both electrodes with a lower voltage input, thereby increasing the efficiency and activity of electrocatalytic conversion.
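The quoted potentials already allow a back-of-the-envelope estimate of the energy benefit. The only assumption here is that, at a fixed charge (i.e. a fixed amount of H₂ produced), the electrical energy scales with the cell voltage.

```python
# Cell potentials at 50 mA cm-2, as reported in the abstract.
E_water_splitting = 1.82   # V(RHE), conventional water electrolysis
E_glucose_assisted = 1.44  # V(RHE), H2 production coupled to glucose oxidation

# At fixed charge, electrical energy = charge * voltage, so the relative
# saving is simply the relative voltage reduction.
saving_pct = 100.0 * (E_water_splitting - E_glucose_assisted) / E_water_splitting
print(round(saving_pct, 1))  # relative electrical-energy saving, percent
```

This gives a roughly one-fifth reduction in electrical energy per unit of H₂, before counting the added value of the anodic oxidation products.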

Keywords: biomass, electrocatalytic, glucose oxidation, hydrogen evolution

Procedia PDF Downloads 96
989 Challenges in Environmental Governance: A Case Study of Risk Perceptions of Environmental Agencies Involved in Flood Management in the Hawkesbury-Nepean Region, Australia

Authors: S. Masud, J. Merson, D. F. Robinson

Abstract:

The management of environmental resources requires the engagement of a range of stakeholders, including public/private agencies and different community groups, to implement sustainable conservation practices. The challenge that is often ignored is the analysis of the agencies involved and their power relations. One of the barriers identified is the difference in risk perceptions among the agencies involved, which leads to disjointed efforts at assessing and managing risks. Wood et al. (2012) explain that it is important to have an integrated approach to risk management in which decision makers address stakeholder perspectives. This is critical for an effective risk management policy. This abstract is part of PhD research that looks into barriers to flood management under a changing climate and intends to identify bottlenecks that create maladaptation. Experiences are drawn from international practices in the UK and examined in the context of Australia by exploring flood governance in a highly flood-prone region: the Hawkesbury-Nepean catchment as a case study. In this research study several aspects of governance and management are explored: (i) the complexities created by the way different agencies are involved in assessing flood risks; (ii) different perceptions of acceptable flood risk levels; (iii) perceptions of community engagement in defining acceptable flood risk levels; (iv) views on a holistic flood risk management approach; and (v) challenges of a centralised information system. The study concludes that the complexity of managing a large catchment is exacerbated by differences in the way professionals perceive the problem.
This has led to: (a) different standards for acceptable risks; (b) inconsistent attempts to set up a regional-scale flood management plan beyond jurisdictional boundaries; (c) the absence of a regional-scale agency with a licence to share and update information; and (d) a lack of forums for dialogue with insurance companies to ensure an integrated approach to flood management. The research takes the Hawkesbury-Nepean catchment as a case example and draws on literary evidence from around the world. In addition, conclusions were extrapolated from eighteen semi-structured interviews with agencies involved in flood risk management in the Hawkesbury-Nepean catchment of NSW, Australia. The outcome of this research is to provide a better understanding of the complexity of assessing risks against a rapidly changing climate and to contribute towards developing effective risk communication strategies, thus enabling better management of floods and achieving an increased level of support from insurance companies, real-estate agencies, state and regional risk managers, and the affected communities.

Keywords: adaptive governance, flood management, flood risk communication, stakeholder risk perceptions

Procedia PDF Downloads 286
988 Strategies of Translation: Unlocking the Secret of 'Locksley Hall'

Authors: Raja Lahiani

Abstract:

'Locksley Hall' is a poem that Lord Alfred Tennyson (1809-1892) published in 1842. It is believed to be his first attempt to face as a poet some of the most painful of his experiences, as it is a study of his rising out of sickness into health, conquering his selfish sorrow by faith and hope. So far, in Victorian scholarship as in modern criticism, 'Locksley Hall' has been studied and approached as a canonical Victorian English poem. The aim of this project is to prove that some strategies of translation were used in this poem in such a way as to guarantee its assimilation into the English canon and hence efface to a large extent its Arabic roots. In its relationship with its source text, 'Locksley Hall' is at the same time mimetic and imitative. As part of the terminology used in translation studies, ‘imitation’ means almost the exact opposite of what it means in ordinary English. By adopting an imitative procedure, a translator would do something totally different from the original author, wandering far and freely from the words and sense of the original text. An imitation is thus aimed at an audience which wants the work of the particular translator rather than the work of the original poet. Hallam Tennyson, the poet’s biographer, asserts that 'Locksley Hall' is a simple invention of place, incidents, and people, though he notes that he remembers the poet claiming that Sir William Jones’ prose translation of the Mu‘allaqat (pre-Islamic poems) gave him the idea of the poem. A comparative work would prove that 'Locksley Hall' mirrors a great deal of Tennyson’s biography and hence is not a simple invention of details as asserted by his biographer. It would be challenging to prove that 'Locksley Hall' shares so many details with the Mu‘allaqat, as declared by Tennyson himself, that it needs to be studied as an imitation of the Mu‘allaqat of Imru’ al-Qays and ‘Antara in addition to its being a poem in its own right. 
Thus, the main aim of this work is to unveil the imitative and mimetic strategies used by Tennyson in his composition of 'Locksley Hall.' It is equally important that this project research the acculturating, assimilative tools used by the poet to root his poem in its Victorian English literary, cultural, and spatiotemporal settings. This work adopts a comparative methodology, with comparison done at different levels. The poem will be contextualized in its Victorian English literary framework. Alien details related to structure, socio-spatial setting, imagery, and sound effects shall be compared to Arabic poems from the Mu‘allaqat collection. This would determine whether the poem is a translation, an adaptation, an imitation, or a genuine work. The ultimate objective of the project is to unveil in this canonical poem a new dimension that has long been either marginalized or ignored. By proving that 'Locksley Hall' is an imitation of classical Arabic poetry, the project aspires to consolidate its literary value and open up new gates for accessing it.

Keywords: comparative literature, imitation, Locksley Hall, Lord Alfred Tennyson, translation, Victorian poetry

Procedia PDF Downloads 201
987 Importance of Different Spatial Parameters in Water Quality Analysis within Intensive Agricultural Area

Authors: Marina Bubalo, Davor Romić, Stjepan Husnjak, Helena Bakić

Abstract:

Even though European Council Directive 91/676/EEC, known as the Nitrates Directive, was adopted in 1991, the issue of water quality preservation in areas of intensive agricultural production still persists all over Europe. High nitrate nitrogen concentrations in surface water and groundwater originating from diffuse sources are one of the most important environmental problems in modern intensive agriculture. The fate of nitrogen in soil, surface water and groundwater in agricultural areas is mostly affected by anthropogenic activity (i.e. agricultural practice) and by hydrological and climatological conditions. The aim of this study was to identify the impact of land use, soil type, soil vulnerability to pollutant percolation, and natural aquifer vulnerability on nitrate occurrence in surface water and groundwater within an intensive agricultural area. The study was set in Varaždin County (northern Croatia), which is under the significant influence of the large rivers Drava and Mura; as a result, the entire area is dominated by alluvial soil with a shallow active profile, mainly on a gravel base. The negative agricultural impact on water quality in this area is evident; therefore, half of the selected county is part of the delineated nitrate vulnerable zones (NVZ). Data on water quality were collected from 7 surface water and 8 groundwater monitoring stations in the county. A recent study of the area also included a detailed inventory of agricultural production and fertilizer use, with the aim of producing a new agricultural land use database as one of the dominant parameters. The analysis of this database, done using ArcGIS 10.1, showed that 52.7% of the total county area is agricultural land and 59.2% of the agricultural land is used for intensive agricultural production. On the other hand, 56% of the soil within the county is classified as vulnerable to pollutant percolation. The situation is similar with natural aquifer vulnerability; the northern part of the county ranges from high to very high aquifer vulnerability.
Statistical analysis of the water quality data was done using SPSS 13.0. Cluster analysis grouped both surface water and groundwater stations into two groups according to nitrate nitrogen concentrations. The mean nitrate nitrogen concentration in surface water group 1 ranges from 4.2 to 5.5 mg/l and in surface water group 2 from 24 to 42 mg/l. The results are similar, but evidently higher, in groundwater samples; the mean nitrate nitrogen concentration in group 1 ranges from 3.9 to 17 mg/l and in group 2 from 36 to 96 mg/l. ANOVA confirmed the statistical significance of the grouping of stations. The previously listed parameters (land use, soil type, etc.) were used in factorial correspondence analysis (FCA) to detect the importance of each parameter for local water quality. Since these parameters mostly cannot be altered, there is an obvious need for more precise and better-adapted land management in such conditions.
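The two-group split described above can be illustrated with a minimal 1-D k-means clustering. The station values below are invented to mirror the reported concentration ranges; the study itself used SPSS, not this code.

```python
def kmeans_1d(values, k=2, iters=50):
    """Minimal 1-D k-means: assign each value to the nearest centre, then update."""
    centres = [min(values), max(values)]  # spread the initial centres
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centres[i]))].append(v)
        # Move each centre to the mean of its group (keep it if the group is empty).
        centres = [sum(g) / len(g) if g else centres[i] for i, g in enumerate(groups)]
    return centres, groups

# Hypothetical surface-station means (mg/l nitrate-N), invented for the sketch.
surface_no3 = [4.2, 4.8, 5.5, 24.0, 31.0, 42.0, 5.1]
centres, groups = kmeans_1d(surface_no3)
print([round(c, 1) for c in sorted(centres)])  # low-nitrate vs high-nitrate centres
```

With values like these, the stations separate cleanly into a low-nitrate and a high-nitrate group, which is the structure the reported cluster analysis describes.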

Keywords: agricultural area, nitrate, factorial correspondence analysis, water quality

Procedia PDF Downloads 259
986 Photochemical Behaviour of Carbamazepine in Natural Waters

Authors: Fanny Desbiolles, Laure Malleret, Isabelle Laffont-Schwob, Christophe Tiliacos, Anne Piram, Mohamed Sarakha, Pascal Wong-Wah-Chung

Abstract:

Pharmaceuticals in the environment have become a very hot topic in recent years. This interest is related to the large amounts dispensed and to their release in urine or faeces from treated patients, resulting in their ubiquitous presence in water resources and wastewater treatment plant (WWTP) effluents. Thus, many studies have focused on the prediction of pharmaceuticals’ behaviour, to assess their fate and impacts in the environment. Carbamazepine is a widely consumed psychotropic pharmaceutical and is thus one of the most commonly detected drugs in the environment. This organic pollutant has proved to be persistent, especially with respect to its non-biodegradability, rendering it recalcitrant to usual biological treatment processes. Consequently, carbamazepine is removed very little in WWTPs, with a maximum abatement rate of 5%, and is then often released into natural surface waters. To better assess the environmental fate of carbamazepine in aqueous media, its photochemical transformation was studied in four natural waters (two French rivers, the Berre salt lagoon, and Mediterranean Sea water) representative of coastal and inland water types. Kinetic experiments were performed in the presence of light using simulated solar irradiation (300 W Xe lamp). The formation of short-lifetime species was highlighted using chemical traps and nanosecond laser flash photolysis. Identification of transformation by-products was carried out by LC-QToF-MS analyses. Carbamazepine degradation was observed after a four-day exposure, and an abatement of 20% maximum was measured, yielding the formation of many by-products. Moreover, the formation of hydroxyl radicals (•OH) in the waters was evidenced using terephthalic acid as a probe, considering the photochemical instability of its specific hydroxylated derivative. Correlations were implemented using the carbamazepine degradation rate, the estimated hydroxyl radical formation, and the chemical contents of the waters.
In addition, laser flash photolysis studies confirmed •OH formation and allowed other reactive species to be evidenced, such as chloride (Cl2•-), bromine (Br2•-), and carbonate (CO3•-) radicals, in the natural waters. The radicals mainly originate from the dissolved phase, and their occurrence and abundance depend on the type of water. Rate constants between the reactive species and carbamazepine were determined by laser flash photolysis and competitive reaction experiments. Moreover, LC-QToF-MS analyses of the by-products helped us propose mechanistic pathways. The results bring insights into the fate of carbamazepine in various water types and could help to evaluate potential ecotoxicological effects more precisely.
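Assuming pseudo-first-order kinetics (an assumption on our part, not stated in the abstract), the quoted ~20% abatement over a four-day exposure translates directly into an apparent rate constant and half-life:

```python
import math

# Pseudo-first-order model: C(t) = C0 * exp(-k * t).
# A 20 % abatement after 4 days means C/C0 = 0.80 at t = 4 d.
k = -math.log(0.80) / 4.0          # apparent rate constant, per day
half_life = math.log(2.0) / k      # time for the concentration to halve, days
print(round(k, 4), round(half_life, 1))
```

The resulting half-life on the order of weeks is consistent with the abstract's description of carbamazepine as a slowly photodegrading, persistent pollutant.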

Keywords: carbamazepine, kinetic and mechanistic approaches, natural waters, photodegradation

Procedia PDF Downloads 380
985 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
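The AGM loop can be sketched on a linear stand-in for the NSE: the 1-D periodic heat equation. Since the discrete heat operator is self-adjoint, the adjoint transport of the residual is again forward diffusion, so each descent step is stable, exactly as argued above. Grid size, step counts, and the learning rate below are all hypothetical toy choices, not the paper's setup.

```python
import math

N, STEPS, NU = 64, 200, 0.2   # grid points, time steps, diffusion number (stable: NU <= 0.5)

def diffuse(u):
    """Explicit forward-in-time diffusion on a periodic 1-D grid (the 'forward NSE')."""
    for _ in range(STEPS):
        u = [u[i] + NU * (u[(i - 1) % N] - 2 * u[i] + u[(i + 1) % N]) for i in range(N)]
    return u

# Target field v1: the forward image of a known initial condition (a single sine mode).
v1 = diffuse([math.sin(2 * math.pi * i / N) for i in range(N)])

u0 = [0.0] * N                                 # initial guess for the initial field
for _ in range(100):                           # gradient descent on the cost J = 0.5*||u1 - v1||^2
    resid = [a - b for a, b in zip(diffuse(u0), v1)]
    grad = diffuse(resid)                      # adjoint transport of the residual (self-adjoint op)
    u0 = [u - 2.0 * g for u, g in zip(u0, grad)]

err = max(abs(a - b) for a, b in zip(diffuse(u0), v1))
print(round(err, 4))
```

Running the descent drives the forward image of the recovered u0 onto the target, i.e. the backward problem is solved using only stable forward integrations, which is the essence of the AGM.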

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 224
984 Investigation of the Usability of Biochars Obtained from Olive Pomace and Smashed Olive Seeds as Additives for Bituminous Binders

Authors: Muhammed Ertugrul Celoglu, Beyza Furtana, Mehmet Yilmaz, Baha Vural Kok

Abstract:

Biomass, which is considered to be one of the largest renewable energy sources in the world, has the potential to be utilized as a bitumen additive after it is processed by a wide variety of thermochemical methods. Furthermore, biomasses are renewable over short periods of time, and they possess a hydrocarbon structure. These characteristics promote the usability of biomass as an additive. One of the most common ways to create materials with significant economic value from biomass is the process of pyrolysis. Pyrolysis is defined as the thermochemical degradation (carbonization) of organic matter at high temperature in an anaerobic environment. The resultant liquid substance at the end of pyrolysis is defined as bio-oil, whereas the resultant solid substance is defined as biochar. Olive pomace is the mildly oily pulp with seeds that remains after olives are pressed and their oil is extracted. As the waste of olive oil factories, it is a significant source of biomass. Because olive pomace is a waste material, it can create problems, just like other wastes, unless appropriate and acceptable areas of utilization are found. This waste material, which is generated in large amounts, is generally used as fuel and fertilizer. Additive materials are generally used in order to improve the properties of bituminous binders, and these are usually expensive, chemically produced materials. The aim of this study is to investigate the usability of the biochars obtained after subjecting olive pomace and smashed olive seeds, which are considered waste materials, to pyrolysis as additives in bitumen modification. In this way, various uses will be provided for the waste material, offering both economic and environmental benefits. In this study, olive pomace and smashed olive seeds were used as sources of biomass. Initially, both materials were ground and passed through a No. 50 sieve.
Both sieved materials were subjected to pyrolysis (carbonization) at 400 ℃. Following the pyrolysis process, bio-oil and biochar were obtained. The obtained biochars were added to B160/220 grade pure bitumen at 10% and 15% rates, and modified bitumens were obtained by blending in a high-shear mixer at 180 ℃ and 2000 rpm for 1 hour. The pure bitumen and the four different binders obtained as a result of the modifications were tested by penetration, softening point, rotational viscometer, and dynamic shear rheometer tests, evaluating the effects of the additives and the additive ratios. According to the test results obtained, both biochar modifications at both ratios improved the performance of the pure bitumen. A comparison of the test results of the binders modified with the biochars of olive pomace and smashed olive seed revealed no notable difference in their performance.

Keywords: bituminous binders, biochar, biomass, olive pomace, pomace, pyrolysis

Procedia PDF Downloads 132
983 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Due to the fact that CT doses are among the highest in diagnostic radiology practice, it is of great significance to be aware of the patient’s CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a great number of patients during the most frequent CT examinations, to compare the CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and on different scanner models. The output CT dose index measurements were carried out on single-slice and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data regarding 15,000 patient examinations of routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals. Of the 15,000 examinations, 5000 were head CT examinations, 5000 were chest CT examinations, and 5000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen procedures, respectively, for the protocols mentioned above.
Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer scan lengths and at finer resolutions as permitted by helical and multislice technology. Also, some of the CT scanners have used smaller slice thickness for routine CT procedures to achieve better resolution and image quality. It leads to an increase in the patient radiation dose as well as the measured CTDIv, so it is suggested that such CT scanners should select appropriate slice thickness and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest and abdomen procedures are optimized than the dose indices would be optimal and lead to the lowering of the CT doses. In South Indian region all the CT machines were routinely tested for QA once in a year as per AERB requirements.
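The divergence statistic reported in this abstract can be illustrated with a minimal sketch: the mean percentage deviation of displayed CTDIvol from phantom-measured CTDIvol over a set of examinations. The readings below are invented for illustration, not the study's data.

```python
# Sketch: mean divergence between displayed and measured CTDIvol.
# All numeric values are hypothetical, not the study's raw data.

def mean_divergence(displayed, measured):
    """Mean percentage divergence of displayed CTDIvol from measured CTDIvol."""
    pairs = list(zip(displayed, measured))
    return sum(100.0 * (d - m) / m for d, m in pairs) / len(pairs)

# Illustrative head-protocol readings (mGy).
displayed_head = [62.0, 58.5, 60.2]
measured_head = [59.1, 55.4, 57.6]
print(round(mean_divergence(displayed_head, measured_head), 1))  # prints 5.0
```

A positive divergence means the scanner console over-reports the dose relative to the phantom measurement; a negative value, as found for the abdomen protocol, means it under-reports.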

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 256
982 Development of Beeswax-Discharge Writing Material for Visually Impaired Persons

Authors: K. Doi, T. Nishimura, H. Fujimoto, T. Tanaka

Abstract:

It is known that visually impaired persons have problems accessing visual information; therefore, information accessibility for visually impaired persons is very important in the current information society. Application software with read-aloud functions for personal computers and smartphones is becoming more and more popular among visually impaired persons around the world. On the other hand, it is also very important for them to be able to learn how to read and write characters, both Braille and visual characters. The Braille typewriter has been widely used in learning Braille, and raised-line drawing kits have been used for decades as writing material, especially by persons with acquired visual impairment. However, these kits have drawbacks: drawn lines cannot be erased, and the visibility of drawn lines is poor for visually impaired persons with low vision. We received a significant number of requests to develop a new writing material, especially for persons with acquired visual impairment, to replace raised-line drawing kits. To conduct this development research on a novel writing material, we received a research grant from the Ministry of Health, Labour and Welfare of the Japanese government. In this research, we developed pen- and pencil-type beeswax-discharge writing materials to replace conventional raised-line drawing kits. This writing material was equipped with a cartridge heater for melting beeswax and a heat controller. When the user holds the pen tip down on regular paper such as fine paper, the melted beeswax is discharged from the pen tip through a valve structure. The beeswax was discharged at 100 gf of holding-down force, based on the results of our previous trial study. The shape of the pen tip was semispherical to reduce friction between the pen tip and the paper surface. We conducted a basic experiment to evaluate the influence of the pen-tip curvature on ease of writing.
Concretely, the curvature radii tested were 0.15, 0.35, 0.50, and 1.00 mm. Four interval scales were used as indexes of subjective assessment during writing: feeling of smooth pen motion, feeling of comfortable writing, sense of security, and feeling of writing fatigue. Ten subjects participated in this experiment. The results reveal that subjects could draw easily when the radius of the pen tip was 1.00 mm, and that lines drawn with the beeswax-discharge writing material were easy to perceive.

Keywords: beeswax-discharge writing material, raised-line drawing kits, visually impaired persons, pen tip

Procedia PDF Downloads 308
981 Queer Anti-Urbanism: An Exploration of Queer Space Through Design

Authors: William Creighton, Jan Smitheram

Abstract:

Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring's work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and desire to destroy the city towards a mode of queer critique that counters normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame, where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a “design as research” methodology, the design outputs offer a vehicle to ask how we might live, otherwise, in architectural space. Design as research is a process of questioning, designing and reflecting in a non-linear, iterative approach; here it establishes itself through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship: the project began by exploring the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not meant to privilege the objective, the public and the large scale; instead, ‘intra-scaling’ acts as a tool to re-think how scale reproduces normative ideas of the identity of space. There was a queering of scale.
Through this approach, the first result was an installation that brings two people together to co-author space; the installation distorts the sensory experience and forces a more intimate and interconnected experience, challenging our socialized proxemics: knees might touch. To queer the home, the installation was then used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and a way to process practical information about the site and the existing house – the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through “private” and “public” to support kinship through communal labour, queer relationality and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to queer relationality and interdependence by making spaces that support the unsettled and out-of-place – but is it queer enough?

Keywords: queer, queer anti-urbanism, design as research, design

Procedia PDF Downloads 176
980 Development Project, Land Acquisition and Rehabilitation: A Study of Navi Mumbai International Airport Project, India

Authors: Rahul Rajak, Archana Kumari Roy

Abstract:

Purpose: Development brings about structural change in society. It is essential for socio-economic progress, but it also causes pain to the people who are forced to move from their motherland. Most of the people displaced by development are poor or belong to tribes. Development and displacement are interlinked in the sense that development sometimes leads to the displacement of people. This study mainly focuses on the socio-economic profile of villages and villagers likely to be affected by the Airport Project, and it examines the issues of compensation and people's level of satisfaction. Methodology: The study is based on a descriptive design; it is basically an observational and correlational study using primary data. Considering time and resource constraints, 100 people covering socio-economic and demographic diversities were interviewed in 6 of the 10 affected villages. Due to the Navi Mumbai International Airport Project, ten villages have to be displaced; this study covers only six of them: Ulwe, Ganeshpuri, Targhar Komberbuje, Chincpada and Kopar. All six villages are situated in Raigarh district, Panvel Taluka, Maharashtra. Findings: The survey revealed three main castes in the affected villages: Agri, Koli, and Kradi. The migrant population in these villages is negligible. The main occupations of all three castes are agriculture and fishing. People's perceptions revealed that, with the establishment of the airport project, they may gain more opportunities and scope for development rather than adverse effects, but being forced to leave their motherland has a psychological effect on the villagers. Research Limitation: This study is based on only six villages; it does not describe the scenario of all ten affected villages.
Practical implication: The scenario of displacement and resettlement signifies more than mere physical relocation. Compensation is not the only hope for villagers; it gives only short-term relief. There is a need to evolve institutions to protect and strengthen the rights of individuals. Development-induced displacement exposed them to a new reality: the legality or illegality of their stay on land which belongs to the state. Originality: Mumbai's large population and high industrialization have put land at the center of any policy implication. This paper demonstrates, through the actual picture gathered from the field, how seriously the affected people suffered and are still suffering because of the land acquisition for the Navi Mumbai International Airport Project. The whole picture raises the question of how long the government can deny the rights of farmers and agricultural laborers and remain unwilling to establish a balance between democracy and development.

Keywords: compensation, displacement, land acquisition, project affected person (PAPs), rehabilitation

Procedia PDF Downloads 316
979 'Innovations among People' in Selected Social Economy Enterprises in Poland

Authors: Hanna Kroczak

Abstract:

In Poland, the system of social and professional reintegration of people at risk of social exclusion is, in fact, based on the activity of social economy enterprises. Playing this significant role, these entities have to cope with various problems related to the necessity of succeeding on the open market, location in peripheral (especially rural) areas, or the “socialist heritage” in social and economic relations, which is certainly not favorable for implementing the idea of activation policy. One of the main objectives of the project entitled “Innovation among people. The analysis of the innovations creation and implementation in companies and social economy enterprises operating in Poland” was to investigate the innovativeness of Polish social economy entities as a possible way for them to prosper (the project was funded by a Polish National Science Centre grant under decision DEC-2013/11/B/HS4/00691). The ethnographic research in this matter was conducted in 2015 in two parts: six three-day studies using participant observation and individual in-depth interview (IDI) techniques (in three social cooperatives and three social integration centres), and two one-month shadowings (in one social cooperative and one social integration centre). Enterprises were selected from various provinces in Poland on the basis of data from previous computer-assisted telephone interviewing (CATI) research, in which they declared that innovation management is a central element of their strategy. The ethnographic study revealed that they do indeed create innovations, the main types being social and organisational innovations – but not always, and not all employees are aware of it.
Moreover, it turned out that wherever the research was conducted, researchers found similar enablers of the innovation-creation process, such as a “charismatic leader”, true passion and commitment not dependent on the money earned, or the building of local institutional networks, and similar threats, e.g., under-staffed offices or the heavy bureaucracy of some institutions. The primary conclusion for the studied entities is that being innovative is not only a challenge and, at the same time, an opportunity for well-being, but a necessity, something deeply rooted in their specific organisational structures. Explanations and illustrations for these statements will be presented in the proposed paper.

Keywords: ethnographic research, innovation, Polish social economy, professional reintegration, social economy enterprises, social reintegration

Procedia PDF Downloads 206
978 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol, or design, or a combination of these, intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors; perception influences purchase decisions, so building that perception is critical. The FMCG industry is a low-margin business in which volumes hold the key to success; the industry therefore places a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing them. Brand loyalty is fickle; companies know this, which is why they relentlessly work towards brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It has been hypothesized that, after liberalization, Indian companies have taken up the challenge of globalization and some are giving stiff competition to MNCs; that MNCs have a stronger brand image than Indian companies; and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time series data are available from 1996-2014 for the eight selected companies. These companies were selected on the basis of their large market share, brand equity and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Brand value is estimated as the excess of a company's market capitalization over its net worth, and brand value indices are calculated.
The correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. The major results indicate that, although MNCs enjoy a stronger brand image, a few Indian companies compete strongly: ITC is the outstanding leader in terms of market capitalization and brand value, while Dabur and Tata Global Beverages Ltd compete equally well on these values. Advertisement expenditures are highest for HUL, followed by ITC, Colgate and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand-building process, many other factors also affect it. Moreover, brand values are decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to put consistent, proactive and relentless effort into their brand-building process. Brands need focus and consistency; brand longevity without innovation leads to brand respect but does not create brand value.
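The two quantitative steps described above can be sketched briefly: brand value as the excess of market capitalization over net worth, and the trend growth rate from a log-linear regression over time. All figures below are invented for illustration, not Prowess data.

```python
# Sketch of the abstract's brand-value proxy and trend-growth estimation.
# All numbers are hypothetical illustrative figures, not CMIE Prowess data.
import math

def brand_value(market_cap, net_worth):
    """Brand value proxy: excess of market capitalization over net worth."""
    return market_cap - net_worth

def trend_growth_rate(series):
    """OLS slope of log(value) on time, converted to a per-period growth rate."""
    n = len(series)
    xs = list(range(n))
    ys = [math.log(v) for v in series]
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(slope) - 1.0

market_caps = [1200.0, 1380.0, 1590.0, 1820.0]  # e.g. crores, per year
net_worths = [400.0, 430.0, 465.0, 500.0]
values = [brand_value(m, n) for m, n in zip(market_caps, net_worths)]
print(round(trend_growth_rate(values), 3))
```

In practice the study regresses each series separately and then correlates the resulting brand values with advertising expenditure; the sketch only shows the two core computations.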

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 356
977 Low-carbon Footprint Diluents in Solvent Extraction for Lithium-ion Battery Recycling

Authors: Abdoulaye Maihatchi Ahamed, Zubin Arora, Benjamin Swobada, Jean-yves Lansot, Alexandre Chagnes

Abstract:

The lithium-ion battery (LiB) is the technology of choice in the development of electric vehicles. But there are still many challenges, including the development of positive electrode materials exhibiting high cycleability, high energy density, and low environmental impact. For the latter, LiBs must be manufactured in a circular approach, by developing appropriate strategies to reuse and recycle them. Presently, the recycling of LiBs is carried out by the pyrometallurgical route, but more and more processes implement, or will implement, the hydrometallurgical route or a combination of pyrometallurgical and hydrometallurgical operations. After the black mass is produced by mineral processing, the hydrometallurgical process consists in leaching the black mass in order to take up the metals contained in the cathodic material. These metals are then extracted selectively by liquid-liquid extraction, solid-liquid extraction, and/or precipitation stages. Liquid-liquid extraction combined with precipitation/crystallization steps is the most widely implemented operation in LiB recycling processes: it selectively extracts copper, aluminum, cobalt, nickel, manganese, and lithium from the leaching solution and precipitates these metals as high-grade sulfate or carbonate salts. Liquid-liquid extraction consists in contacting an organic solvent with an aqueous feed solution containing several metals, including the target metal(s) to extract. The organic phase is immiscible with the aqueous phase. It is composed of an extractant to extract the target metals and a diluent, which is usually an aliphatic kerosene produced by the petroleum industry. Sometimes, a phase modifier is added to the formulation of the extraction solvent to avoid third-phase formation. The extraction properties of the solvent do not depend only on the chemical structure of the extractant; they may also depend on the nature of the diluent.
Indeed, diluent-diluent interactions can influence, to a greater or lesser extent, the interactions between extractant molecules, in addition to the extractant-diluent interactions. Only a few studies in the literature have addressed the influence of the diluent on extraction properties, while many have focused on the effect of the extractants. Recently, new low-carbon footprint aliphatic diluents were produced by catalytic dearomatisation and distillation of bio-based oil. This study aims at investigating the influence of the nature of the diluent on the extraction properties of three extractants towards cobalt, nickel, manganese, copper, aluminum, and lithium: Cyanex®272 for nickel-cobalt separation, DEHPA for manganese extraction, and Acorga M5640 for copper extraction. The diluents used in the formulation of the extraction solvents are (i) low-odor aliphatic kerosenes produced by the petroleum industry (ELIXORE 180, ELIXORE 230, ELIXORE 205, and ISANE IP 175) and (ii) bio-sourced aliphatic diluents (DEV 2138, DEV 2139, DEV 1763, DEV 2160, DEV 2161 and DEV 2063). After discussing the effect of the diluents on the extraction properties, this paper will address the development of a low-carbon footprint process based on the best bio-sourced diluent for the production of high-grade cobalt sulfate, nickel sulfate, manganese sulfate, and lithium carbonate, as well as copper metal.

Keywords: diluent, hydrometallurgy, lithium-ion battery, recycling

Procedia PDF Downloads 88
976 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. On the first stage, the operation of different energy systems is calculated in simulation models in terms of the resulting final energy demands; on the second stage, these results serve as input for a MILP optimization in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers, but it necessitates simplifying the building energy system operation.
Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
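The core decision structure described above, binary choices of which buildings to modernize under a yearly budget, can be illustrated with a deliberately tiny sketch. This is not the paper's model: it uses invented costs and CO2 reductions and solves the binary program by brute-force enumeration rather than a MILP solver, which is only feasible because the example has four buildings.

```python
# Toy sketch of budget-constrained modernization prioritization:
# binary decisions per building, maximize CO2 reduction within a budget.
# Invented figures; brute force stands in for a MILP solver.
from itertools import product

buildings = [  # (name, modernization cost, yearly CO2 reduction)
    ("A", 60.0, 12.0),
    ("B", 45.0, 10.0),
    ("C", 80.0, 13.0),
    ("D", 30.0, 7.0),
]
budget = 110.0

best_choice, best_reduction = None, -1.0
for choice in product([0, 1], repeat=len(buildings)):
    cost = sum(x * b[1] for x, b in zip(choice, buildings))
    reduction = sum(x * b[2] for x, b in zip(choice, buildings))
    if cost <= budget and reduction > best_reduction:
        best_choice, best_reduction = choice, reduction

selected = [b[0] for x, b in zip(best_choice, buildings) if x]
print(selected, best_reduction)  # -> ['A', 'B'] 22.0
```

A real modernization-pathway model additionally indexes these decisions over years, couples them to simplified operation constraints, and is handed to a MILP solver; the sketch only shows the knapsack-like core of the prioritization.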

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 34
975 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen

Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev

Abstract:

The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed the Rapid-E+, which extends the abilities of the Rapid-E to detect smaller bioaerosols, such as bacteria and fungal spores with diameters down to 0.3 µm, while keeping similar or even better capability for measurements of large bioaerosols like pollen. Rapid-E+ enables simultaneous measurements of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, and (3) the fluorescence lifetime of individual particles. Moreover, (4) it provides 2D Mie scattering images which give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. First, pure species of microbes, e.g., Bacillus subtilis (a species of bacteria) and Penicillium chrysogenum (a species of fungal spores), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested the microbes at different concentrations, using several data-analysis steps to classify and identify them. All single particles were analysed by their light-scattering and fluorescence parameters in the following steps: (1) they were treated with a smart filter block to remove non-microbe particles; (2) a classification algorithm verified that the filtered particles were microbes, based on the calibration data; (3) a user-defined probability threshold step assigns each particle a probability of being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ identified microbes simultaneously, based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). Using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. Further classification yielded precisions of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively.
The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real-time. The bacteria and fungi were aerosolized again in the chamber at different concentrations, and Rapid-E+ classified the different types of microbes and then quantified them in real-time. Rapid-E+ can also identify pollen down to the species level, with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument which classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, the user can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles that contain polycyclic aromatic hydrocarbons).
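The three-step per-particle pipeline described in this abstract (smart filter, classifier probability, user-defined threshold) can be sketched as a simple filtering chain. The particle records and probabilities below are invented stand-ins; the real classifier output comes from the instrument's machine-learning models.

```python
# Sketch of the three-step particle pipeline: filter block, classifier
# probability, and user-defined probability threshold.
# Probabilities are invented, not Rapid-E+ outputs.

def classify_particles(particles, threshold=0.9):
    """Return ids of particles accepted as microbes by the pipeline."""
    microbes = []
    for p in particles:
        if not p["passes_filter"]:       # step 1: smart filter block
            continue
        prob = p["microbe_probability"]  # step 2: classifier output (0..1)
        if prob >= threshold:            # step 3: user-defined threshold
            microbes.append(p["id"])
    return microbes

particles = [
    {"id": 1, "passes_filter": True, "microbe_probability": 0.97},
    {"id": 2, "passes_filter": False, "microbe_probability": 0.99},
    {"id": 3, "passes_filter": True, "microbe_probability": 0.42},
]
print(classify_particles(particles))  # -> [1]
```

Lowering the threshold trades precision for recall, which is why the abstract emphasizes that the threshold is defined by the user.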

Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms

Procedia PDF Downloads 90