Search results for: large deflection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7222

802 MiRNA Expression Profile Differs in Human Amniotic Mesenchymal Stem Cells Isolated from Obese Compared to Normal Weight Women

Authors: Carmela Nardelli, Laura Iaffaldano, Valentina Capobianco, Antonietta Tafuto, Maddalena Ferrigno, Angela Capone, Giuseppe Maria Maruotti, Maddalena Raia, Rosa Di Noto, Luigi Del Vecchio, Pasquale Martinelli, Lucio Pastore, Lucia Sacchetti

Abstract:

Maternal obesity and nutrient excess in utero increase the risk of future metabolic diseases in adult life. The mechanisms underlying this process are probably based on genetic and epigenetic alterations and on changes in foetal nutrient supply. In mammals, the placenta is the main interface between foetus and mother; it regulates intrauterine development, modulates adaptive responses to suboptimal in utero conditions, and is also an important source of human amniotic mesenchymal stem cells (hA-MSCs). We previously highlighted a specific microRNA (miRNA) profile in amnion from obese (Ob) pregnant women; here we compared the miRNA expression profile of hA-MSCs isolated from Ob and control (Co) women, aiming to identify any alterations in metabolic pathways that could predispose the newborn to the obese phenotype. Methods: We isolated, at delivery, hA-MSCs from the amnion of 16 Ob- and 7 Co-women with pre-pregnancy body mass index (mean/SEM) of 40.3/1.8 and 22.4/1.0 kg/m2, respectively. hA-MSCs were phenotyped by flow cytometry. Globally, 384 miRNAs were evaluated by the TaqMan Array Human MicroRNA Panel v 1.0 (Applied Biosystems). With the TargetScan program we selected the target genes of the miRNAs differentially expressed in Ob- vs Co-hA-MSCs; further, with the KEGG database, we selected the statistically significant biological pathways. Results: The immunophenotype characterization confirmed the mesenchymal origin of the isolated hA-MSCs. A large percentage of the tested miRNAs, about 61.4% (232/378), was expressed in hA-MSCs, whereas 38.6% (146/378) was not. Most of the expressed miRNAs (89.2%, 207/232) did not differ between Ob- and Co-hA-MSCs and were not further investigated. Conversely, 4.8% of miRNAs (11/232) were higher and 6.0% (14/232) were lower in Ob- vs Co-hA-MSCs. Interestingly, 7/232 miRNAs were obesity-specific, being expressed only in hA-MSCs isolated from obese women. Bioinformatics showed that these miRNAs significantly regulated (P<0.001) genes belonging to several metabolic pathways, e.g. MAPK signalling, actin cytoskeleton, focal adhesion, axon guidance, and insulin signalling. Conclusions: Our preliminary data highlight an altered miRNA profile in Ob- vs Co-hA-MSCs and suggest that an epigenetic miRNA-based mechanism of gene regulation could affect pathways involved in placental growth and function, thereby potentially increasing the newborn's risk of metabolic diseases in adult life.
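The expressed / not-expressed / differential tallies reported above can be reproduced by a simple per-miRNA classification. A minimal sketch follows; the Ct detection threshold of 35 and the 2-fold change cut-off are illustrative assumptions, not values stated in the abstract.

```python
# Hedged sketch: classify one array-card miRNA into the categories used above
# (not expressed; obesity-specific; higher, lower, or unchanged in Ob vs Co).
# CT_DETECT and FOLD_CUTOFF are assumed thresholds, not the study's own.
CT_DETECT = 35.0      # assumed detection limit: Ct above this = not expressed
FOLD_CUTOFF = 2.0     # assumed differential-expression fold-change cut-off

def classify_mirna(ct_ob, ct_co):
    """Classify a miRNA from its mean Ct in obese (Ob) and control (Co) hA-MSCs."""
    ob_expr = ct_ob < CT_DETECT
    co_expr = ct_co < CT_DETECT
    if not ob_expr and not co_expr:
        return "not_expressed"
    if ob_expr and not co_expr:
        return "obesity_specific"          # detected only in Ob samples
    # lower Ct means higher expression; fold change in Ob vs Co = 2**(dCt)
    fold = 2.0 ** (ct_co - ct_ob) if ob_expr else 0.0
    if fold >= FOLD_CUTOFF:
        return "higher_in_ob"
    if fold <= 1.0 / FOLD_CUTOFF:
        return "lower_in_ob"
    return "unchanged"

print(classify_mirna(30.0, 36.0))   # detected only in Ob -> obesity_specific
```

Applied across all 378 evaluable assays, such a rule yields the category counts (232 expressed, 7 obesity-specific, etc.) that the abstract reports.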

Keywords: hA-MSCs, obesity, miRNA, biosystem

Procedia PDF Downloads 526
801 Impact of Helicobacter pylori Infection on Colorectal Adenoma-Colorectal Carcinoma Sequence

Authors: Jannis Kountouras, Nikolaos Kapetanakis, Stergios A. Polyzos, Apostolis Papaeftymiou, Panagiotis Katsinelos, Ioannis Venizelos, Christina Nikolaidou, Christos Zavos, Iordanis Romiopoulos, Elena Tsiaousi, Evangelos Kazakos, Michael Doulberis

Abstract:

Background & Aims: Helicobacter pylori infection (Hp-I) has been recognized as a substantial risk agent involved in gastrointestinal (GI) tract oncogenesis by stimulating cancer stem cells (CSCs), oncogenes, and immune surveillance processes, and by triggering GI microbiota dysbiosis. We aimed to investigate the possible involvement of active Hp-I in the sequence: chronic inflammation – adenoma – colorectal cancer (CRC) development. Methods: Four pillars were investigated: (i) endoscopic and conventional histological examinations of patients with CRC or colorectal adenomas (CRA) versus controls to detect the presence of active Hp-I; (ii) immunohistochemical determination of the presence of Hp; expression of CD44, an indicator of CSCs and/or bone marrow-derived stem cells (BMDSCs); and expression of the oncogene Ki67 and the anti-apoptotic Bcl-2 protein; (iii) expression of CD45, an indicator of local immune surveillance (assessing mainly T and B lymphocytes); and (iv) correlation of the studied parameters with the presence or absence of Hp-I. Results: Among 50 patients with CRC, 25 with CRA, and 10 controls, a significantly higher presence of Hp-I was found in the CRA (68%) and CRC (84%) groups compared with controls (30%). The presence of Hp-I with accompanying immunohistochemical expression of CD44 in biopsy specimens was revealed in a high proportion of patients with CRA associated with moderate/severe dysplasia (88%) and CRC patients with a moderate/severe degree of malignancy (91%). Comparable results were also obtained for Ki67, Bcl-2, and CD45 immunohistochemical expression.
Concluding Remarks: Hp-I appears to be involved in the sequence CRA – dysplasia – CRC, much as in upper GI tract oncogenesis, through several pathways. Beyond Hp-I-associated insulin resistance, the major underlying mechanism of the metabolic syndrome (MetS) that increases the risk of colorectal neoplasms, as implied by other Hp-I-related MetS pathologies such as non-alcoholic fatty liver disease and upper GI cancer, the disturbance of the normal GI microbiota (i.e., dysbiosis) and the formation of an irritative biofilm could contribute to perpetual inflammatory damage of the upper GI tract and colon mucosa, stimulating CSCs or recruiting BMDSCs and affecting oncogenes and immune surveillance processes. Further large-scale studies with a pathophysiological perspective are necessary to demonstrate this relationship in depth.
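The group differences reported above (84% of 50 CRC, 68% of 25 CRA, 30% of 10 controls) can be checked with a standard chi-square test of independence. The abstract does not state which test was used, so the sketch below is only an illustration, with counts back-calculated from the stated percentages.

```python
# Hedged sketch: hand-rolled chi-square test of independence on the Hp-I
# positive/negative counts per group. Counts are derived from the reported
# percentages; the study's actual statistical method is not specified.
def chi_square(table):
    """table: rows of [positive, negative] counts per group -> chi2 statistic."""
    col_tot = [sum(row[j] for row in table) for j in range(2)]
    n = sum(col_tot)
    chi2 = 0.0
    for row in table:
        row_tot = sum(row)
        for j, obs in enumerate(row):
            exp = row_tot * col_tot[j] / n   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

hp_table = [[42, 8],   # CRC:      84% Hp-positive of 50
            [17, 8],   # CRA:      68% Hp-positive of 25
            [3, 7]]    # controls: 30% Hp-positive of 10
stat = chi_square(hp_table)
# df = (3-1)*(2-1) = 2; the 5% critical value is 5.991
print(f"chi2 = {stat:.2f}, significant at 5%: {stat > 5.991}")
```

With these counts the statistic is well above the critical value, consistent with the abstract's claim of a significantly higher Hp-I presence in the neoplastic groups.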

Keywords: Helicobacter pylori, colorectal cancer, colorectal adenomas, gastrointestinal oncogenesis

Procedia PDF Downloads 145
800 Between Leader-Member Exchange and Toxic Leadership: A Theoretical Review

Authors: Aldila Dyas Nurfitri

Abstract:

Nowadays, leadership has become one of the main issues in forming organizations and even countries. The concept of a social contract between leaders and subordinates is one explanation of the leadership process: the interests of the two parties are not always the same, but they must work together to achieve both sets of goals. From this concept comes the Leader-Member Exchange Theory (LMX Theory), which assumes that leadership is a process of reciprocal social interaction between leaders and their subordinates. High-quality LMX relationships are characterized by strong support, informal supervision, confidence, and enabled power negotiation, whereas low-quality LMX relationships are described by low support, extensive formal supervision, little or no participation of subordinates in decision-making, and less confidence in, and attention from, the leader. The application of a formal supervision system in low-LMX behavior is in line with the strict controls of the toxic leadership model. Toxic leaders feel they must control all aspects of the organization at all times. Leaders with this leadership model do not give autonomy to their staff. This behavior causes stagnation and creates a resistant organizational culture. In Indonesia, the pattern of toxic leadership later evolved into a dysfunctional system that is growing rapidly. One consequence is the emergence of corrupt behavior. According to Kellerman, corruption is a pattern in which leaders and some subordinates lie, cheat, or steal to a degree that goes beyond the norm, putting self-interest above the common good. Corruption data in Indonesia from ICW research in 2012 showed that the local government sector ranked first with 177 cases, followed by state or local enterprises with 41 cases.
LMX is defined as the quality of the relationship between superiors and subordinates, which has implications for the effectiveness and progress of the organization. The theory assumes that leadership is a process of reciprocal social interaction between leaders and their followers, characterized by a number of dimensions, such as affection, loyalty, contribution, and professional respect. Toxic leadership, meanwhile, is dysfunctional leadership of an organization by someone who is unable to adjust, lacks integrity, and is malevolent and full of discontent, marked by a number of characteristics: self-centeredness, exploiting others, controlling behavior, disrespecting others, suppressing employee innovation and creativity, and inadequate emotional intelligence. Leaders who score high on self-centeredness, exploiting others, controlling behavior, and disrespecting others tend to form low-quality LMX relationships with subordinates, compared with leaders who score low on these traits. In contrast, the suppression of employee innovation and creativity and inadequate emotional intelligence tend not to have a direct effect on the quality of LMX.

Keywords: leader-member exchange, toxic leadership, leadership

Procedia PDF Downloads 487
799 Assessing Autism Spectrum Disorders (ASD) Challenges in Young Children in Dubai: A Qualitative Study, 2016

Authors: Kadhim Alabady

Abstract:

Background: Autism poses a particularly large public health challenge and a lifelong challenge of a different kind for many families. Purpose: It is therefore important to understand what the key challenges are and how to improve the lives of children affected by autism in Dubai. Method: We used a qualitative methodology, performing structured in-depth interviews and focus groups with mental health professionals working at Al Jalila Hospital (AJH), Dubai Autism Centre (DAC), Dubai Rehabilitation Centre for Disabilities, Latifa Hospital, and private-sector healthcare (PSH) providers. In addition, we took a quantitative approach to estimate ASD prevalence, since incidence data are lacking in the absence of a registry; ASD estimates are based on research from national and international documents. This mixed approach was applied to increase the validity of the findings by using a variety of data collection techniques, in order to explore issues that might not be highlighted through one method alone. Key findings: Autism is the most common of the pervasive developmental disorders. Dubai Autism Center estimates it affects 1 in 146 births (0.68%). Applying these estimates to the total number of births in Dubai in 2014, approximately 199 children (58 Nationals and 141 Non-Nationals) would be affected by autism at some stage. 16.4% of children (through their families) first seek help for ASD assessment in the 6–18+ age group. It is critical to understand and address the factors behind late-stage diagnosis, as ASD can be diagnosed much earlier, and to establish how many of these later presenters are actually diagnosed with ASD. ASD is a public health concern in Dubai, and families do not consult GPs for early diagnosis for a variety of reasons, including cultural ones.
Recommendations: Effective school health strategies are needed, implemented by nurses who are qualified and experienced in identifying children with ASD. The DAC needs to identify and develop closer links with neurologists specializing in autism, both for collaboration and for referrals: autism can be attributed to many factors, some of them neurological, yet currently families who need their child to see a neurologist must search independently among the many available in Dubai, who are not necessarily specialists in autism. GPs should be trained to aid early diagnosis of autism; since not all GPs are trained to make such assessments, awareness should be raised about where to send families for a complete assessment and the necessary support. There is an urgent need for an adult autism center for when individuals leave the safe environment of school at 18 years; they require a day center or suitable job training/placements where appropriate. Further studies are needed to cover the needs of people with an autism spectrum disorder (ASD).
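The prevalence arithmetic in the findings above is a one-line calculation: apply the 1-in-146 estimate to a birth cohort. A minimal sketch follows; the 2014 birth count below is back-calculated from the reported ~199 predicted cases, not taken from an official source.

```python
# Hedged sketch of the abstract's prevalence arithmetic. PREVALENCE is the
# Dubai Autism Center estimate quoted above; the birth cohort size is an
# assumed figure (199 * 146), not an official statistic.
PREVALENCE = 1 / 146          # ~0.68% of births

def expected_asd_cases(births):
    """Expected number of children affected by ASD in a given birth cohort."""
    return round(births * PREVALENCE)

births_2014 = 29_054          # assumed cohort size, back-calculated
print(expected_asd_cases(births_2014))   # -> 199, matching the reported figure
```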

Keywords: autism spectrum disorder, autism, pervasive developmental disorders, incidence

Procedia PDF Downloads 219
798 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the prediction of purely empirical models to the region in which they have been calibrated. This paper presents an alternative that focuses on using in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected across the entire operating region of the engine and from a predictive combustion model built in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO); the oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered. Several statistical methods are used to construct the model, including individual and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported: substantial numbers of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT) are also considered in the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its advantages, such as high accuracy and robustness across operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform on which the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
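The semi-empirical structure described above, thermal NOx driven by burned-zone temperature and O2 concentration, is often written as an Arrhenius-type term multiplied by an oxygen term. A minimal sketch follows; the coefficients A, Ta, and b are illustrative placeholders, not the calibrated values from the paper.

```python
# Hedged sketch of a semi-empirical thermal-NOx correlation:
#   NOx ~ A * exp(-Ta / T_burn) * O2**b
# All three coefficients below are assumed for illustration only.
import math

A, TA, B = 5.0e9, 38_000.0, 0.5   # assumed pre-factor, activation temp [K], O2 exponent

def nox_ppm(t_burn_k, o2_frac):
    """NOx estimate from burned-zone temperature [K] and O2 mole fraction."""
    return A * math.exp(-TA / t_burn_k) * o2_frac ** B

# NOx rises steeply with burned-gas temperature; EGR lowers both T and O2:
print(f"{nox_ppm(2200.0, 0.10):.1f} -> {nox_ppm(2500.0, 0.10):.1f} ppm")
```

In the actual model the coefficients would be regressed (individually or via ensembles, as the abstract notes) against the measured steady-state NOx data.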

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 113
797 Organic Matter Distribution in Bazhenov Source Rock: Insights from Sequential Extraction and Molecular Geochemistry

Authors: Margarita S. Tikhonova, Alireza Baniasad, Anton G. Kalmykov, Georgy A. Kalmykov, Ralf Littke

Abstract:

There is high complexity in the pore structure of organic-rich rocks, caused by the combination of inter-particle porosity from inorganic mineral matter and ultrafine intra-particle porosity from both organic matter and clay minerals. Fluids are retained in that pore space, but there are major uncertainties in how and where the fluids are stored and to what extent they are accessible or trapped in 'closed' pores. A large degree of tortuosity may lead to fractionation of organic matter, so that lighter, flexible compounds diffuse to the reservoir whereas more complicated compounds may be locked in place. Additionally, part of the hydrocarbons can be bound to solid organic matter (kerogen) and the mineral matrix during expulsion and migration. Larger compounds can occupy thin channels, so that clogging or oil and gas entrapment will occur. Sequential extraction applying different solvents is a powerful tool for providing more information about the distribution of trapped organic matter. The Upper Jurassic – Lower Cretaceous Bazhenov shale is one of the most petroliferous source rocks in West Siberia, Russia. Given its variable mineral composition, pore space distribution, and thermal maturation, there are high uncertainties in the distribution and composition of organic matter in this formation. To address this issue, the geological and geochemical properties of 30 samples were considered, including mineral composition (XRD and XRF), structure and texture (thin-section microscopy), organic matter content, type, and thermal maturity (Rock-Eval), as well as the molecular composition (GC-FID and GC-MS) of the materials extracted at each step of sequential extraction. Sequential extraction was performed with a Soxhlet apparatus using different solvents, i.e., n-hexane, chloroform, and ethanol-benzene (1:1 v:v), first on core plugs and later on pulverized materials.
The results indicate that the studied samples are mainly composed of type II kerogen, with TOC contents varying from 5 to 25%. Thermal maturity ranged from immature to the late oil window. Whereas clay content decreased with increasing maturity, the amount of silica increased. According to the molecular geochemistry, hydrocarbons stored in open and closed pore space reveal different geochemical fingerprints. The results improve our understanding of hydrocarbon expulsion and migration in the organic-rich Bazhenov shale and therefore allow a better estimation of its hydrocarbon potential.
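Kerogen typing from Rock-Eval data, as referenced above, conventionally uses the hydrogen index HI = 100 * S2 / TOC (mg HC per g TOC). A minimal sketch follows; the S2 value is illustrative, since the abstract reports only TOC (5-25%) and maturity.

```python
# Hedged sketch: the standard Rock-Eval hydrogen index used to support
# kerogen typing. The example S2 value is assumed, not from the study.
def hydrogen_index(s2_mg_per_g, toc_percent):
    """HI in mg HC / g TOC: HI = 100 * S2 / TOC, with S2 in mg HC / g rock."""
    return 100.0 * s2_mg_per_g / toc_percent

# e.g. an immature sample with TOC = 10% and an assumed S2 of 55 mg HC/g rock:
print(hydrogen_index(55.0, 10.0))   # -> 550.0, in the typical type II range
```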

Keywords: Bazhenov formation, bitumen, molecular geochemistry, sequential extraction

Procedia PDF Downloads 169
796 Nanostructured Pt/MnO2 Catalysts and Their Performance for Oxygen Reduction Reaction in Air Cathode Microbial Fuel Cell

Authors: Maksudur Rahman Khan, Kar Min Chan, Huei Ruey Ong, Chin Kui Cheng, Wasikur Rahman

Abstract:

Microbial fuel cells (MFCs) represent a promising technology for simultaneous bioelectricity generation and wastewater treatment. Catalysts are a significant portion of the cost of microbial fuel cell cathodes. Many materials have been tested as aqueous cathodes, but air cathodes are needed to avoid the energy demands of water aeration. The sluggish oxygen reduction reaction (ORR) rate at the air cathode necessitates an efficient electrocatalyst, such as carbon-supported platinum (Pt/C), which is very costly. Manganese oxide (MnO2) is a representative metal oxide that has been studied as a promising alternative electrocatalyst for the ORR and has been tested in air-cathode MFCs. However, MnO2 alone has poor electrical conductivity and low stability. In the present work, the MnO2 catalyst was modified by doping with Pt nanoparticles, the goal being to improve the performance of the MFC with minimum Pt loading. MnO2 and Pt nanoparticles were prepared by hydrothermal and sol-gel methods, respectively, and a wet impregnation method was used to synthesize the Pt/MnO2 catalyst. The catalysts were then used as cathode catalysts in air-cathode cubic MFCs, in which anaerobic sludge was inoculated as the biocatalyst and palm oil mill effluent (POME) was used as the substrate in the anode chamber. The as-prepared Pt/MnO2 was characterized comprehensively through field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and cyclic voltammetry (CV), by which its surface morphology, crystallinity, oxidation state, and electrochemical activity were examined, respectively. XPS revealed an Mn(IV) oxidation state and Pt(0) nanoparticle metal, indicating the presence of MnO2 and Pt. The morphology of Pt/MnO2 observed by FESEM shows that the doping of Pt did not change the needle-like shape of MnO2, which provides a large contacting surface area.
The electrochemically active area of the Pt/MnO2 catalysts increased from 276 to 617 m2/g as the Pt loading increased from 0.2 to 0.8 wt%. The CV results in O2-saturated neutral Na2SO4 solution showed that the MnO2 and Pt/MnO2 catalysts could catalyze the ORR with different catalytic activities. The MFC with Pt/MnO2 (0.4 wt% Pt) as the air-cathode catalyst generated a maximum power density of 165 mW/m3, higher than that of the MFC with the MnO2 catalyst (95 mW/m3). The open-circuit voltage (OCV) of the MFC operated with the MnO2 cathode gradually decreased during 14 days of operation, whereas that of the MFC with the Pt/MnO2 cathode remained almost constant throughout the operation, suggesting the higher stability of the Pt/MnO2 catalyst. Therefore, Pt/MnO2 with 0.4 wt% Pt was successfully demonstrated as an efficient and low-cost electrocatalyst for the ORR in an air-cathode MFC, with higher electrochemical activity and stability and hence enhanced performance.
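Volumetric power densities such as the 165 vs 95 mW/m3 figures above are obtained by sweeping the external load, recording voltage/current pairs, and taking the peak of P = U * I / V. A minimal sketch follows; the sweep points and reactor volume are invented for illustration, not measured values from the study.

```python
# Hedged sketch: peak volumetric power density from an MFC polarization sweep.
# All numbers below are illustrative assumptions, not data from the paper.
def max_power_density(points, volume_m3):
    """points: (voltage_V, current_A) pairs -> peak P = U*I/V in mW/m3."""
    return max(1000.0 * u * i / volume_m3 for u, i in points)

sweep = [(0.55, 0.0), (0.42, 0.0002), (0.30, 0.0004), (0.15, 0.0005)]
vol = 0.001   # assumed 1 L anode chamber
print(round(max_power_density(sweep, vol), 1))   # peak near the middle of the sweep
```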

Keywords: microbial fuel cell, oxygen reduction reaction, Pt/MnO2, palm oil mill effluent, polarization curve

Procedia PDF Downloads 552
795 A Paradigm Shift in the Cost of Illness of Type 2 Diabetes Mellitus over a Decade in South India: A Prevalence-Based Study

Authors: Usha S. Adiga, Sachidanada Adiga

Abstract:

Introduction: Diabetes Mellitus (DM) is one of the most common non-communicable diseases and imposes a large economic burden on the global healthcare system. Cost-of-illness studies in India have assessed the healthcare cost of DM but have certain limitations owing to a lack of standardization of the methods used, improper documentation of data, lack of follow-up, etc. The objective of the study was to estimate the cost of illness of uncomplicated versus complicated type 2 diabetes mellitus in Coastal Karnataka, India. The study also aimed to determine the trend in cost of illness of the disease over a decade. Methodology: A prevalence-based, bottom-up study was carried out in two tertiary care hospitals in Coastal Karnataka after ethical approval. Direct medical costs (annual laboratory costs, pharmacy costs, consultation charges, hospital bed charges, and surgical/intervention costs) of 238 and 340 diabetic patients, respectively, from the two hospitals were obtained from the medical record sections. Patients were divided into six groups: uncomplicated diabetes, diabetic retinopathy (DR), nephropathy (DN), neuropathy (DNeu), diabetic foot (DF), and ischemic heart disease (IHD). Costs incurred in 2008 and 2017 in these groups were compared to study the trend in cost of illness. The Kruskal-Wallis test followed by Dunn's test was used to compare median costs between groups, and Spearman's correlation test was used for correlation studies. Results: Uncomplicated patients had significantly lower costs (p < 0.0001) than the other groups. Patients with IHD had the highest medical expenses (p < 0.0001), followed by DN and DF (p < 0.0001). Annual medical costs were 1.8, 2.76, 2.77, 1.76, and 4.34 times higher in retinopathy, nephropathy, diabetic foot, neuropathy, and IHD patients, respectively, compared with uncomplicated diabetics. Other costs showed a similar pattern.
A positive correlation was observed between costs incurred and duration of diabetes, and a negative correlation between glycemic status and cost incurred. The cost of managing DM in 2017 was 1.4-2.7 times that in 2008. Conclusion: It is evident from the study that the economic burden of diabetes mellitus is substantial, posing a significant financial burden on the healthcare system, individuals, and society as a whole. Strategies are needed to achieve optimal glycemic control and to operationalize regular and early screening for complications, so as to reduce the burden of the disease.
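The cost multipliers reported above can be turned into absolute figures once a baseline is fixed. A minimal sketch follows; the baseline cost of uncomplicated diabetes is an assumed figure, while the multipliers are those stated in the abstract.

```python
# Hedged sketch: applying the reported cost multipliers for each complication
# group to an assumed baseline cost for uncomplicated T2DM (arbitrary units).
BASE = 10_000.0   # assumed annual cost of uncomplicated diabetes

multipliers = {"retinopathy": 1.8, "nephropathy": 2.76, "diabetic_foot": 2.77,
               "neuropathy": 1.76, "IHD": 4.34}

costs = {group: BASE * m for group, m in multipliers.items()}
# IHD carries the highest burden, consistent with the reported ranking:
print(max(costs, key=costs.get))   # -> IHD
```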

Keywords: COI, diabetes mellitus, a bottom up approach, economics

Procedia PDF Downloads 115
794 Micro-Analytical Data of Au Mineralization at Atud Gold Deposit, Eastern Desert, Egypt

Authors: A. Abdelnasser, M. Kumral, B. Zoheir, P. Weihed, M. Budakoglu, L. Gumus

Abstract:

The Atud gold deposit is located in the central part of the Eastern Desert of Egypt and represents vein-type gold mineralization of the Arabian-Nubian Shield in North Africa. This Au mineralization is closely associated with intense hydrothermal alteration haloes along the NW-SE brittle-ductile shear zone at the mined area. This study reports new data on the mineral chemistry of the hydrothermal and metamorphic minerals as well as the geothermobarometry of the metamorphism, and determines the paragenetic interrelationship between Au-bearing sulfides and gangue minerals in the Atud gold mine, using electron microprobe analyses (EMPA). These analyses revealed that the ore minerals associated with gold mineralization are arsenopyrite, pyrite, chalcopyrite, sphalerite, pyrrhotite, tetrahedrite, and gersdorffite-cobaltite. The gold is highly associated with arsenopyrite and As-bearing pyrite as well as sphalerite, with an average of ~70 wt.% Au (+26 wt.% Ag), and occurs either as disseminated grains or along microfractures of arsenopyrite and pyrite in altered wallrocks and mineralized quartz veins. Arsenopyrite occurs as individual rhombic or prismatic zoned grains disseminated in the quartz veins and wallrock and is intergrown with euhedral arsenian pyrite (with ~2 atom % As). The pyrite is As-bearing, occurring as disseminated subhedral or anhedral zoned grains replaced by chalcopyrite in some samples. Inclusions of sphalerite and pyrrhotite are common in the large pyrite grains. Secondary minerals such as sericite, calcite, chlorite, and albite are disseminated either in altered wallrocks or in quartz veins. Sericite is the main secondary and alteration mineral associated with Au-bearing sulfides and calcite. Electron microprobe data show that the muscovite component of the sericite is high in all analyzed flakes (XMs = 0.89 on average), and the phengite content (Mg+Fe a.p.f.u.) varies from 0.10 to 0.55 and from 0.13 to 0.29 in wallrocks and mineralized veins, respectively. Carbonate occurs either as thin veinlets or as disseminated grains in the mineralized quartz veins and/or the wallrocks; it has a higher proportion of calcite (CaCO3) and lower amounts of MgCO3 and FeCO3 in the wallrocks relative to the quartz veins. Chlorite flakes are associated with arsenopyrite; electron probe data reveal that they are generally Fe-rich (FeOt 20.10-20.64 wt.%) and of clinochlore composition, either pycnochlorite or ripidolite, with Al(iv) = 2.30-2.36 a.p.f.u. and 2.41-2.51 a.p.f.u., respectively, and with narrow ranges of estimated formation temperatures of 289-295°C and 301-312°C for pycnochlorite and ripidolite, respectively. Albite accompanies chlorite, with a high Ab content in all analyzed samples (Ab = 95.08-99.20).
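Chlorite formation temperatures like the 289-312°C estimates above are typically obtained from empirical geothermometers based on tetrahedral Al. The sketch below assumes the Kranidiotis & MacLean (1987) calibration, T(°C) = 106 * Al(iv)c + 18 with Al(iv)c = Al(iv) + 0.7 * Fe/(Fe+Mg); the abstract does not state which calibration was actually applied, and the Fe ratio used here is an assumed value.

```python
# Hedged sketch: a chlorite geothermometer of the kind behind the temperature
# estimates above. The calibration choice and the Fe/(Fe+Mg) ratio are
# assumptions, not stated in the abstract.
def chlorite_temp_c(al_iv, x_fe):
    """al_iv: tetrahedral Al (a.p.f.u.); x_fe: Fe/(Fe+Mg) ratio."""
    al_iv_corr = al_iv + 0.7 * x_fe          # Fe-correction of Al(iv)
    return 106.0 * al_iv_corr + 18.0

# e.g. pycnochlorite with Al(iv) = 2.33 and an assumed Fe-rich ratio of 0.50:
print(round(chlorite_temp_c(2.33, 0.50)))   # close to the reported ~290-300 degC
```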

Keywords: micro-analytical data, mineral chemistry, EMPA, Atud gold deposit, Egypt

Procedia PDF Downloads 324
793 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world's data is generated by the financial industry, with an estimated 708.5 billion global non-cash transactions. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos and centralised among the large corporate entities in the ecosystem that have the resources to do so; smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. This research provides a solution framework that aims to offer a secure decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to them. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the data product available to build a comprehensive view of a data subject's spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers.
This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform's key features is enabling individuals to participate in and manage the personal data being generated about them. The framework was developed as a proof of concept on the Ethereum blockchain, on which an individual can securely manage access to their own personal data and their identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour correlated with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
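The core access-control idea described above, a consumer may only see a subject's records in a provider's listing once that subject has consented, can be sketched off-chain in a few lines. This is a plain-Python analogue of the on-chain market layer, with invented names throughout; the real proof of concept is an Ethereum contract.

```python
# Hedged sketch: consent-gated access in a transactional-data marketplace.
# Class and method names are invented for illustration; they do not reflect
# the actual contract interface of the platform.
class DataMarket:
    def __init__(self):
        self.listings = {}          # listing_id -> provider
        self.consents = set()       # (listing_id, subject) pairs

    def list_data(self, listing_id, provider):
        """A data provider lists a transactional data product."""
        self.listings[listing_id] = provider

    def grant_consent(self, listing_id, subject):
        """A data subject consents to the sale of their records in a listing."""
        self.consents.add((listing_id, subject))

    def can_access(self, listing_id, subject):
        """Consumers see a subject's records only if that subject consented."""
        return listing_id in self.listings and (listing_id, subject) in self.consents

market = DataMarket()
market.list_data("card-tx-2023", provider="bank-a")
market.grant_consent("card-tx-2023", subject="alice")
print(market.can_access("card-tx-2023", "alice"),   # True: consent granted
      market.can_access("card-tx-2023", "bob"))     # False: no consent
```

On chain, the consent set and listing registry would live in contract storage, with payment and lineage tracking layered on the same checks.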

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 73
792 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets

Authors: Debjit Ray

Abstract:

Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, defining the phylogenetic range of HGT, and finding potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete, and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or "long read" sequencing data to accompany short-read paired-end data. To bring down the cost and time of producing assembled genomes and annotating genome features that inform drug resistance and pathogenicity, we are analyzing the genome-assembly performance of data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week) and shorter reads (150 bp paired-end versus 300 bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M) compared with the Illumina MiSeq. Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable, in a few hours, of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds, generally due to repeats (e.g., rRNA genes), are addressed by our gap-closure software, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes.
Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether a particular isolate is likely to have originated from the environment (i.e., could it have evolved from previous isolates?). This can be useful for comparing differences in virulence along or across the phylogenetic tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs, multidrug resistance evolution, and pathogen emergence.
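Assembly quality of the kind described above is commonly summarized by the N50 statistic. The following minimal Python sketch is not part of the authors' pipeline; the function name and inputs are illustrative:

```python
def n50(contig_lengths):
    """Return the N50: the contig length at which contigs of that
    length or longer contain at least half the total assembly size."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

# For contigs of 100, 200, 300 and 400 bp (total 1000 bp),
# the 400 and 300 bp contigs together reach 700 bp >= 500, so N50 = 300.
```

Fewer gaps in a scaffolded assembly translate directly into longer contigs and a higher N50.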

Keywords: genomics, pathogens, genome assembly, superbugs

Procedia PDF Downloads 196
791 Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis

Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli

Abstract:

This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through pulsed-field gel electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in the north-west of Italy. A further goal of this work was to create a regional database to facilitate foodborne outbreak investigation and to detect outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates had previously been identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kauffmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE; analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using BioNumerics version 7.6 software, applying the Dice coefficient with 2% band tolerance and 2% optimization. For each serotype, the PFGE data were compared according to geographical origin and year of isolation. Salmonella strains were identified as follows: S. Derby, n. 34; S. Infantis, n. 38; S. Napoli, n. 40. All the isolates had appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged in the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among strains of S. Derby (n. 30; 88%), S. Infantis (n. 36; 95%) and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in north-western Italy and allowed the creation of a database to detect outbreaks at an early stage. The results therefore confirm that PFGE is a powerful and discriminatory tool for investigating the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. Pulsed-field gel electrophoresis still represents one of the most suitable approaches for characterizing strains, particularly for laboratories where NGS techniques are not available.
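The Dice band-matching coefficient used in this comparison can be illustrated with a minimal Python sketch. This is a simplification: it matches fragment sizes exactly and ignores the 2% band tolerance and optimization that BioNumerics applies, and the band sizes shown are hypothetical:

```python
def dice_coefficient(bands_a, bands_b):
    """Dice similarity between two PFGE band patterns.

    Each pattern is a collection of fragment sizes (kb); the score is
    2 * (number of shared bands) / (total bands in both patterns).
    """
    a, b = set(bands_a), set(bands_b)
    shared = len(a & b)
    return 2 * shared / (len(a) + len(b))

# Two hypothetical patterns sharing two bands out of seven in total:
similarity = dice_coefficient([40, 150, 700, 1100], [40, 150, 900])
```

A pairwise matrix of such scores is what the clustering step (e.g., UPGMA dendrograms in BioNumerics) is built on.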

Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE

Procedia PDF Downloads 105
790 The Epidemiology of Dengue in Taiwan during 2014-15: A Descriptive Analysis of the Severe Outbreaks of Central Surveillance System Data

Authors: Chu-Tzu Chen, Angela S. Huang, Yu-Min Chou, Chin-Hui Yang

Abstract:

Dengue is a major public health concern throughout tropical and sub-tropical regions. Taiwan is located in the Pacific Ocean, spanning the tropical and subtropical zones. The island remains humid throughout the year and receives abundant rainfall, and summer temperatures are very high in southern Taiwan. These conditions are ideal for the growth of dengue vectors and increase the risk of dengue outbreaks. During the first half of the 20th century, there were three island-wide dengue outbreaks (1915, 1931, and 1942). After almost forty years of dormancy, a DEN-2 outbreak occurred in Liuchiu Township, Pingtung County in 1981. Thereafter, more dengue outbreaks of different scales occurred in southern Taiwan. However, there were more than ten thousand dengue cases in 2014 and in 2015. This not only affected human health but also caused widespread social disruption and economic losses. This study describes the epidemiology of dengue in Taiwan, especially the severe outbreak in 2015, and seeks effective interventions in dengue control, including dengue vaccine development for the elderly. Methods: The study used the Notifiable Diseases Surveillance System database of the Taiwan Centers for Disease Control as its data source. All cases were reported with a uniform case definition and confirmed by NS1 rapid diagnosis or laboratory diagnosis. Results: In 2014, Taiwan experienced a serious DEN-1 outbreak with 15,492 locally acquired cases, including 136 cases of dengue hemorrhagic fever (DHF), which caused 21 deaths. However, an even more serious DEN-2 outbreak, with 43,419 locally acquired cases, occurred in 2015. The epidemic occurred mainly in Tainan City (22,760 cases) and Kaohsiung City (19,723 cases) in southern Taiwan. Cases were mainly adults. There were 228 deaths due to dengue infection, and the case fatality rate was 5.25 ‰. 
Their average age was 73.66 years (range 29-96), and 86.84% of them were older than 60 years. Most of them had comorbidities. A review of the clinical manifestations of the 228 fatal cases showed that 38.16% (N=87) were reported with warning signs, while 51.75% (N=118) were reported without warning signs. Among the 87 fatal cases reported as dengue with warning signs, 89.53% were diagnosed with severe dengue and 84% needed intensive care. Conclusion: The year 2015 was characterized by large dengue outbreaks worldwide. The risk of serious dengue outbreaks may increase significantly in the future, and the elderly are the vulnerable group in Taiwan. However, a dengue vaccine was licensed at the end of 2015 for use in people 9-45 years of age living in endemic settings. In addition to research aimed at finding new interventions in dengue control, developing a dengue vaccine for the elderly is very important to prevent severe dengue and deaths.
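The case fatality rate of 5.25 ‰ quoted in the results is a simple ratio; as a quick check, this short Python sketch (the figures are taken directly from the abstract) reproduces it:

```python
def case_fatality_rate_per_mille(deaths, cases):
    """Case fatality rate expressed per 1,000 cases (per mille)."""
    return 1000 * deaths / cases

# 2015 DEN-2 outbreak: 228 deaths among 43,419 locally acquired cases.
cfr = case_fatality_rate_per_mille(228, 43_419)
# round(cfr, 2) gives 5.25
```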

Keywords: case fatality rate, dengue, dengue vaccine, the elderly

Procedia PDF Downloads 280
789 Intrinsic Contradictions in Entrepreneurship Development and Self-Development

Authors: Revaz Gvelesiani

Abstract:

The problem of compliance between state economic policy and the entrepreneurial policy of businesses is primarily manifested in contradictions related to the congruence between entrepreneurship development and self-development strategies. Among the various types (financial, monetary, social, etc.) of state economic policy aimed at the development of entrepreneurship, economic order policy is of special importance. Its goal is to set the framework for both public and private economic activities and to achieve coherence between the societal value system and the formation of the economic order framework. Economic order policy, in its turn, involves an intrinsic contradiction between the social and the competitive order. Competitive order is oriented on the principle of success, while social order is oriented on the criteria of need satisfaction, which contradicts, at least partly, the principles of success. Thus, within economic order policy, on the one hand, the state makes efforts to form the social order and expand its frontiers, while, on the other hand, the market is determined to establish a functioning competitive order and ensure its realization. Locating adequate spaces for, and setting a rational border between, state (social order) and private (competitive order) activities represents a phenomenon of decisive importance from the standpoint of entrepreneurship development strategy. In countries where the above-mentioned spaces and borders are “set” correctly, entrepreneurship agents (small, medium-sized and large businesses) achieve great success by seizing the respective segments and maintaining leading positions in the internal, European and world markets for a long time. As for the entrepreneurship self-development strategy, above all, it involves: market identification; interactions with consumers; continuous innovation; competition strategy; relationships with partners; a new management philosophy, etc. 
The analysis of compliance between entrepreneurship strategy and entrepreneurship culture should be the reference point for any kind of internationalization, in order to avoid shocks of a cultural nature and economic backwardness. Stabilization can be achieved only when employee actions reflect the existing culture and the new content of culture (the targeted culture) is turned into the implicit consciousness of the personnel. Future leaders should learn how to manage different cultures. Entrepreneurship can be managed successfully if its strategy and culture are coherent. However, enterprises (organizations) not infrequently show various forms of violation of both personal and team actions. If personal and team non-observances come to influence the culture, this leads to a global destruction of the system and structure. This is the entrepreneurship culture pathology that makes it difficult to achieve compliance between entrepreneurship strategy and entrepreneurship culture. Thus, the intrinsic contradictions of entrepreneurship development and self-development strategies complicate the task of reaching compliance between state economic policy and company entrepreneurship policy: on the one hand, there is a contradiction between the social and the competitive order within economic order policy, and, on the other hand, a contradiction exists between entrepreneurship strategy and entrepreneurship culture within entrepreneurship policy.

Keywords: economic order policy, entrepreneurship, development contradictions, self-development contradictions

Procedia PDF Downloads 328
788 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

The need to apply Software Engineering (SE) processes is both of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, risks are associated with this, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open environment that may be unattended, in addition to resource constraints in terms of processing, storage and power, places such networks under stringent limitations on lifetime (i.e., period of operation) and security. The importance of WSN applications in many military and civilian domains has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers. Those solutions are software-based network Intrusion Detection Systems (IDSs). However, it has been observed that those developed IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption in WSNs caused by IDSs. In other words, not all requirements are implemented and then traced. 
Moreover, not all requirements are identified, nor are they all satisfied, since some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing IDSs. Consequently, this has resulted in inadequate requirement management, processing, validation, and verification of requirement quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads into industrial applications. Therefore, this paper studies the importance of Requirement Engineering in developing IDSs. It also examines a set of existing IDSs and illustrates the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding the application of requirement engineering to systems so as to deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy and reliability.

Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN

Procedia PDF Downloads 322
787 Biotechnological Methods for the Grouting of the Tunneling Space

Authors: V. Ivanov, J. Chu, V. Stabnikov

Abstract:

Different biotechnological methods for the production of construction materials and for the performance of construction processes in situ are developing within a new scientific discipline, Construction Biotechnology. The aim of this research was to develop and test new biotechnologies and biotechnological grouts for minimizing the hydraulic conductivity of fractured rocks and porous soil. This problem is essential for minimizing the flow of groundwater into construction sites, into the tunneling space before and after excavation, and inside levees, as well as for stopping water seepage from aquaculture ponds, agricultural channels, radioactive waste or toxic chemical storage sites, landfills, and polluted soils. Conventional fine or ultrafine cement grouts or chemical grouts have such restrictions as high cost, viscosity and sometimes toxicity, but biogrouts, which are based on microbial or enzymatic activities and inexpensive inorganic reagents, could be more suitable in many cases because of their lower cost and low or zero toxicity. Due to these advantages, the development of biotechnologies for biogrouting is growing rapidly. However, the currently most popular biogrout, which is based on the activity of urease-producing bacteria initiating crystallization of calcium carbonate from a calcium salt, has such disadvantages as the production of toxic ammonium/ammonia and the development of high pH. Therefore, the aim of our studies was the development and testing of new biogrouts that are environmentally friendly and inexpensive enough for large-scale geotechnical, construction, and environmental applications. New microbial biotechnologies have been studied and tested in sand columns, fissured rock samples, a 1 m³ tank with sand, and a pack of stone sheets, which served as models of porous soil and fractured rocks. 
Several biotechnological methods showed positive results: 1) biogrouting using sequential desaturation of sand by injection of denitrifying bacteria and medium, followed by biocementation using urease-producing bacteria, urea and a calcium salt, decreased the hydraulic conductivity of sand to 2×10⁻⁷ m s⁻¹ after 17 days of treatment and consumed almost three times less reagent than conventional calcium- and urea-based biogrouting; 2) biogrouting using slime-producing bacteria decreased the hydraulic conductivity of sand to 1×10⁻⁶ m s⁻¹ after 15 days of treatment; 3) biogrouting of rocks with fissures 65×10⁻⁶ m wide, using a calcium bicarbonate solution produced from CaCO₃ and CO₂ under 30 bar pressure, decreased the hydraulic conductivity of the fissured rocks to 2×10⁻⁷ m s⁻¹ after 5 days of treatment. These bioclogging technologies could have many advantages over conventional construction materials and processes and can be used in geotechnical engineering, agriculture and aquaculture, and for environmental protection.
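Hydraulic conductivities like those reported above are typically derived from Darcy's law. A minimal Python sketch of a constant-head permeameter calculation follows; the specimen geometry and flow figures are illustrative, not the authors' data:

```python
def hydraulic_conductivity(flow_rate, length, area, head_difference):
    """Constant-head permeameter, Darcy's law: K = Q * L / (A * dh).

    flow_rate        Q, m^3/s, steady flow through the specimen
    length           L, m, specimen length along the flow path
    area             A, m^2, specimen cross-sectional area
    head_difference  dh, m, hydraulic head drop across the specimen
    """
    return flow_rate * length / (area * head_difference)

# Illustrative numbers on the scale of biogrouted sand (~1e-7 m/s):
k = hydraulic_conductivity(flow_rate=1e-8, length=0.1,
                           area=0.01, head_difference=0.5)
```

Comparing K before and after treatment is how the reductions quoted in the abstract would be quantified.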

Keywords: biocementation, bioclogging, biogrouting, fractured rocks, porous soil, tunneling space

Procedia PDF Downloads 207
786 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until its closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada’s largest contaminated sites. Since the closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated; these included a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery. 
Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability during three years of remediation compared to baseline, even when different sampling methodologies were used, except for significant increases in total PAH concentrations detected during one year of remediation monitoring. The data confirmed the effectiveness of the mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.

Keywords: contaminated sediment, monitoring, recovery, remediation

Procedia PDF Downloads 235
785 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures that store the geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies produce high-resolution sampling, which leads to high-resolution meshes. While high-resolution meshes render with better quality and hence are often preferred, the processing and storage of 3D meshes are currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies such as WebGL and WebVR has enabled high-fidelity rendering of huge meshes. However, there remains a gap in the ability to stream huge meshes to native client and browser applications due to high network latency, and there is an inherent delay in loading WebGL pages for large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One solution conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach is a two-step scheme for random-accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for their compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to revolutionize the way e-commerce and virtual reality technology work on consumer electronic devices: objects can be compressed on the server and transmitted over the network, and progressive decompression can be performed on the client device before rendering. 
The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for widely used activities such as virtual reality shopping, watching movies, and playing games. Our experiments and comparisons with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (a 20-22% gain over the serial implementation), and quality of user experience in the web browser.
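The partition-then-compress step described above can be sketched in Python. This is a toy stand-in under stated assumptions: zlib over pickled vertex chunks replaces the actual progressive mesh codec, and all function names are illustrative, not the authors' implementation:

```python
import pickle
import zlib
from multiprocessing import Pool

def partition_mesh(vertices, n_parts):
    """Split a vertex list into roughly equal contiguous sub-meshes."""
    size = (len(vertices) + n_parts - 1) // n_parts
    return [vertices[i:i + size] for i in range(0, len(vertices), size)]

def compress_submesh(submesh):
    """Compress one sub-mesh; zlib stands in for the real mesh codec."""
    return zlib.compress(pickle.dumps(submesh))

def compress_parallel(vertices, n_parts=4):
    """Data parallelism: compress each sub-mesh in its own worker."""
    parts = partition_mesh(vertices, n_parts)
    with Pool(processes=n_parts) as pool:
        return pool.map(compress_submesh, parts)
```

Because each sub-mesh is compressed independently, the client can fetch and decompress any one of them on its own, which is what makes the scheme random-accessible.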

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 168
784 Harvesting Value-added Products Through Anodic Electrocatalytic Upgrading Intermediate Compounds Utilizing Biomass to Accelerating Hydrogen Evolution

Authors: Mehran Nozari-Asbemarz, Italo Pisano, Simin Arshi, Edmond Magner, James J. Leahy

Abstract:

Integrating electrolytic synthesis with renewable energy makes it feasible to address urgent environmental and energy challenges. Conventional water electrolyzers produce H₂ and O₂ concurrently, demanding additional gas separation procedures to prevent contamination of the H₂ with O₂. Moreover, the oxygen evolution reaction (OER), which is sluggish and has a low overall energy conversion efficiency, does not deliver a significant value product at the electrode surface. Compared to conventional water electrolysis, integrating electrolytic hydrogen generation from water with thermodynamically more advantageous aqueous organic oxidation processes can increase energy conversion efficiency and create value-added compounds instead of oxygen at the anode. One strategy is to use renewable and sustainable carbon sources from biomass, which has a large annual production capacity and presents a significant opportunity to supplement carbon sourced from fossil fuels. Numerous catalytic techniques have been researched in order to utilize biomass economically. Because of its safe operating conditions, excellent energy efficiency, and reasonable control over production rate and selectivity through electrochemical parameters, electrocatalytic upgrading stands out as an appealing choice among the many biomass refinery technologies. We therefore propose a broad framework for coupling H₂ generation from water splitting with oxidative biomass upgrading processes. Representative biomass targets were considered for oxidative upgrading using a hierarchically porous CoFe-MOF/LDH @ Graphite Paper bifunctional electrocatalyst, including glucose, ethanol, benzyl alcohol, furfural, and 5-hydroxymethylfurfural (HMF). The potential required to support 50 mA cm⁻² is considerably lower (by ~380 mV) than the potential required for the OER. All of these compounds can be oxidized to yield liquid byproducts with economic benefit. 
The electrocatalytic oxidation of glucose to the value-added products gluconic acid, glucuronic acid, and glucaric acid was examined in detail. The cell potential for combined H₂ production and glucose oxidation was substantially lower than for water splitting (1.44 V(RHE) vs. 1.82 V(RHE) for 50 mA cm⁻²). Moreover, the oxidation byproduct at the anode was significantly more valuable than O₂, taking advantage of the more favorable glucose oxidation compared to the OER. Overall, such a combination of the HER and oxidative biomass valorization using electrocatalysts prevents the production of potentially explosive H₂/O₂ mixtures and produces high-value products at both electrodes with a lower voltage input, thereby increasing the efficiency and activity of electrocatalytic conversion.
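The practical benefit of the lower cell voltage can be put in energy terms. The following Python sketch is a back-of-the-envelope estimate using the two cell voltages quoted in the abstract and assuming two electrons transferred per H₂ molecule; it is not the authors' calculation:

```python
FARADAY = 96485.0  # Faraday constant, C/mol

def electrical_energy_kj_per_mol_h2(cell_voltage):
    """Electrical energy to evolve one mole of H2 at a given cell voltage.

    Two electrons per H2 molecule: E = 2 * F * V, converted J -> kJ.
    """
    return 2 * FARADAY * cell_voltage / 1000.0

saving = (electrical_energy_kj_per_mol_h2(1.82)
          - electrical_energy_kj_per_mol_h2(1.44))
# roughly 73 kJ of electrical energy saved per mole of H2
```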

Keywords: biomass, electrocatalytic, glucose oxidation, hydrogen evolution

Procedia PDF Downloads 93
783 Challenges in Environmental Governance: A Case Study of Risk Perceptions of Environmental Agencies Involved in Flood Management in the Hawkesbury-Nepean Region, Australia

Authors: S. Masud, J. Merson, D. F. Robinson

Abstract:

The management of environmental resources requires the engagement of a range of stakeholders, including public and private agencies and different community groups, to implement sustainable conservation practices. A challenge that is often ignored is the analysis of the agencies involved and their power relations. One barrier identified is the difference in risk perceptions among the agencies involved, which leads to disjointed efforts in assessing and managing risks. Wood et al. (2012) explain that it is important to have an integrated approach to risk management in which decision makers address stakeholder perspectives; this is critical for an effective risk management policy. This abstract is part of PhD research that looks into barriers to flood management under a changing climate and intends to identify bottlenecks that create maladaptation. Experiences are drawn from international practice in the UK and examined in the context of Australia by exploring flood governance in a highly flood-prone region of Australia, the Hawkesbury-Nepean catchment, as a case study. Several aspects of governance and management are explored in this research: (i) the complexities created by the way different agencies are involved in assessing flood risks; (ii) different perceptions of the acceptable flood risk level; (iii) perceptions of community engagement in defining the acceptable flood risk level; (iv) views on a holistic flood risk management approach; and (v) the challenges of a centralised information system. The study concludes that the complexity of managing a large catchment is exacerbated by differences in the way professionals perceive the problem. 
This has led to: (a) different standards for acceptable risks; (b) inconsistent attempts to set up a regional-scale flood management plan beyond jurisdictional boundaries; (c) the absence of a regional-scale agency with a licence to share and update information; and (d) a lack of forums for dialogue with insurance companies to ensure an integrated approach to flood management. The research takes the Hawkesbury-Nepean catchment as a case example and draws on literary evidence from around the world. In addition, conclusions were extrapolated from eighteen semi-structured interviews with agencies involved in flood risk management in the Hawkesbury-Nepean catchment of NSW, Australia. The outcome of this research is a better understanding of the complexity of assessing risks against a rapidly changing climate, contributing towards the development of effective risk communication strategies and thus enabling better management of floods and an increased level of support from insurance companies, real-estate agencies, state and regional risk managers, and the affected communities.

Keywords: adaptive governance, flood management, flood risk communication, stakeholder risk perceptions

Procedia PDF Downloads 285
782 Strategies of Translation: Unlocking the Secret of 'Locksley Hall'

Authors: Raja Lahiani

Abstract:

'Locksley Hall' is a poem that Lord Alfred Tennyson (1809-1892) published in 1842. It is believed to be his first attempt to face as a poet some of the most painful of his experiences, as it is a study of his rising out of sickness into health, conquering his selfish sorrow by faith and hope. So far, in Victorian scholarship as in modern criticism, 'Locksley Hall' has been studied and approached as a canonical Victorian English poem. The aim of this project is to prove that some strategies of translation were used in this poem in such a way as to guarantee its assimilation into the English canon and hence efface to a large extent its Arabic roots. In its relationship with its source text, 'Locksley Hall' is at the same time mimetic and imitative. As part of the terminology used in translation studies, ‘imitation’ means almost the exact opposite of what it means in ordinary English. By adopting an imitative procedure, a translator would do something totally different from the original author, wandering far and freely from the words and sense of the original text. An imitation is thus aimed at an audience which wants the work of the particular translator rather than the work of the original poet. Hallam Tennyson, the poet’s biographer, asserts that 'Locksley Hall' is a simple invention of place, incidents, and people, though he recalls the poet claiming that Sir William Jones’ prose translation of the Mu‘allaqat (pre-Islamic poems) gave him the idea of the poem. A comparative study would show that 'Locksley Hall' mirrors a great deal of Tennyson’s biography and hence is not a simple invention of details, as asserted by his biographer. It would be challenging to prove that 'Locksley Hall' shares so many details with the Mu‘allaqat, as declared by Tennyson himself, that it needs to be studied as an imitation of the Mu‘allaqat of Imru’ al-Qays and ‘Antara in addition to its being a poem in its own right. 
Thus, the main aim of this work is to unveil the imitative and mimetic strategies used by Tennyson in his composition of 'Locksley Hall.' It is equally important that this project researches the acculturating assimilative tools used by the poet to root his poem in its Victorian English literary, cultural and spatiotemporal settings. This work adopts a comparative methodology. Comparison is done at different levels. The poem will be contextualized in its Victorian English literary framework. Alien details related to structure, socio-spatial setting, imagery and sound effects shall be compared to Arabic poems from the Mu‘allaqat collection. This would determine whether the poem is a translation, an adaption, an imitation or a genuine work. The ultimate objective of the project is to unveil in this canonical poem a new dimension that has for long been either marginalized or ignored. By proving that 'Locksley Hall' is an imitation of classical Arabic poetry, the project aspires to consolidate its literary value and open up new gates of accessing it.

Keywords: comparative literature, imitation, Locksley Hall, Alfred, Lord Tennyson, translation, Victorian poetry

Procedia PDF Downloads 199
781 Importance of Different Spatial Parameters in Water Quality Analysis within Intensive Agricultural Area

Authors: Marina Bubalo, Davor Romić, Stjepan Husnjak, Helena Bakić

Abstract:

Even though European Council Directive 91/676/EEC, known as the Nitrates Directive, was adopted in 1991, the issue of water quality preservation in areas of intensive agricultural production still persists all over Europe. High nitrate nitrogen concentrations in surface water and groundwater originating from diffuse sources are one of the most important environmental problems in modern intensive agriculture. The fate of nitrogen in soil, surface water and groundwater in agricultural areas is mostly affected by anthropogenic activity (i.e. agricultural practice) and by hydrological and climatological conditions. The aim of this study was to identify the impact of land use, soil type, soil vulnerability to pollutant percolation, and natural aquifer vulnerability on nitrate occurrence in surface water and groundwater within an intensive agricultural area. The study was set in Varaždin County (northern Croatia), which is under the significant influence of the large rivers Drava and Mura; as a result, the entire area is dominated by alluvial soil with a shallow active profile, mainly on a gravel base. The negative agricultural impact on water quality in this area is evident; therefore, half of the county forms part of the delineated nitrate vulnerable zones (NVZ). Data on water quality were collected from 7 surface-water and 8 groundwater monitoring stations in the county. A recent study of the area also included a detailed inventory of agricultural production and fertilizer use, with the aim of producing a new agricultural land use database as one of the dominant parameters. The analysis of this database, done using ArcGIS 10.1, showed that 52.7% of the total county area is agricultural land and that 59.2% of the agricultural land is used for intensive agricultural production. On the other hand, 56% of the soil within the county is classified as vulnerable to pollutant percolation. The situation is similar for natural aquifer vulnerability; the northern part of the county ranges from high to very high aquifer vulnerability.
Statistical analysis of the water quality data was done using SPSS 13.0. Cluster analysis grouped both the surface-water and the groundwater stations into two groups according to nitrate nitrogen concentrations. The mean nitrate nitrogen concentration in surface water ranges from 4.2 to 5.5 mg/l in group 1 and from 24 to 42 mg/l in group 2. The results are similar, but evidently higher, in the groundwater samples; the mean nitrate nitrogen concentration ranges from 3.9 to 17 mg/l in group 1 and from 36 to 96 mg/l in group 2. ANOVA confirmed the statistical significance of the differences between the groups of stations. The previously listed parameters (land use, soil type, etc.) were used in a factorial correspondence analysis (FCA) to detect the importance of each parameter for local water quality. Since these parameters mostly cannot be altered, there is an obvious need for more precise and better-adapted land management under such conditions.
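The two-group clustering of monitoring stations by nitrate nitrogen concentration can be sketched in a few lines. The values below are illustrative stand-ins spanning the surface-water ranges reported above; the actual SPSS station dataset is not reproduced here, and the initial centroids are an assumption chosen for determinism:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Illustrative surface-water nitrate nitrogen concentrations (mg/l),
# spanning the two reported ranges (group 1: 4.2-5.5, group 2: 24-42).
nitrate = np.array([4.2, 4.8, 5.5, 24.0, 31.0, 42.0]).reshape(-1, 1)

# Deterministic k-means: explicit initial centroids near each range.
init = np.array([[5.0], [30.0]])
centroids, labels = kmeans2(nitrate, init, minit='matrix')

# labels assigns each station to group 0 (low) or group 1 (high);
# centroids holds the mean concentration of each group.
```

In practice, hierarchical clustering (as SPSS typically offers) would give the same two-group split for data this well separated; k-means is used here only as a compact, reproducible sketch.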

Keywords: agricultural area, nitrate, factorial correspondence analysis, water quality

Procedia PDF Downloads 258
780 Photochemical Behaviour of Carbamazepine in Natural Waters

Authors: Fanny Desbiolles, Laure Malleret, Isabelle Laffont-Schwob, Christophe Tiliacos, Anne Piram, Mohamed Sarakha, Pascal Wong-Wah-Chung

Abstract:

Pharmaceuticals in the environment have become a very hot topic in recent years. This interest is related to the large amounts dispensed and to their release in the urine or faeces of treated patients, resulting in their ubiquitous presence in water resources and in wastewater treatment plant (WWTP) effluents. Consequently, many studies have focused on predicting pharmaceuticals' behaviour, in order to assess their fate and impacts in the environment. Carbamazepine is a widely consumed psychotropic pharmaceutical and is thus one of the most commonly detected drugs in the environment. This organic pollutant has proved to be persistent, especially with respect to its non-biodegradability, rendering it recalcitrant to the usual biological treatment processes. Consequently, carbamazepine is removed only marginally in WWTPs, with a maximum abatement rate of 5%, and is then often released into natural surface waters. To better assess the environmental fate of carbamazepine in aqueous media, its photochemical transformation was studied in four natural waters (two French rivers, the Berre salt lagoon, and Mediterranean Sea water) representative of coastal and inland water types. Kinetic experiments were performed under simulated solar irradiation (300 W Xe lamp). The formation of short-lived species was highlighted using chemical traps and nanosecond laser flash photolysis. Transformation by-products were identified by LC-QToF-MS analyses. Carbamazepine degradation was observed after a four-day exposure, and a maximum abatement of 20% was measured, yielding many by-products. Moreover, the formation of hydroxyl radicals (•OH) was evidenced in the waters using terephthalic acid as a probe, considering the photochemical instability of its specific hydroxylated derivative. Correlations were established between the carbamazepine degradation rate, the estimated hydroxyl radical formation, and the chemical contents of the waters.
In addition, laser flash photolysis studies confirmed •OH formation and allowed to evidence other reactive species, such as chloride (Cl2•-)/bromine (Br2•-) and carbonate (CO3•-) radicals in natural waters. Radicals mainly originate from dissolved phase and their occurrence and abundance depend on the type of water. Rate constants between reactive species and carbamazepine were determined by laser flash photolysis and competitive reactions experiments. Moreover, LC-QToF-MS analyses of by-products help us to propose mechanistic pathways. The results will bring insights to the fate of carbamazepine in various water types and could help to evaluate more precisely potential ecotoxicological effects.
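A standard way to reduce such kinetic data, assuming pseudo-first-order photodegradation C(t) = C0·exp(-kt), is a linear fit of ln(C/C0) against time; the slope estimates -k. The rate constant and sampling times below are hypothetical, chosen only so that a four-day exposure yields roughly the 20% abatement reported above:

```python
import numpy as np

# Synthetic decay following pseudo-first-order kinetics; k_true is a
# hypothetical rate constant, not a value from the study.
k_true = 0.06                   # per day
t = np.linspace(0.0, 4.0, 9)    # days, matching a four-day exposure
C0 = 10.0
C = C0 * np.exp(-k_true * t)

# Linear regression of ln(C/C0) on t: the slope estimates -k.
slope, intercept = np.polyfit(t, np.log(C / C0), 1)
k_fit = -slope

abatement_4d = 1.0 - np.exp(-k_fit * 4.0)   # ~0.21 for k = 0.06/day
```

With noisy experimental data, the same fit applies; only the uncertainty on the recovered k grows.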

Keywords: carbamazepine, kinetic and mechanistic approaches, natural waters, photodegradation

Procedia PDF Downloads 377
779 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem for the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. In effect, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of whether an implicit time integration scheme is used. Consequently, some form of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
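The structure of the AGM can be illustrated on a toy stand-in for the NSE: a 1D periodic diffusion equation, which is linear and self-adjoint, so the adjoint transport of the residual reduces to another forward diffusion sweep. This is a sketch of the method's skeleton only (forward solve, adjoint solve of the residual, gradient update); the grid size, step counts and learning rate are illustrative, not the authors' 2D turbulence setup:

```python
import numpy as np

def step(u, alpha=0.2):
    """One explicit diffusion step, u <- u + alpha * Laplacian(u), periodic."""
    return u + alpha * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

def forward(u0, n_steps=20):
    """Integrate the diffusion equation forward over n_steps."""
    u = u0.copy()
    for _ in range(n_steps):
        u = step(u)
    return u

nx = 32
x = np.arange(nx) / nx
u_true = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
v1 = forward(u_true)                 # target field at "t = 1"

# Adjoint gradient descent on J = 0.5*||u1 - v1||^2. The forward map is
# A^N with A symmetric, so dJ/du0 = A^N (u1 - v1): the residual
# transported by the adjoint, here simply another forward sweep.
u0 = np.zeros(nx)
for _ in range(500):
    residual = forward(u0) - v1
    grad = forward(residual)         # adjoint sweep (self-adjoint operator)
    u0 -= 1.0 * grad

J = 0.5 * np.sum((forward(u0) - v1) ** 2)
```

Because diffusion damps high wavenumbers, high modes of u0 are only weakly constrained by v1; this is the 1D shadow of the ill-conditioning that, in turbulence, produces the multitude of local minima the abstract describes.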

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 222
778 Investigation of the Usability of Biochars Obtained from Olive Pomace and Smashed Olive Seeds as Additives for Bituminous Binders

Authors: Muhammed Ertugrul Celoglu, Beyza Furtana, Mehmet Yilmaz, Baha Vural Kok

Abstract:

Biomass, which is considered to be one of the largest renewable energy sources in the world, has the potential to be utilized as a bitumen additive after it is processed by any of a wide variety of thermochemical methods. Furthermore, biomass is renewable over short periods of time and possesses a hydrocarbon structure. These characteristics promote its usability as an additive. One of the most common ways to create materials of significant economic value from biomass is the process of pyrolysis. Pyrolysis is defined as the thermochemical degradation (carbonization) of organic matter at high temperature in an anaerobic environment. The liquid substance resulting from pyrolysis is defined as bio-oil, whereas the solid substance is defined as biochar. Olive pomace is the mildly oily pulp, containing seeds, that remains after olives are pressed and their oil is extracted. As the waste of olive oil factories, it is a significant source of biomass. Because olive pomace is a waste material, it can create problems, just as other wastes do, unless there are appropriate and acceptable areas of utilization. The waste material, which is generated in large amounts, is generally used as fuel and fertilizer. Additives are generally used to improve the properties of bituminous binders, and these are usually expensive, chemically produced materials. The aim of this study is to investigate the usability of the biochars obtained by subjecting olive pomace and smashed olive seeds, both considered waste materials, to pyrolysis as additives in bitumen modification. In this way, new uses will be provided for the waste material, offering both economic and environmental benefits. In this study, olive pomace and smashed olive seeds were used as sources of biomass. Initially, both materials were ground and passed through a No. 50 sieve.
Both of the sieved materials were subjected to pyrolysis (carbonization) at 400 ℃, yielding bio-oil and biochar. The obtained biochars were added to B160/220 grade pure bitumen at rates of 10% and 15%, and modified bitumens were obtained by mixing in a high-shear mixer at 180 ℃ for 1 hour at 2000 rpm. The pure bitumen and the four types of modified bitumen were evaluated by penetration, softening point, rotational viscometer, and dynamic shear rheometer tests, assessing the effects of the additives and of their ratios. According to the test results, both biochar modifications, at both ratios, improved the performance of the pure bitumen. A comparison of the test results of the binders modified with olive pomace biochar and with smashed olive seed biochar revealed no notable difference in their performance.

Keywords: bituminous binders, biochar, biomass, olive pomace, pyrolysis

Procedia PDF Downloads 131
777 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient's radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of the volume CT dose index (CTDIvol), is displayed on the scanner monitor at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate CTDIvol values for a large number of patients during the most frequent CT examinations, to compare the CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and on different scanner models. The output CT dose index measurements were carried out on single-slice and multislice scanners for the available kV, a 5 mm slice thickness, 100 mA, and the FOV combination used. In total, 100 CT scanners were involved in this study. Data on 15,000 examinations of patients who underwent routine head, chest and abdomen CT were collected using a questionnaire sent to a large number of hospitals; of these, 5000 were head, 5000 chest and 5000 abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following the QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergences between the measured and displayed CTDIvol values were 5.2, 8.4, and -5.7 for the selected head, chest and abdomen protocols, respectively.
Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer lengths and at finer resolutions, as permitted by helical and multislice technology. Some of the CT scanners also used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as in the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If the routine scan parameters for head, chest and abdomen procedures were optimized, the dose indices would be optimal and CT doses would be lowered. In the South Indian region, all the CT machines are routinely tested for QA once a year, as per AERB requirements.
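The divergence statistic is straightforward to reproduce. A common convention, assumed here since the abstract does not define it, is the percentage difference of the console-displayed CTDIvol relative to the phantom-measured value; the example numbers are hypothetical, not from the study:

```python
def ctdi_divergence(displayed, measured):
    """Percent divergence of the console-displayed CTDIvol from the
    phantom-measured value; positive means the console over-reports.
    (Sign convention assumed; the abstract does not state it.)"""
    return 100.0 * (displayed - measured) / measured

# Hypothetical single-examination values in mGy (not from the study):
d = ctdi_divergence(displayed=58.0, measured=55.0)   # ~5.45 percent
```

For a protocol, the reported figure would then be the mean of this quantity over all examinations in that protocol group.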

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 255
776 Queer Anti-Urbanism: An Exploration of Queer Space Through Design

Authors: William Creighton, Jan Smitheram

Abstract:

Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In the process, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring's work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and the desire to destroy the city towards a mode of queer critique that counters the normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame, where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a "design as research" methodology, the design outputs become a vehicle for asking how we might live otherwise in architectural space. The design-as-research methodology, a non-linear, iterative process of questioning, designing and reflecting, establishes itself through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship: the projects explore, in turn, the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not meant to privilege the objective, the public and the large-scale; instead, 'intra-scaling' acts as a tool to rethink how scale reproduces normative ideas of the identity of space. There was a queering of scale.
Through this approach, the first result was an installation that brings two people together to co-author space; the installation distorts the sensory experience and forces a more intimate and interconnected encounter, challenging our socialized proxemics: knees might touch. To queer the home, the installation was then used as a drawing device: a tool to study and challenge spatial perception and drawing convention, and a way to process practical information about the site and the existing house; the device became a tool for embracing the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through "private" and "public" to support kinship through communal labour, queer relationality and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way of considering place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and the out-of-place; but is it queer enough?

Keywords: queer, queer anti-urbanism, design as research, design

Procedia PDF Downloads 175
775 Development Project, Land Acquisition and Rehabilitation: A Study of Navi Mumbai International Airport Project, India

Authors: Rahul Rajak, Archana Kumari Roy

Abstract:

Purpose: Development brings about structural change in society. It is essential for the socio-economic progress of society, but it also causes pain to the people who are forced to move from their motherland. Most of the people displaced by development are poor people and tribal communities. Development and displacement are interlinked in the sense that development sometimes leads to the displacement of people. This study focuses mainly on the socio-economic profile of the villages and villagers likely to be affected by the airport project, and it examines the issues of compensation and people's level of satisfaction. Methodology: The study is based on a descriptive design; it is basically an observational and correlational study. Primary data are used. Considering time and resource constraints, 100 people, covering socio-economic and demographic diversities, were interviewed from 6 of the 10 affected villages. Due to the Navi Mumbai International Airport Project, ten villages have to be displaced. Of these ten, this study covers only six: Ulwe, Ganeshpuri, Targhar, Komberbuje, Chincpada and Kopar. All six villages are situated in Raigarh district, in Panvel Taluka, Maharashtra. Findings: The survey revealed that the affected villages have three main castes: Agri, Koli, and Kradi. The migrant population of the villages is negligible. The main occupations of all three castes are agriculture and fishing. People's perceptions revealed that, owing to the establishment of the airport project, they may have more opportunities and scope for development rather than adverse effects, but being forced to leave their motherland has a psychological effect on the villagers. Research limitation: This study is based on only six villages; the scenario of all ten affected villages is not covered by this research.
Practical implication: The scenario of displacement and resettlement signifies more than mere physical relocation. Compensation is not the only hope of the villagers; it gives only short-term relief. There is a need to evolve institutions to protect and strengthen the rights of individuals. Development-induced displacement exposed them to a new reality: the reality of the legality or illegality of their stay on land which belongs to the state. Originality: Mumbai's large population and high degree of industrialization have put land at the centre of any policy implication. This paper demonstrates, through the actual picture gathered from the field, how seriously the affected people have suffered and are still suffering because of the land acquisition for the Navi Mumbai International Airport Project. The whole picture raises the question of how long the government can deny the rights of farmers and agricultural labourers and remain unwilling to establish a balance between democracy and development.

Keywords: compensation, displacement, land acquisition, project affected person (PAPs), rehabilitation

Procedia PDF Downloads 313
774 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol or design, or a combination of all of these, which is intended to identify the goods or services of one seller or a group of sellers and to differentiate them from those of the competitors; perception influences purchase decisions here, and so building that perception is critical. The FMCG industry is a low-margin business. Volumes hold the key to success in this industry. Therefore, the industry has a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle. Companies know this, and that is why they relentlessly work towards brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It has been hypothesized that after liberalization Indian companies have taken up the challenge of globalization and that some of them are giving stiff competition to MNCs. MNCs have a strong brand image compared to Indian companies, and their advertisement expenditures are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time-series data are available from 1996-2014 for the 8 selected companies. These companies were selected on the basis of their large market share, brand equity and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). An estimation of the brand values of the selected FMCG companies is attempted, where brand value can be taken to be the excess of market capitalization over the net worth of a company. Brand value indices are calculated.
The correlation between brand values and advertising expenditure is also measured, to assess the effect of advertising on branding. The major results indicate that, although MNCs enjoy a stronger brand image, a few Indian companies, such as ITC, are outstanding leaders in terms of market capitalization and brand values. Dabur and Tata Global Beverages Ltd are competing equally well on these values. Advertisement expenditures are the highest for HUL, followed by ITC, Colgate and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditures play a role in the brand-building process, many other factors also affect it. Also, brand values have been decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they must put consistently proactive and relentless efforts into their brand-building process. Brands need focus and consistency. Brand longevity without innovation leads to brand respect but does not create brand value.
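The two core computations, brand value as the excess of market capitalization over net worth, and a trend growth rate from a regression on the time series, can be sketched as follows. The yearly figures are synthetic placeholders, not the CMIE Prowess data, and the log-linear form of the trend regression is an assumption (the abstract does not specify the functional form):

```python
import numpy as np

# Hypothetical yearly figures (arbitrary currency units) for one company,
# growing at a constant 10% per year; the real Prowess series differ.
years = np.arange(1996, 2015)
market_cap = 120.0 * 1.10 ** (years - 1996)
net_worth = 40.0 * 1.10 ** (years - 1996)

# Brand value taken as the excess of market capitalization over net worth.
brand_value = market_cap - net_worth

# Trend growth rate from a log-linear regression: ln(v_t) = a + g*t,
# so the compound annual growth rate is exp(g) - 1.
slope, _ = np.polyfit(years - 1996, np.log(brand_value), 1)
trend_growth = np.exp(slope) - 1.0
```

A brand value index could then be formed by normalizing each company's series to a base year before comparing Indian companies with MNCs.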

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 356
773 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures include reducing energy demand through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, a prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of the building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of the heating systems. After the operation of the different energy systems has been calculated in first-stage simulation models, in terms of the resulting final energy demands, the results serve as input for a second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures, owing to the efficiency of MILP solvers, but necessitates simplifying the operation of the building energy system.
Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
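The single-stage MILP idea can be sketched, under heavy simplification, as a binary selection problem: choose modernization measures for one building so that a required emission reduction is met at minimum cost. All numbers, the set of measures, and the use of SciPy's `milp` solver are illustrative assumptions, not the authors' formulation (which covers whole portfolios, yearly budgets and system operation):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical measures for one building: envelope insulation,
# heat pump, rooftop PV. Costs in kEUR, emission cuts in t CO2/a.
costs = np.array([10.0, 4.0, 7.0])
reductions = np.array([5.0, 3.0, 4.0])
target = 7.0   # required annual emission reduction

# Minimize total cost subject to sum(reductions * x) >= target, x binary.
res = milp(
    c=costs,
    constraints=LinearConstraint(reductions, lb=target, ub=np.inf),
    integrality=np.ones(3),   # all variables integer (binary via bounds)
    bounds=Bounds(0, 1),
)
selected = np.round(res.x).astype(int)
```

Extending this toward the paper's setting would add one such variable block per building and per planning period, coupled by yearly budget constraints, which is exactly where the linearized operation model replaces the dynamic simulation.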

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 32