Search results for: minimum root mean square (RMS) error matching algorithm
573 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India
Authors: Susmita Ghosh
Abstract:
The catchment area of the Damodar River, India, experiences seasonal rains due to the south-west monsoon every year, and depending upon the intensity of the storms, floods occur. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz. irrigation, hydro-power generation, municipal supplies and, last but not least, flood moderation. However, the downstream reach of the Damodar River, known as the Lower Damodar region, suffers severely and frequently from floods due to heavy monsoon rainfall and releases from the upstream reservoirs. Therefore, an effective flood management study, conducted through mathematical modelling, is required to understand in depth the nature and extent of the flood, water logging and erosion problems, the affected area, and the damages in the Lower Damodar region. The design flood or discharge must be decided in order to set up the respective model and obtain several scenarios from the simulation runs. The ultimate aim is to achieve a sustainable flood management scheme from the several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage from areas inundated due to drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This approach, however, is limited by the availability of a long record of peak flood data for correctly determining the type of probability density function. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequencies for the Damodar have been calculated using five candidate distributions, i.e., generalized extreme value, extreme value-I, Pearson type III, log-Pearson III and normal.
Annual peak discharge series are available at Durgapur barrage for the period 1979 to 2013 (35 years). The available series are subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design floods for return periods of 10, 15 and 25 years at Durgapur barrage are estimated by the flood frequency method. It is necessary to develop flood hydrographs for these floods to facilitate the mathematical model studies to find the depth and extent of inundation, etc. The null hypothesis that the distributions fit the data at 95% confidence is checked with a goodness of fit test, i.e., the Chi-square test. The goodness of fit test reveals that all five distributions show a good fit to the sample population, and the null hypothesis is therefore accepted. However, there is considerable variation in the estimated frequency floods. It is therefore considered prudent to average out the results of these five distributions for the required frequencies. The inundated area from past data is well matched using this flood.
Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management
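The distribution-averaging procedure described above can be sketched in a few lines. Here scipy stands in for whatever software the study used, and the peak series is synthetic, since the actual Durgapur barrage record is not reproduced in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 35-year annual peak discharge series in m^3/s (the real
# 1979-2013 Durgapur record is not reproduced here).
peaks = rng.gumbel(loc=4000.0, scale=1200.0, size=35)

def t_year_flood(T):
    """Average the T-year quantile over the five candidate distributions."""
    p = 1.0 - 1.0 / T  # non-exceedance probability for return period T
    estimates = []
    for dist in (stats.genextreme, stats.gumbel_r, stats.pearson3, stats.norm):
        estimates.append(dist.ppf(p, *dist.fit(peaks)))
    # log-Pearson III: fit Pearson III to the log-transformed peaks
    lp3 = stats.pearson3.fit(np.log(peaks))
    estimates.append(np.exp(stats.pearson3.ppf(p, *lp3)))
    return float(np.mean(estimates))

for T in (10, 15, 25):
    print(f"{T}-year design flood estimate: {t_year_flood(T):.0f} m^3/s")
```

Averaging the quantiles this way smooths out the considerable spread between the individual fitted distributions, which is the rationale the abstract gives.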
Procedia PDF Downloads 201
572 Modified 'Perturb and Observe' with 'Incremental Conductance' Algorithm for Maximum Power Point Tracking
Authors: H. Fuad Usman, M. Rafay Khan Sial, Shahzaib Hamid
Abstract:
The trend towards renewable energy resources has been amplified by global warming and other environmental complications in the 21st century. Recent research has placed strong emphasis on the generation of electrical power through renewable resources like solar, wind, hydro, geothermal, etc. The use of the photovoltaic cell has become widespread, as it is very useful for domestic and commercial purposes all over the world. Although a single cell gives a low voltage output, connecting a number of cells in series forms a complete photovoltaic module; it is becoming a sound financial investment as its use grows in popularity. This has also reduced the price of photovoltaic cells, giving customers confidence in using this source for their electrical needs. A photovoltaic cell delivers its maximum power at a single specific operating point for a given temperature and level of solar intensity received at a given surface, whereas this operating point moves over a large range depending upon manufacturing factors, temperature conditions, insolation intensity, instantaneous shading conditions and the aging of the photovoltaic cells. Two improved algorithms are proposed in this article for MPPT. The widely used algorithms are the ‘Incremental Conductance’ and ‘Perturb and Observe’ algorithms. To extract the maximum power from the source to the load, the duty cycle of the converter is effectively controlled. After assessing the previous techniques, this paper presents an improved and reformed idea for harvesting the maximum power point from photovoltaic cells. A thorough review of previous ideas was carried out before constructing the improvement on the traditional MPP technique. Each technique has its own importance and boundaries under various weather conditions.
An improved technique implementing the combined use of ‘Perturb and Observe’ and ‘Incremental Conductance’ is introduced.
Keywords: duty cycle, MPPT (Maximum Power Point Tracking), perturb and observe (P&O), photovoltaic module
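A minimal sketch of how the two methods can be combined in one duty-cycle update. The step size, the convergence threshold, and the assumption that a higher duty cycle lowers the panel operating voltage (as for a boost converter) are illustrative choices, not the authors' design.

```python
def mppt_step(v, i, v_prev, i_prev, duty, step=0.005):
    """Return an updated converter duty cycle from two PV voltage/current samples."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        # Incremental-conductance test is undefined: perturb on current change
        if di == 0.0:
            return duty                      # operating at the MPP, hold
        return duty - step if di > 0 else duty + step
    # At the MPP, dI/dV == -I/V, i.e. dP/dV == I + V*dI/dV == 0
    g = di / dv + i / v
    if abs(g) < 1e-3:
        return duty                          # close enough to the MPP
    # P&O-style fixed perturbation, direction set by the conductance test:
    # g > 0 means the panel voltage is below the MPP, so reduce the duty
    # cycle to raise it (boost-converter convention assumed).
    return duty - step if g > 0 else duty + step
```

Calling this inside the converter's control loop steps the duty cycle toward the maximum power point, while the incremental-conductance test suppresses the steady-state oscillation of plain P&O.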
Procedia PDF Downloads 176
571 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One approach commonly implemented is known as ‘direct numerical simulation’, DNS. This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion, known as the Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible flow fields at both high and low Reynolds numbers. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem for low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
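The IDS itself is not reproduced in the abstract, so as a stand-in for the viscous Burgers test problem it mentions, the following sketch integrates u_t + u u_x = nu u_xx with a plain explicit upwind/central finite-difference scheme on a periodic domain; grid, viscosity and initial profile are illustrative choices.

```python
import numpy as np

def burgers(nu=0.05, nx=201, nt=2000, L=2.0 * np.pi):
    """Explicit upwind/central scheme for the viscous Burgers equation."""
    dx = L / (nx - 1)
    dt = 0.2 * dx * dx / nu          # keeps this explicit scheme monotone here
    x = np.linspace(0.0, L, nx)
    u = np.sin(x) + 1.5              # smooth, everywhere-positive initial profile
    for _ in range(nt):
        ux = (u - np.roll(u, 1)) / dx                             # upwind (u > 0)
        uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # central
        u = u + dt * (nu * uxx - u * ux)
    return x, u

x, u = burgers()
print("solution range after time-stepping:", u.min(), u.max())
```

The wave steepens under nonlinear advection while viscosity smooths the forming shock, which is exactly the behaviour the abstract proposes to track over long times with the IDS.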
Procedia PDF Downloads 137
570 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder
Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi
Abstract:
With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has proliferated. One such group of conditions is neurological disorders, which are rampant among the old-age population and increasing at an unstoppable rate. Most neurological disorder patients suffer from some movement disorder affecting parts of their body. Tremor is the most common movement disorder in such patients; it affects the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson’s disease patients, but tremor can also occur on its own as pure (essential) tremor. Patients suffering from tremor face enormous trouble in performing daily activities and always need a caretaker for assistance. In the clinic, tremor assessment is done through a manual clinical rating task, such as the Unified Parkinson’s Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also affirmed the challenge of differentiating a parkinsonian tremor from pure tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that continuously checks their health condition, coordinating with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor that can accurately differentiate pure tremor from parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and parkinsonian tremor using a wearable accelerometer-based device.
Four tasks were designed in accordance with the Unified Parkinson’s disease motor rating scale, which is used to assess rest, postural, intentional and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based and fast-Fourier-transform-based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantifying the severity of tremor. A minimum covariance maximum correlation energy comparison index was also developed and used as the input feature for various classification tools for distinguishing the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved using K-nearest neighbors and Support Vector Machine classifiers.
Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor
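The classification stage can be sketched with scikit-learn. The synthetic 6 Hz vs 8 Hz tremor windows and the simple FFT-based features below are illustrative stand-ins for the study's patient recordings and its wavelet/cross-correlation features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
fs = 100.0  # sampling rate, Hz (assumed)

def window(freq):
    """Synthetic 3 s tri-axial accelerometer window dominated by one tremor frequency."""
    t = np.arange(0.0, 3.0, 1.0 / fs)
    axes = [np.sin(2 * np.pi * freq * t + rng.uniform(0, np.pi))
            + 0.3 * rng.standard_normal(t.size) for _ in range(3)]
    return np.stack(axes)

def features(sig):
    """Dominant frequency per axis (via FFT) plus RMS amplitude per axis."""
    spec = np.abs(np.fft.rfft(sig, axis=1))
    freqs = np.fft.rfftfreq(sig.shape[1], 1.0 / fs)
    dom = freqs[np.argmax(spec[:, 1:], axis=1) + 1]   # skip the DC bin
    rms = np.sqrt(np.mean(sig**2, axis=1))
    return np.concatenate([dom, rms])

# 1 = parkinsonian-like (~8 Hz here), 0 = essential-like (~6 Hz here)
X = np.array([features(window(f)) for f in [8] * 40 + [6] * 40])
y = np.array([1] * 40 + [0] * 40)

for clf in (KNeighborsClassifier(5), make_pipeline(StandardScaler(), SVC())):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```

On real data the two tremor types overlap far more than these clean synthetic bands, which is why the study adds richer features such as the covariance/correlation comparison index.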
Procedia PDF Downloads 154
569 Security of Database Using Chaotic Systems
Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem
Abstract:
Database (DB) security demands permitting the actions of authorized users and prohibiting the actions of non-authorized users and intruders on the DB and the objects inside it. Organizations that are running successfully demand the confidentiality of their DBs. They do not allow unauthorized access to their data/information, and they also demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are the security concerns. There are four types of controls to obtain DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communications. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rössler, Lorenz, etc.) or discrete (Logistic, Hénon, etc.) systems. The most important characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. Pseudo-Random Number Generators (PRNGs) based on the different chaotic algorithms are implemented using Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. Then, these algorithms are used to secure a conventional DB (plaintext), where the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic Logistic maps and another based on two chaotic Hénon maps, where each chaotic map runs side-by-side, starting from random, independent initial conditions and parameters (the encryption keys).
The resulting hybrid PRNGs passed the NIST statistical test suite.
Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST
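The hybrid idea, two logistic maps iterated side-by-side from independent keys and combined into one keystream, can be sketched as follows; the byte-extraction and XOR combining rules are illustrative assumptions, not the paper's exact construction, and no claim is made that this toy version passes NIST tests.

```python
def logistic(x, r=3.99):
    """One iteration of the chaotic logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

def hybrid_prng_bytes(n, key1=0.123456, key2=0.654321):
    """Produce n pseudo-random bytes by XOR-ing two logistic-map streams.

    key1 and key2 play the role of independent initial conditions
    (encryption keys) for the two maps running side-by-side.
    """
    x, y = key1, key2
    out = bytearray()
    for _ in range(n):
        x, y = logistic(x), logistic(y)
        out.append((int(x * 256) & 0xFF) ^ (int(y * 256) & 0xFF))
    return bytes(out)

stream = hybrid_prng_bytes(16)
print(stream.hex())
```

Because the stream is fully determined by the two keys, the same keys reproduce the same keystream for decryption, while the maps' sensitivity to initial conditions makes nearby keys diverge rapidly.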
Procedia PDF Downloads 265
568 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores
Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi
Abstract:
In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, the neural network predictions and the synergy scores of the two drugs with others in their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering- and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
Keywords: drug synergy, clustering, prediction, machine learning, deep learning
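The MCL step can be sketched with the standard expansion/inflation iteration on a column-stochastic matrix. The 4-drug adjacency matrix below (two synergistic pairs, weakly cross-linked) is hypothetical, and the attractor-based cluster read-out is one common convention rather than ClusterSyn's exact implementation.

```python
import numpy as np

def mcl(A, expansion=2, inflation=2.0, iters=50):
    """Markov clustering on a weighted adjacency matrix A."""
    M = A + np.eye(len(A))            # self-loops stabilise the iteration
    M = M / M.sum(axis=0)             # make columns stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)  # expansion: flow spreads
        M = M ** inflation                         # inflation: strong flow wins
        M = M / M.sum(axis=0)                      # re-normalise columns
    # group each node (column) by the attractor row holding most of its mass
    clusters = {}
    for j in range(M.shape[1]):
        clusters.setdefault(int(np.argmax(M[:, j])), []).append(j)
    return list(clusters.values())

# Hypothetical synergy graph: pairs (0,1) and (2,3) are synergistic (weight 1),
# with only weak cross-pair links (0.1).
A = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
print(mcl(A))  # the two synergistic pairs end up grouped together
```

Raising the inflation parameter makes the clustering finer, which is the usual knob for tuning MCL granularity.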
Procedia PDF Downloads 79
567 Implementation of Ecological and Energy-Efficient Building Concepts
Authors: Robert Wimmer, Soeren Eikemeier, Michael Berger, Anita Preisler
Abstract:
A relatively large percentage of energy and resource consumption occurs in the building sector. This concerns the production of building materials, the construction of buildings and also the energy consumption during the use phase. Therefore, the overall objective of the EU LIFE project “LIFE Cycle Habitation” (LIFE13 ENV/AT/000741) is to demonstrate innovative building concepts that significantly reduce CO₂ emissions, mitigate climate change and contain a minimum of grey energy over their entire life cycle. The project is being realised with the contribution of the LIFE financial instrument of the European Union. The ultimate goal is to design and build prototypes for carbon-neutral and “LIFE cycle”-oriented residential buildings and make energy-efficient settlements the standard of tomorrow, in line with the EU 2020 objectives. To this end, a resource- and energy-efficient building compound is being built in Böheimkirchen, Lower Austria, which includes 6 living units and a community area as well as 2 single-family houses with a total usable floor surface of approximately 740 m². Different innovative straw bale construction types (load-bearing and prefabricated non-load-bearing modules), together with a highly innovative energy-supply system based on the maximum use of thermal energy for thermal energy services, are going to be implemented. Therefore, only renewable resources and alternative energies are used to generate thermal as well as electrical energy. This includes the use of solar energy for space heating, hot water and household appliances like the dishwasher or washing machine, but also a cooking place for the community area operated with thermal oil as the heat transfer medium at a higher temperature level. Solar collectors in combination with a biomass cogeneration unit and photovoltaic panels are used to provide thermal and electric energy for the living units according to the seasonal demand.
The building concepts are optimised with the support of dynamic simulations. A particular focus is on the production and use of modular prefabricated components and building parts made of regionally available, highly energy-efficient, CO₂-storing renewable materials like straw bales. The building components will be produced collaboratively by local SMEs that are organised in an efficient way. The whole building process and its results are monitored and prepared for knowledge transfer and dissemination, including a trial living phase in the residential units to test and monitor the energy supply system and to involve stakeholders in the evaluation and dissemination of the applied technologies and building concepts. The realised building concepts should then be used as templates for a further modular extension of the settlement in a second phase.
Keywords: energy-efficiency, green architecture, renewable resources, sustainable building
Procedia PDF Downloads 149
566 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application
Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu
Abstract:
Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their 2D ultrathin atomic layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (PFETs and NFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into its inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have been found to achieve the desired property modifications of MoS₂. In this work, we demonstrate p-type multilayer MoS₂ enabled by selective-area doping using a CHF₃ plasma treatment. Compared with single-layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower bandgap and multiple conduction channels; moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate is carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process is conducted to synthesize the MoS₂ films. In the first step, the temperature is set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr to transform (NH₄)₂MoS₄ into MoS₃. To further reduce MoS₃ into MoS₂, a second annealing step is performed at 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr.
The grown MoS₂ films are subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC original NLD-570). The radio-frequency power of the dry-etching system is set to 100 W and the pressure to 7.5 mTorr. The final thickness of the treated samples is obtained by etching for 30 s. Back-gated MoS₂ PFETs are presented with an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms and modulation of the generated photocurrent by the back-gate voltage. This work suggests the potential application of mildly plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.
Keywords: photodetection, p-type doping, multilayers, MoS₂
Procedia PDF Downloads 104
565 Awareness and Willingness of Signing 'Consent Form in Palliative Care' in Elderly Patients with End Stage Renal Disease
Authors: Hsueh Ping Peng
Abstract:
End-stage renal disease most commonly occurs in the elderly population. Elderly people are approaching the end of their lives, and when facing major life-threatening situations, apart from aggressive medical treatment, they can also choose treatment approaches such as hospice care to improve their quality of life. The purpose of this study was to investigate factors associated with the awareness of, and willingness to sign, hospice and palliative care consent forms among the elderly with end-stage renal disease. This study used both quantitative (cross-sectional) and qualitative designs. In the quantitative section, 110 elderly patients (aged 65 or above) with end-stage renal disease receiving conventional hemodialysis were recruited as study participants from a medical center in Taipei City. Data were collected using structured questionnaires. Study tools included basic demographic data and questionnaires on the awareness and perception of hospice and palliative care, etc. After data collection, analysis was conducted using SPSS 20.0 statistical software, including descriptive statistics, the chi-square test, logistic regression, and other inferential statistics. The results showed that the average age of participants was 71.6 years, with more males than females; the average duration of dialysis was 6.1 years, and most subjects rated their self-perceived health status as fair. The results of the study are summarized as follows: elderly people with end-stage renal disease did not have sufficient knowledge and awareness of hospice and palliative care. Influencing factors included level of education, marital status, years of dialysis and age. Demographic factors influencing the signing of consent forms included gender, marital status, and age, all of which showed significant impacts.
Factors taken into consideration when signing consent forms included awareness of hospice care, understanding the relevant definitions of hospice care, and understanding that consent may be modified or cancelled at any time; people who knew more about ways to receive hospice care, or more of the related definitions, were predicted to be more willing to sign the consent forms. In the qualitative section, 10 participants who had signed the consent form, five male and five female, between the ages of 65 and 90, completed semi-structured interviews. Analysis of the interviews revealed six themes: (1) passing away peacefully, (2) autonomy over arrangements of life and death, (3) unwillingness to increase family and social burden, (4) friends’ and relatives’ experiences influencing the decision to give consent, (5) sharing information to facilitate the giving of consent, and (6) facing each day with ease; these reflect the experiences and factors of consideration for the elderly with end-stage renal disease when signing consent forms. The results of this study provide insight into the awareness, thoughts and feelings of the elderly with end-stage renal disease regarding the signing of consent forms, and serve as a future reference for the dialysis unit to enhance the promotion of hospice and palliative care and related caregiving measures, thereby improving the quality of life and care for elderly people with end-stage renal disease.
Keywords: end-stage renal disease, hemodialysis, hospice and palliative care, awareness, willingness
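The quantitative analysis mentioned above, a chi-square test of association between a demographic factor and consent-form signing, can be sketched with scipy on a hypothetical 2x2 table (the study itself used SPSS 20.0, and these counts are invented for illustration only).

```python
import numpy as np
from scipy.stats import chi2_contingency

#                  signed  not signed   (hypothetical counts)
table = np.array([[28,     22],    # male
                  [35,     25]])   # female
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")
```

A small p-value would indicate that signing behaviour differs by gender, which is the kind of significant demographic effect the abstract reports.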
Procedia PDF Downloads 168
564 Assessment of Physical Learning Environments in ECE: Interdisciplinary and Multivocal Innovation for Chilean Kindergartens
Authors: Cynthia Adlerstein
Abstract:
The physical learning environment (PLE) has been considered, after family and educators, the third teacher. There have been conflicting and converging viewpoints on the role of the physical dimensions of places to learn in facilitating educational innovation and quality. Despite the different approaches, PLE has been widely recognized as a key factor in the quality of the learning experience and in the levels of learning achievement in ECE. The conceptual frameworks of the field assume that PLE consists of a complex web of factors that shape the overall conditions for learning, and that much more interdisciplinary and complementary methodologies of research and development are required. Although the relevance of PLE attracts a broad international consensus, in Chile it remains under-researched and weakly regulated by public policy. Gaining deeper contextual understanding and more thoughtfully-designed recommendations requires the use of innovative assessment tools that cross cultural and disciplinary boundaries to produce new hybrid approaches and improvements. When considering a PLE-based change process for ECE improvement, a central question is what dimensions, variables and indicators could allow a comprehensive assessment of PLE in Chilean kindergartens. Based on a grounded theory social justice inquiry, we adopted a mixed-method design that enabled a multivocal and interdisciplinary construction of data. Using in-depth interviews, discussion groups, questionnaires, and documentary analysis, we elicited the PLE discourses of politicians, early childhood practitioners, experts in architectural design and ergonomics, ECE stakeholders, and 3- to 5-year-olds. A constant comparison method enabled the construction of the dimensions, variables and indicators through which PLE assessment is possible.
Subsequently, the instrument was applied to a sample of 125 early childhood classrooms to test reliability (internal consistency) and validity (content and construct). As a result, an interdisciplinary and multivocal tool for assessing physical learning environments was constructed and validated for Chilean kindergartens. The tool is structured upon 7 dimensions (wellbeing, flexibility, empowerment, inclusiveness, symbolic meaningfulness, pedagogical intention, institutional management), 19 variables and 105 indicators that are assessed through observation and registration in a mobile app. The overall reliability of the instrument is .938, while the consistency of each dimension varies between .773 (inclusiveness) and .946 (symbolic meaningfulness). The validation process, through expert opinion and factorial analysis (chi-square test), has shown that the dimensions of the assessment tool reflect the factors of physical learning environments. The constructed assessment tool for kindergartens highlights the significance of the physical environment in early childhood educational settings. The relevance of the instrument lies in its interdisciplinary approach to PLE and in its capability to guide innovative learning environments based on educational habitability. Though further analyses are required for concurrent validation and standardization, the tool has been considered by practitioners and ECE stakeholders an intuitive, accessible and remarkable instrument for raising awareness of PLE and of the equitable distribution of learning opportunities.
Keywords: Chilean kindergartens, early childhood education, physical learning environment, third teacher
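The internal-consistency figures quoted above (overall .938, per-dimension .773 to .946) are Cronbach's alpha values; a minimal computation on hypothetical item scores, assuming the study used the standard formula:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an observations-by-items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# three perfectly correlated (hypothetical) indicator scores -> high alpha
base = np.arange(10.0)
scores = np.column_stack([base, 2 * base, base + 1])
print(round(cronbach_alpha(scores), 4))
```

Values above roughly .7, like the dimension scores reported, are conventionally read as acceptable internal consistency.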
Procedia PDF Downloads 357
563 Upgrading of Bio-Oil by Bio-Pd Catalyst
Authors: Sam Derakhshan Deilami, Iain N. Kings, Lynne E. Macaskie, Brajendra K. Sharma, Anthony V. Bridgwater, Joseph Wood
Abstract:
This paper reports the application of a bacteria-supported palladium catalyst to the hydrodeoxygenation (HDO) of pyrolysis bio-oil, towards producing an upgraded transport fuel. Biofuels are key to the timely replacement of fossil fuels in order to mitigate the emission of greenhouse gases and the depletion of non-renewable resources. The process is an essential step in the upgrading of bio-oils derived from industrial by-products such as agricultural and forestry wastes: the crude oil from pyrolysis contains a large amount of oxygen that must be removed in order to create a fuel resembling fossil-derived hydrocarbons. Bacteria-supported catalyst manufacture is a means of utilizing recycled metals and second-life bacteria, and the metal can also be easily recovered from the spent catalyst after use. Comparisons are made between bio-Pd and a conventional activated-carbon-supported Pd/C catalyst. Bio-oil was produced by fast pyrolysis of beechwood at 500 °C at a residence time below 2 seconds, provided by Aston University. 5 wt% bio-Pd/C was prepared under reducing conditions by exposing cells of E. coli MC4100 to a solution of sodium tetrachloropalladate (Na₂PdCl₄), followed by rinsing, drying and grinding to form a powder. Pd/C was procured from Sigma-Aldrich. The HDO experiments were carried out in a 100 mL Parr batch autoclave using ~20 g bio-crude oil and 0.6 g bio-Pd/C catalyst. Experimental variables investigated for optimization included temperature (160-350 °C) and reaction time (up to 5 h) at a hydrogen pressure of 100 bar. Most of the experiments resulted in an aqueous phase (~40%) and an organic phase (~50-60%), as well as a gas phase (<5%) and coke (<2%). Study of the effects of temperature and time upon the process showed that the degree of deoxygenation increased (from ~20% up to 60%) at higher temperatures in the region of 350 °C and longer residence times of up to 5 h.
However, the minimum viscosity (~0.035 Pa·s) occurred at 250 °C and 3 h residence time, indicating that some polymerization of the oil product occurs at the higher temperatures. Bio-Pd showed a similar degree of deoxygenation (~20%) to Pd/C at the lower temperature of 160 °C, but it did not rise as steeply with temperature. More coke was formed over bio-Pd/C than Pd/C at temperatures above 250 °C, suggesting that bio-Pd/C may be more susceptible to coke formation than Pd/C. Reactions occurring during bio-oil upgrading include catalytic cracking, decarbonylation, decarboxylation, hydrocracking, hydrodeoxygenation and hydrogenation. In conclusion, it was shown that bio-Pd/C displays an acceptable rate of HDO, which increases with residence time and temperature; however, some undesirable reactions also occur, leading to a deleterious increase in viscosity at higher temperatures. Comparisons are also drawn with earlier work on the HDO of Chlorella-derived bio-oil manufactured from micro-algae via hydrothermal liquefaction. Future work will analyze the kinetics of the reaction and investigate the effect of bi-metallic catalysts.
Keywords: bio-oil, catalyst, palladium, upgrading
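The degree of deoxygenation (DOD) quoted above is conventionally the fractional removal of oxygen from the feed oil; a one-line helper, with hypothetical wt% oxygen contents chosen only to reproduce the 60% upper value reported:

```python
def degree_of_deoxygenation(o_feed_wt, o_product_wt):
    """Percent of the feed oil's oxygen removed during HDO."""
    return 100.0 * (1.0 - o_product_wt / o_feed_wt)

# e.g. a feed at 40 wt% oxygen upgraded to 16 wt% oxygen
print(degree_of_deoxygenation(40.0, 16.0))  # 60.0 (% DOD)
```

Tracking this single number against temperature and residence time is what allows the trade-off above, deeper deoxygenation at 350 °C versus minimum viscosity at 250 °C, to be quantified.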
562 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint
Authors: Juliane Spaak
Abstract:
A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and resilience using FEMA P58 and PACT software. With the increasing interest in Zero Carbon targets, significant changes in the building and construction sector are required. Emissions for buildings arise from both embodied carbon and operations. Based on the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of our cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity to move away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. This means weaker buildings sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building’s structure is vastly more sustainable than the highest quality ‘green’ new builds, which are inherently more carbon-intensive. 
The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society’s desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset’s revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and ‘sustainable design’ has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset’s expected lifespan, be that earthquakes, storms, bushfires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.
Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient
561 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts
Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris
Abstract:
The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects when human factors engineering is applied to remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of grafts. A high percentage of solid organs arrive at recipient hospitals in the UK injured or improper for transplantation. Digital microscopy adds information on a microscopic level about the grafts (G) in organ transplant (OT) and may lead to a change in their management. Such a method would reduce the possibility of a diseased G arriving at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM) based on virtual slides, on telemedicine systems (TS), for the tele-pathological evaluation (TPE) of grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for microscopic TPE of renal graft (RG), liver graft (LG) and pancreatic graft (PG) tissues was analyzed. This corresponded to the ergonomics of digital microscopy for TPE in OT, applying a virtual slide (VS) system for graft tissue image capture, for remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS) for simulating the integrated VS-based microscopic TPE of RG, LG and PG. Simulation of DM on TS-based TPE was performed by 2 specialists on a total of 238 human renal graft (RG), 172 liver graft (LG) and 108 pancreatic graft (PG) digital microscopic tissue images, for inflammatory and neoplastic lesions, on the four electronic spaces of the four TS used.
Results: Statistical analysis of the specialists’ answers about the ability to accurately diagnose diseased RG, LG and PG tissues on the electronic spaces (ES) of the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the ES of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES appear significantly risky for the application of DM in OT (p<.001). Conclusion: To make the largest reduction in errors and adverse events relating to the quality of the grafts will take the application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. Simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval seems feasible and reliable, and dependent on the size of the electronic space of the applied TS, for remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital and for post-grafting and pre-transplant planning.
Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides
560 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans
Authors: Jian Wu
Abstract:
For some years, some French companies have integrated CSR (corporate social responsibility) criteria in their variable remuneration plans to ‘restore a good working atmosphere’ and ‘preserve the natural environment’. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French acronym for the Observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach for our study. First, we examine the debate between the ‘orthodox’ point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain ‘invisible hand’ which helps to resolve all problems. When companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. However, the CSR school considers that, as long as the economic system is not perfect, there is no ‘invisible hand’ which can arrange everything in good order. This means that we cannot count on any ‘divine force’ to make corporations responsible towards society. Something more needs to be done in addition to firms’ economic and legal obligations. We then rely on financial theories and empirical evidence to examine the soundness of the foundations of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, environment, and society. Social contract theory tells us that there are tacit ‘social contracts’ between a company and society itself.
A firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or bad reputation. Legitimacy theory tells us that corporations have to ‘legitimize’ their actions toward society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite numerous research works conducted in the field. Finally, we are curious to know whether the integration of CSR criteria in variable remuneration plans – practiced so far in big companies – should be extended to other ones. After investigation, we note that two groups of firms have the greatest need. The first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second involves companies under pressure in terms of returns to deal with international competition.
Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory
559 Different Types of Amyloidosis Revealed with Positive Cardiac Scintigraphy with Tc-99M DPD-SPECT
Authors: Ioannis Panagiotopoulos, Efstathios Kastritis, Anastasia Katinioti, Georgios Efthymiadis, Argyrios Doumas, Maria Koutelou
Abstract:
Introduction: Transthyretin amyloidosis (ATTR) is a rare but serious infiltrative disease. Myocardial scintigraphy with DPD has emerged as the most effective, non-invasive, highly sensitive, and highly specific diagnostic method for cardiac ATTR amyloidosis. However, there are cases in which additional laboratory investigations reveal AL amyloidosis or other diseases despite a positive DPD scintigraphy. We describe the experience from the Onassis Cardiac Surgery Center and the monitoring center for infiltrative myocardial diseases of the cardiology clinic at AHEPA. Materials and Methods: All patients with clinical suspicion of cardiac or extracardiac amyloidosis undergo a myocardial scintigraphy scan with Tc-99m DPD. In this way, over 500 patients have been examined. Further diagnostic approach based on clinical and imaging findings includes laboratory investigation and invasive techniques (e.g., biopsy). Results: Out of 76 patients in total with positive myocardial scintigraphy Grade 2 or 3 according to the Perugini scale, 8 were proven to suffer from AL Amyloidosis during the investigation of paraproteinemia. Among these patients, 3 showed Grade 3 uptake, while the rest were graded as Grade 2, or 2 to 3. Additionally, one patient presented diffuse and unusual radiopharmaceutical uptake in soft tissues throughout the body without cardiac involvement. These findings raised suspicions, leading to the analysis of κ and λ light chains in the serum, as well as immunostaining of proteins in the serum and urine of these specific patients. The final diagnosis was AL amyloidosis. Conclusion: The value of DPD scintigraphy in the diagnosis of cardiac amyloidosis from transthyretin is undisputed. However, positive myocardial scintigraphy with DPD should not automatically lead to the diagnosis of ATTR amyloidosis. Laboratory differentiation between ATTR and AL amyloidosis is crucial, as both prognosis and therapeutic strategy are dramatically altered. 
Laboratory exclusion of paraproteinemia is a necessary and essential step in the diagnostic algorithm of ATTR amyloidosis for all positive myocardial scintigraphies with diphosphonate tracers, since >20% of patients with Grade 3 and 2 uptake may conceal AL amyloidosis.
Keywords: AL amyloidosis, amyloidosis, ATTR, myocardial scintigraphy, Tc-99m DPD
558 Analysis of NMDA Receptor 2B Subunit Gene (GRIN2B) mRNA Expression in the Peripheral Blood Mononuclear Cells of Alzheimer's Disease Patients
Authors: Ali̇ Bayram, Semih Dalkilic, Remzi Yigiter
Abstract:
The N-methyl-D-aspartate (NMDA) receptor is a subtype of glutamate receptor and plays a pivotal role in learning, memory, neuronal plasticity, neurotoxicity and synaptic mechanisms. Animal experiments have suggested that glutamate-induced excitotoxic injury and NMDA receptor blockade lead to amnesia and other neurodegenerative diseases including Alzheimer’s disease (AD), Huntington’s disease, and amyotrophic lateral sclerosis. The aim of this study is to investigate the association between expression levels of the NMDA receptor coding gene GRIN2B and Alzheimer’s disease. The study was approved by the local ethics committees and conducted according to the principles of the Declaration of Helsinki and the guidelines for Good Clinical Practice. Peripheral blood was collected from 50 patients diagnosed with AD and 49 healthy control individuals. Total RNA was isolated with the RNeasy midi kit (Qiagen) according to the manufacturer’s instructions. After RNA quality and quantity were checked with a spectrophotometer, GRIN2B expression levels were detected by quantitative real-time PCR (qRT-PCR). Statistical analyses were performed; variance between the two groups was compared with the Mann-Whitney U test in GraphPad InStat with a 95% confidence interval and p < 0.05. After statistical analyses, we determined that GRIN2B expression levels were downregulated in the AD patient group with respect to the control group, although the expression level of this gene showed high variability within each group. In this study, we determined that the expression level of the NMDA receptor coding gene GRIN2B was downregulated in AD patients compared with healthy control individuals. According to our results, we speculate that the GRIN2B expression level is associated with AD, but it is necessary to validate these results with a larger sample size.
Keywords: Alzheimer’s disease, N-methyl-d-aspartate receptor, NR2B, GRIN2B, mRNA expression, RT-PCR
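The group comparison above uses the Mann-Whitney U test. As a minimal sketch of the statistic itself (not the GraphPad InStat implementation used in the study), the U statistic can be computed by brute-force pair counting; the function name and the example samples are illustrative:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y:
    counts the pairs (x_i, y_j) with x_i > y_j; ties contribute 0.5.
    U_x + U_y always equals len(x) * len(y)."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Illustrative expression values (arbitrary units) for two small groups
ad_group = [1.2, 0.8, 1.0]
controls = [2.1, 1.9, 2.4]
print(mann_whitney_u(ad_group, controls))  # small U: AD values mostly below controls
```

In practice the smaller of U_x and U_y is compared against a critical value, or a normal approximation is used for larger samples.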
557 Coastal Modelling Studies for Jumeirah First Beach Stabilization
Authors: Zongyan Yang, Gagan K. Jena, Sankar B. Karanam, Noora M. A. Hokal
Abstract:
Jumeirah First beach, a segment of coastline 1.5 km long, is one of the popular public beaches in Dubai, UAE. The stability of the beach has been affected by several coastal development projects, including The World, Island 2 and La Mer. A comprehensive stabilization scheme comprising two composite groynes (of lengths 90 m and 125 m), modification to the northern breakwater of Jumeirah Fishing Harbour and beach re-nourishment was implemented by Dubai Municipality in 2012. However, the performance of the implemented stabilization scheme has been compromised by the La Mer project (built in 2016), which modified the wave climate at Jumeirah First beach. The objective of the coastal modelling studies is to establish a design basis for further beach stabilization scheme(s). Comprehensive coastal modelling studies were conducted to establish the nearshore wave climate, equilibrium beach orientations and stable beach plan forms. Based on the outcomes of the modelling studies, a recommendation was made to extend the composite groynes to stabilize Jumeirah First beach. Wave transformation was performed following an interpolation approach, with wave transformation matrices derived from simulations of the possible range of wave conditions in the region. The Dubai coastal wave model was developed with MIKE21 SW. The offshore wave conditions were determined from PERGOS wave data at 4 offshore locations, with consideration of the spatial variation. The lateral boundary conditions corresponding to the offshore conditions, at the Dubai/Abu Dhabi and Dubai/Sharjah borders, were derived with the LitDrift 1D wave transformation module. The Dubai coastal wave model was calibrated with wave records at monitoring stations operated by Dubai Municipality. The wave transformation matrix approach was validated with nearshore wave measurements at a Dubai Municipality monitoring station in the vicinity of Jumeirah First beach.
A typical one-year wave time series was transformed to 7 locations in front of the beach to account for the variation of wave conditions, which are affected by adjacent and offshore developments. Equilibrium beach orientations were estimated with LitDrift by finding the beach orientations with null annual littoral transport at the 7 selected locations. The littoral transport calculation results were compared with beach erosion/accretion quantities estimated from the beach monitoring program (twice a year, including bathymetric and topographical surveys). An innovative integral method was developed to outline the stable beach plan forms from the estimated equilibrium beach orientations, with a predetermined minimum beach width. The optimal lengths for the composite groyne extensions were recommended based on the stable beach plan forms.
Keywords: composite groyne, equilibrium beach orientation, stable beach plan form, wave transformation matrix
556 Presence and Severity of Language Deficits in Comprehension, Production and Pragmatics in a Group of ALS Patients: Analysis with Demographic and Neuropsychological Data
Authors: M. Testa, L. Peotta, S. Giusiano, B. Lazzolino, U. Manera, A. Canosa, M. Grassano, F. Palumbo, A. Bombaci, S. Cabras, F. Di Pede, L. Solero, E. Matteoni, C. Moglia, A. Calvo, A. Chio
Abstract:
Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease of adulthood which primarily affects the central nervous system and is characterized by progressive bilateral degeneration of motor neurons. The degeneration processes in ALS extend far beyond the neurons of the motor system and affect cognition, behaviour and language. The aim was to outline the prevalence of language deficits in an ALS cohort and explore their profile along with demographic and neuropsychological data. A full neuropsychological battery and language assessment was administered to 56 ALS patients. The neuropsychological assessment included tests of executive functioning, verbal fluency, social cognition and memory. Language was assessed using tests for verbal comprehension, production and pragmatics. Patients were cognitively classified following the Revised Consensus Criteria and divided into three groups showing different levels of language deficits: group 1 - no language deficit; group 2 - one language deficit; group 3 - two or more language deficits. Chi-square tests for independence and non-parametric measures to compare groups were applied. Nearly half of ALS-CN patients (48%) scored under the clinical cut-off on one language test, and only 13% of patients classified as ALS-CI showed no language deficits, while the remaining 87% of ALS-CI reported two or more language deficits. ALS-BI and ALS-CBI cases all reported two or more language deficits. Deficits in production and in comprehension appeared more frequent in ALS-CI patients (p=0.011, p=0.003 respectively), with a higher percentage of comprehension deficits (83%). Nearly all ALS-CI reported at least one deficit in pragmatic abilities (96%), and all ALS-BI and ALS-CBI patients showed pragmatic deficits. Males showed a higher percentage of pragmatic deficits (97%, p=0.007). No significant differences in language deficits were found between bulbar and spinal onset.
Months from onset and level of impairment at testing (ALS-FRS total score) were not significantly different across levels and types of language impairment. Age and education were significantly higher for cases showing no deficits in comprehension and pragmatics and in the group showing no language deficits. Comparing performances on neuropsychological tests among the three levels of language deficits, no significant differences were found between groups 1 and 2; compared to group 1, group 3 appeared to decay specifically on executive testing, verbal/visuospatial learning, and social cognition. Compared to group 2, group 3 showed worse performances specifically on tests of working memory and attention. Language deficits were found to be widespread in our sample, encompassing verbal comprehension, production and pragmatics. Our study reveals that even cognitively intact patients (ALS-CN) showed at least one language deficit in 48% of cases. The pragmatic domain is the most compromised (84% of the total sample), present in nearly all ALS-CI (96%), likely due to the influence of executive impairment. Lower age and higher education seem to protect comprehension and pragmatics and to reduce the presence of language deficits. Finally, executive functions, verbal/visuospatial learning and social cognition differentiate the group with no language deficits from the group with a clinical language impairment (group 3), while attention and working memory differentiate the group with one language deficit from the clinically impaired group.
Keywords: amyotrophic lateral sclerosis, language assessment, neuropsychological assessment, language deficit
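The group comparisons above rest on chi-square tests for independence. A minimal sketch of the Pearson statistic for a contingency table (patient groups × deficit present/absent); the function name and the example counts are illustrative, not the study's data:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table:
    sum over cells of (observed - expected)^2 / expected, where the
    expected count is row_total * column_total / grand_total."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Illustrative 2x2 table: rows = two patient groups,
# columns = deficit present / absent
print(chi_square_stat([[25, 5],
                       [10, 16]]))
```

The statistic is then compared against the chi-square distribution with (r-1)(c-1) degrees of freedom to obtain the p-value.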
555 Ichthyofauna and Population Status at Indus River Downstream, Sindh-Pakistan
Authors: M. K. Sheikh, Y. M. Laghari., P. K. Lashari., N. T. Narejo
Abstract:
The Indus River is one of the longest and most important rivers of Asia; it flows southward through Pakistan, merges into the Arabian Sea near the port city of Karachi in Sindh Province, and forms the Indus Delta. Fish are an important resource for humans worldwide, especially as food: fish contain nutrients not found in any other meat source, including large quantities of omega-3 fatty acids, which are essential for the human body. Ichthyological surveys were conducted to explore the diversity, distribution, abundance and current status of freshwater fishes at different spatial scales of the downstream Indus River. Eight stations were selected, namely Railo Miyan (RM), Karokho (Kk), Khanpur (Kp), Mullakatiyar (Mk), Wasi Malook Shah (WMS), Branch Morie (BM), Sujawal (Sj) and Jangseer (JS). The study was carried out from January 2016 to December 2019 to identify threats to the river and its biodiversity and to suggest recommendations for conservation. The data were analysed with different population diversity indices. Altogether, 124 species belonging to 12 orders and 43 families were recorded from the downstream Indus River. Among the 124 species, 29% were of high commercial value and 35% were trash fishes; 31% were identified as of marine/estuarine origin (migratory) and 5% were exotic species. Perciformes was the most dominant order, contributing 41% of the families. Among the 43 families, Cyprinidae was the richest at all localities downstream, represented by 24% of fish species and demonstrating a significant dominance in the number of species. A significant difference in species abundance was observed between sites: the maximum was found at the first location, RM, with 115 species, and the minimum at the last station, JS, with 56.
Within the recorded ichthyofauna, seven groups were found according to International Union for Conservation of Nature (IUCN) status: the largest share, 94 species, were of Least Concern (LC); 11 were not evaluated (NE); 8 were identified as near threatened (NT); 1 was recorded as critically endangered (CR); 11 were data deficient (DD); 8 were observed as vulnerable (VU); and 3 were endangered (EN). Diversity indices have been used extensively in environmental studies to estimate species richness and abundance as indicators of ecosystem wellness; the biodiversity-rich station RM had an environmental wellness and biodiversity level of 4.566, while the biodiversity-poor last station JS had a level of 3.931. The status of the fish biodiversity and of the river was found to be under serious threat. The lower diversity of fishes makes the situation not only vulnerable for the fish but also risky for fishermen. Necessary steps are recommended to protect the biodiversity by conducting further conservation research in this area.
Keywords: ichthyofaunal biodiversity, threatened species, diversity index, Indus River downstream
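The abstract reports station-level diversity index values (4.566 at RM vs 3.931 at JS) without naming the index. A minimal sketch, assuming the widely used Shannon-Wiener index; the function name and the example counts are illustrative, not the survey data:

```python
import math

def shannon_index(counts):
    """Shannon-Wiener diversity H' = -sum(p_i * ln p_i), where p_i is the
    proportion of individuals belonging to species i. Higher H' means a
    richer, more evenly distributed community; zero counts are skipped."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

# Illustrative: a community of 4 equally abundant species
print(shannon_index([10, 10, 10, 10]))  # ln(4), the maximum for 4 species
```

With equal abundances H' reaches ln(S) for S species, so index values near 4.5 (as at RM) imply a community of roughly e^4.5, i.e. on the order of a hundred effectively co-dominant species, consistent with the 115 species recorded there.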
554 NEOM Coast from Intertidal to Sabkha Systems: A Geological Overview
Authors: Mohamed Abouelresh, Subhajit Kumar, Lamidi Babalola, Septriandi Chan, Ali Al Musabeh A., Thadickal V. Joydas, Bruno Pulido
Abstract:
NEOM has a relatively long coastline on the Red Sea and the Gulf of Aqaba, about 300 kilometres in total, in addition to many naturally formed bays along the Red Sea coast. Undoubtedly, these coasts provide an excellent opportunity for tourism and other activities; however, these coastal areas host a wide range of salinity-dependent ecosystems that need to be protected. The main objective of the study was to identify the coastal features, including tidal flats and salt flats, along the NEOM coast. A base map of the study area generated from satellite images contained the main landform features and, in particular, the boundaries of the inland and coastal sabkhas. A field survey was conducted to map and characterize the intertidal and sabkha landforms. The coastal and inner coastal areas of NEOM are mainly covered by Quaternary sediments, which include gravel sheets, terraces, raised reef limestone, evaporite successions, eolian dunes, and undifferentiated sand/gravel deposits (alluvium, alluvial outwash, wind-blown beach sand). Different landforms characterize the NEOM coast, including rocky coast, tidal zone, and sabkha. Sabkha areas range from a few to tens of square kilometers. Coastal sabkha extends across the shoreline of NEOM, specifically at the Gayal and Sharma areas, while continental sabkha exists only at Gayal Town. The inland sabkha at Gayal is mainly composed of a thin (15-25 cm) evaporite crust of dark brown, cavernous, rugged, pitted, colloidal salty sand with salt-tolerant vegetation. The inland sabkha is considered a groundwater-driven sedimentary system, as indicated by syndepositional intra-sediment capillary evaporites, which precipitate in both marine and continental salt flats. The Gayal coastal sabkha is made up of tidal inlets, tidal creeks, and lagoons, followed in a landward direction by well-developed sabkha layers.
The surface sediments of the coastal sabkha are composed of unlithified calcareous, gypsiferous, coarse to medium sands and silt with bioclastic fragments, underlain by several organic-rich layers. The coastal flat grades landward into widespread, flat, vegetated sabkhas dissected by tributaries of the fluvial system, which debouches into the Red Sea. The coast from Gayal to Magna through Ras El-Sheikh Humaid is continuously subjected to tidal flows, which create an intertidal depositional system. The intertidal flats at NEOM are extensive, nearly horizontal lands forming a very dynamic system in which several physical, chemical, geomorphological, and biological processes act simultaneously. The current work provides a field-based identification of the coastal sabkha and intertidal sites at NEOM. However, the mutual interaction between tidal flows and sabkha development, particularly at Gayal, still needs to be well understood through comprehensive field and laboratory analysis.
Keywords: coast, intertidal, deposition, sabkha
553 Nonconventional Method for Separation of Rosmarinic Acid: Synergic Extraction
Authors: Lenuta Kloetzer, Alexandra C. Blaga, Dan Cascaval, Alexandra Tucaliuc, Anca I. Galaction
Abstract:
Rosmarinic acid, an ester of caffeic acid and 3-(3,4-dihydroxyphenyl)lactic acid, is considered a valuable compound for the pharmaceutical and cosmetic industries due to its antimicrobial, antioxidant, antiviral, anti-allergic, and anti-inflammatory effects. It can be obtained by extraction from vegetable or animal materials, by chemical synthesis, and by biosynthesis. Regardless of the production method, the separation and purification process requires large amounts of raw materials and laborious stages, leading to high costs and limitations of the separation technology. This study focused on the separation of rosmarinic acid by synergic reactive extraction with a mixture of two extractants, one acidic (di-(2-ethylhexyl)phosphoric acid, D2EHPA) and one basic (Amberlite LA-2). The studies were performed in experimental equipment consisting of an extraction column in which the phases were mixed by means of a perforated disk of 45 mm diameter and 20% free section, maintained at the initial contact interface between the aqueous and organic phases. The vibrations had a frequency of 50 s⁻¹ and an amplitude of 5 mm. The extraction was carried out in two solvents with different dielectric constants (n-heptane and dichloromethane), in which the extractant mixture of varying concentration was dissolved. The pH value of the initial aqueous solution was varied between 1 and 7. The efficiency of the studied extraction systems was quantified by distribution and synergic coefficients. For calculating these parameters, the rosmarinic acid concentrations in the initial aqueous solution and in the raffinate were measured by HPLC. The influences of extractant concentrations and solvent polarity on the efficiency of rosmarinic acid separation by synergic extraction with a mixture of Amberlite LA-2 and D2EHPA were analyzed.
In the reactive extraction system with a constant concentration of Amberlite LA-2 in the organic phase, the increase of D2EHPA concentration leads to a decrease of the synergic coefficient. This is because the increase of D2EHPA concentration prevents the formation of amine adducts and, consequently, affects the hydrophobicity of the interfacial complex with rosmarinic acid. For these reasons, the diminution of the synergic coefficient is more important for dichloromethane. By maintaining a constant value of D2EHPA concentration and increasing the concentration of Amberlite LA-2, the synergic coefficient could become higher than 1, its highest values being reached for n-heptane. Depending on the solvent polarity and the D2EHPA amount in the solvent phase, the synergic effect is observed for Amberlite LA-2 concentrations over 20 g/l dissolved in n-heptane. Thus, by increasing the concentration of D2EHPA from 5 to 40 g/l, the minimum concentration of Amberlite LA-2 corresponding to synergism increases from 20 to 40 g/l for the solvent with lower polarity, namely n-heptane, while no synergic effect is recorded for dichloromethane. By analysing the influences of the main factors (organic phase polarity, extractant concentration in the mixture) on the efficiency of synergic extraction of rosmarinic acid, the most important synergic effect was found to correspond to the extractant mixture containing 5 g/l D2EHPA and 40 g/l Amberlite LA-2 dissolved in n-heptane.
Keywords: Amberlite LA-2, di(2-ethylhexyl) phosphoric acid, rosmarinic acid, synergic effect
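The efficiency measures above (distribution and synergic coefficients from initial and raffinate HPLC concentrations) can be sketched as follows, assuming equal phase volumes and the common definition of the synergic coefficient as the mixed-extractant distribution coefficient over the sum of the individual ones; the function names and the example values are illustrative, not the study's data:

```python
def distribution_coefficient(c_initial, c_raffinate):
    """D = rosmarinic acid transferred to the organic phase over acid left
    in the aqueous raffinate, for equal aqueous and organic phase volumes."""
    return (c_initial - c_raffinate) / c_raffinate

def synergic_coefficient(d_mixture, d_extractant_1, d_extractant_2):
    """S = D obtained with the mixed extractants divided by the sum of the
    D values of each extractant used alone; S > 1 indicates synergism."""
    return d_mixture / (d_extractant_1 + d_extractant_2)

# Illustrative: 1.0 g/l initial acid, 0.2 g/l left in the raffinate
d_mix = distribution_coefficient(1.0, 0.2)
print(d_mix)                              # distribution coefficient for the mixture
print(synergic_coefficient(d_mix, 1.5, 0.5))  # S > 1 would indicate a synergic effect
```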
552 The Role of Macroeconomic Condition and Volatility in Credit Risk: An Empirical Analysis of Credit Default Swap Index Spread on Structural Models in U.S. Market during Post-Crisis Period
Authors: Xu Wang
Abstract:
This research builds linear regressions of investment-grade and high-yield Credit Default Swap index (CDX) spreads on U.S. macroeconomic condition and volatility measures, using monthly data from March 2009 to July 2016, to study the relationship between different dimensions of the macroeconomy and overall credit quality. The most significant contribution of this research is systematically examining the individual and joint effects of macroeconomic condition and volatility on CDX spreads by including macroeconomic time series that capture different dimensions of the U.S. economy. Industrial production index growth, non-farm payroll growth, consumer price index growth, the 3-month treasury rate, and consumer sentiment are introduced to capture the condition of real economic activity, employment, inflation, monetary policy, and risk aversion, respectively. The conditional variance of each macroeconomic series is constructed using an ARMA-GARCH model and is used to measure macroeconomic volatility. A linear regression model captures the relationships between monthly average CDX spreads and the macroeconomic variables, with the Newey–West estimator used to control for autocorrelation and heteroskedasticity in the error terms. Furthermore, sensitivity factor analysis and standardized coefficients analysis are conducted to compare the sensitivity of CDX spreads to the different macroeconomic variables and to compare the relative effects of macroeconomic condition versus macroeconomic uncertainty, respectively. This research shows that macroeconomic condition has a negative effect on the CDX spread while macroeconomic volatility has a positive effect. Macroeconomic condition and volatility variables jointly explain more than 70% of the variation of the CDX spread. In addition, sensitivity factor analysis shows that the CDX spread is most sensitive to the Consumer Sentiment index.
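As a minimal sketch of the volatility construction, the conditional variance produced by the GARCH(1,1) part of an ARMA-GARCH fit follows a simple recursion. The residuals and parameter values below are hypothetical, not the study's estimates.

```python
# Hedged sketch: GARCH(1,1) conditional variance recursion,
#   sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1},
# initialized at the unconditional variance omega / (1 - alpha - beta).
# Parameter values and residuals are hypothetical illustrations.

def garch_conditional_variance(residuals, omega, alpha, beta):
    """Return the conditional variance series for a list of ARMA residuals."""
    sigma2 = [omega / (1.0 - alpha - beta)]  # start at unconditional variance
    for eps in residuals[:-1]:
        sigma2.append(omega + alpha * eps**2 + beta * sigma2[-1])
    return sigma2

# Hypothetical ARMA residuals of, e.g., monthly industrial production growth:
eps = [0.1, -0.3, 0.2, 0.5, -0.1]
vol = garch_conditional_variance(eps, omega=0.05, alpha=0.1, beta=0.8)
print([round(v, 4) for v in vol])
```

The resulting series is what the study would use as the "macroeconomic volatility" regressor for each macro variable; a large shock (like the 0.5 residual above) pushes the next period's conditional variance up.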
Finally, the standardized coefficients analysis shows that both macroeconomic condition and volatility variables are important in determining the CDX spread, but the macroeconomic condition category of variables has more relative importance than the macroeconomic volatility category. This research shows that the CDX spread reflects the individual and joint effects of macroeconomic condition and volatility, which suggests that individual investors and the government should regard the CDX spread carefully as a measure of overall credit risk, because the spread is influenced by the macroeconomy. In addition, the significance of macroeconomic condition and volatility variables, such as non-farm payroll growth and industrial production index growth volatility, suggests that the government should pay more attention to overall credit quality in the market when the macroeconomy is weak or volatile.
Keywords: autoregressive moving average model, credit spread puzzle, credit default swap spread, generalized autoregressive conditional heteroskedasticity model, macroeconomic conditions, macroeconomic uncertainty
Procedia PDF Downloads 167
551 Generating Ideas to Improve Road Intersections Using Design with Intent Approach
Authors: Omar Faruqe Hamim, M. Shamsul Hoque, Rich C. McIlroy, Katherine L. Plant, Neville A. Stanton
Abstract:
Road safety has become an alarming issue, especially in low- and middle-income developing countries. Traditional approaches lack out-of-the-box thinking, confining engineers to the usual techniques for making roads safer. A socio-technical approach, Design with Intent (DWI), has recently been introduced to improving road intersections. The DWI approach aims to give practitioners a more nuanced approach to design and behavior, working with people, people's understanding, and the complexities of everyday human experience. It is a collection of design patterns, and a design and research approach, for exploring the interactions between design and people's behavior across products, services, and environments, both digital and physical. Through this approach, design that works with people's behavior can be applied to social and environmental problems as well as commercially. It comprises a total of 101 cards across eight lenses (architectural, error-proofing, interaction, ludic, perceptual, cognitive, Machiavellian, and security), each with its own distinct way of eliciting ideas from participants. For this research, a three-legged accident-blackspot intersection of a national highway was chosen for the DWI workshop. Participants from varying fields, such as civil engineering, naval architecture and marine engineering, urban and regional planning, and sociology, took part in a day-long workshop. The participants were first given a preamble on the accident scenario and a brief overview of the DWI approach. Design cards of varying lenses were then distributed among the 10 participants, who were given an hour and a half to brainstorm and generate ideas to improve the safety of the selected intersection.
After the brainstorming session, the participants held roundtable discussions on the ideas they had come up with, and ideas were accepted or rejected by consensus of the forum. The accepted ideas were then synthesized and agglomerated into an improvement scheme for the intersection selected in our study. The most significant improvement ideas from the DWI approach were: color coding of traffic lanes for separate vehicles, channelizing the existing bare intersection, providing advance warning traffic signs, cautionary signs, and educational signs motivating road users to drive safely, and using textured surfaces with rumble strips before the approach to the intersection. The motive of this approach is to draw new ideas from road users themselves rather than depending only on traditional schemes, both to increase the efficiency and safety of roads and to ensure the compliance of road users, since these features are generated from the minds of the users themselves.
Keywords: design with intent, road safety, human experience, behavior
Procedia PDF Downloads 139
550 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves
Authors: Dmytro Zubov, Francesco Volponi
Abstract:
In this paper, the D-Wave quantum computing Ising model is used for forecasting positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data, respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea levels (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predicting heat wave values because it does not require the estimation of a probability distribution from scarce observations. The proposed forecast quantum computing algorithm is simulated on a traditional computer architecture, with combinatorial optimization of the Ising model parameters, for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (ratio of successful predictions to total number of predictions) on the validation sample (2011-2014) shows that the Ising model with three qubits has 100% accuracy, which is quite significant compared to other methods; however, the number of identified heat waves is small (only one out of nineteen in this case). The models with 2, 4, and 5 qubits have 20%, 3.8%, and 3.8% accuracy, respectively. The presented three-qubit forecast model is applied for prediction of heat waves at five other locations: Aurel Vlaicu, Romania (accuracy 28.6%); Bratislava, Slovakia (21.7%); Brussels, Belgium (33.3%); Sofia, Bulgaria (50%); and Akhisar, Turkey (21.4%). These predictions are not ideal, but not zero; they can be used independently or together with predictions generated by other methods. The loss of human life, as well as environmental, economic, and material damage from extreme air temperatures, could be reduced if some heat waves are predicted.
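A minimal sketch of the idea behind the qubit models: recent days are encoded as Ising spins (+1 for an above-threshold temperature, -1 otherwise), and tomorrow's spin is chosen as the value that minimizes the Ising energy. The weights below are hypothetical placeholders; the paper calibrates its real-valued weights on data from 119 places.

```python
# Hedged sketch of an Ising-energy forecast. The field h and coupling J
# values are hypothetical, for illustration only.

def ising_energy(spins, h, J):
    """E = -sum_i h_i*s_i - sum_{i<j} J_ij*s_i*s_j for spins in {-1, +1}."""
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            e -= J[i][j] * spins[i] * spins[j]
    return e

def forecast(history, h, J):
    """Pick the candidate spin for tomorrow that gives the lower total energy."""
    energies = {s_next: ising_energy(history + [s_next], h, J)
                for s_next in (-1, +1)}
    return min(energies, key=energies.get)

# Three-qubit model: two historical days plus the forecast day.
h = [0.1, 0.2, 0.3]
J = [[0, 0.5, 0.2], [0, 0, 0.6], [0, 0, 0]]
print(forecast([+1, +1], h, J))  # two hot days in a row -> predicts +1 (hot)
```

With positive (ferromagnetic) couplings like these, consecutive hot days favor another hot day, which is the qualitative behavior a heat-wave forecaster needs; on D-Wave hardware the energy minimization itself would be delegated to the annealer rather than enumerated.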
Even a small success rate implies a large socio-economic benefit.
Keywords: heat wave, D-wave, forecast, Ising model, quantum computing
Procedia PDF Downloads 500
549 Anti-Bacterial Activity Studies of Derivatives of 6β-Hydroxy Betunolic Acid against Selected Strains of Gram (+) and Gram (-) Bacteria
Authors: S. Jayasinghe, W. G. D. Wickramasingha, V. Karunaratne, D. N. Karunaratne, A. Ekanayake
Abstract:
Multi-drug resistant microbial pathogens are a serious global health problem, and hence there is an urgent necessity for discovering new drug therapeutics. However, finding alternatives is one of the biggest challenges faced by the global drug industry due to the spiraling costs and serious side effects associated with modern medicine. On the other hand, plants and their secondary metabolites can be considered good sources of scaffolds, providing structurally diverse bioactive compounds as potential therapeutic agents. 6β-Hydroxy betunolic acid is a triterpenoid isolated from the bark of Schumacheria castaneifolia, a plant endemic to Sri Lanka, which has shown antibacterial activity against both Staphylococcus aureus (ATCC 29213) and methicillin-resistant S. aureus with a Minimum Inhibitory Concentration (MIC) of 16 µg/ml. The objective of this study was to determine the antibacterial activity of derivatives of 6β-hydroxy betunolic acid against standard strains of Staphylococcus aureus (ATCC 29213 and ATCC 25923), Enterococcus faecalis (ATCC 29212), Escherichia coli (ATCC 35218 and ATCC 25922), Pseudomonas aeruginosa (ATCC 27853), carbapenemase-producing Klebsiella pneumoniae (ATCC BAA 1705), and non-carbapenemase-producing Klebsiella pneumoniae (ATCC BAA 1706), as well as four clinically isolated strains of methicillin-resistant S. aureus and Acinetobacter. Structural analogues of 6β-hydroxy betunolic acid were synthesized by modifying the carbonyl group at C-3 to obtain the olefin and oxime, the hydroxyl group at C-6 to a ketone, the carboxylic acid at C-17 to obtain the amide and halo ester, and the olefin group at C-20 to obtain the epoxide. Chemical structures of the synthesized analogues were confirmed with spectroscopic data, and antibacterial activity was determined through the broth microdilution assay.
Results revealed that 6β-hydroxy betunolic acid shows significant antibacterial activity only against the Gram-positive strains and was inactive against all the tested Gram-negative strains over the tested concentration range. However, structural modifications into the oxime and olefin at C-3, the ketone at C-6, and the epoxide at C-20 decreased its antibacterial activity against the Gram-positive organisms, and activity was totally lost with both modifications at C-17 into the amide and ester. These results suggest that the antibacterial activity of 6β-hydroxy betunolic acid and its derivatives depends predominantly on the cell wall differences between the bacteria, and that the presence of the carboxylic acid at C-17 is highly important for antibacterial activity against Gram-positive organisms.
Keywords: antibacterial activity, 6β-hydroxy betunolic acid, broth microdilution assay, structure activity relationship
Procedia PDF Downloads 126
548 Prevalence and Risk Factors of Cardiovascular Diseases among Bangladeshi Adults: Findings from a Cross Sectional Study
Authors: Fouzia Khanam, Belal Hossain, Kaosar Afsana, Mahfuzar Rahman
Abstract:
Aim: Although cardiovascular disease (CVD) has already been recognized as a major cause of death in developed countries, its prevalence is rising in developing countries as well, engendering a challenge for the health sector. Bangladesh has experienced an epidemiological transition from communicable to non-communicable diseases over the last few decades, so the rising prevalence of CVD and its risk factors is imposing a major problem for the country. We aimed to examine the prevalence of CVD and the socioeconomic and lifestyle factors related to it from a population-based survey. Methods: The data used for this study were collected as part of a large-scale cross-sectional study conducted to explore the overall health status of children, mothers, and senior citizens of Bangladesh. A multistage cluster random sampling procedure was applied, with unions as clusters and households as the primary sampling unit, to select a total of 11,428 households for the base survey. The present analysis encompassed 12,338 respondents aged ≥ 35 years, selected from both rural areas and urban slums of the country. Socio-economic, demographic, and lifestyle information was obtained through individual face-to-face interviews recorded on the ODK platform, and height, weight, blood pressure, and glycosuria were measured using standardized methods. Chi-square tests and univariate and multivariate modified Poisson regression models were run using Stata software (version 13.0). Results: Overall, the prevalence of CVD was 4.51%, of which 1.78% had stroke and 3.17% suffered from heart diseases. Males had a higher prevalence of stroke (2.20%) than females (1.37%). Notably, thirty percent of respondents had high blood pressure, 5% had diabetes, and more than half of the population was pre-hypertensive.
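As a toy illustration of what the modified Poisson model estimates: for a binary outcome, the exponentiated coefficient is a prevalence ratio, which for a single binary exposure with no covariates reduces to a simple ratio of prevalences. The counts below are hypothetical, and the covariate adjustment and robust sandwich standard errors used in the study are omitted.

```python
# Hedged sketch: prevalence ratio for a binary exposure, the quantity a
# modified Poisson regression recovers as exp(beta). Counts are hypothetical.

def prevalence(cases, total):
    """Proportion of respondents with the outcome."""
    return cases / total

def prevalence_ratio(cases_exp, n_exp, cases_unexp, n_unexp):
    """Prevalence among the exposed divided by prevalence among the unexposed."""
    return prevalence(cases_exp, n_exp) / prevalence(cases_unexp, n_unexp)

# Hypothetical: CVD among hypertensive vs. normotensive respondents.
pr = prevalence_ratio(cases_exp=90, n_exp=1000, cases_unexp=30, n_unexp=1000)
print(round(pr, 6))  # 3.0 -> hypertensive respondents have 3x the prevalence
```

Modified Poisson (Poisson GLM with robust variance) is preferred over logistic regression here because the outcome is common (4.51%), so odds ratios would overstate the prevalence ratio.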
Additionally, 20% were overweight, 77% were smokers or consumed smokeless tobacco, and 28% of respondents were physically inactive. Eighty-two percent of respondents took extra salt while eating, and 29% were sleep-deprived. Furthermore, the prevalence of CVD risk factors varied by gender. Women had a higher prevalence of overweight, obesity, and diabetes; were less physically active than men; and took more extra salt, while smoking was lower in women than in men. Moreover, women slept less than their counterparts. After adjusting for confounders in the modified Poisson regression model, age, gender, occupation, wealth quintile, BMI, extra salt intake, daily sleep, tiredness, diabetes, and hypertension remained risk factors for CVD. Conclusion: The prevalence of CVD is significant in Bangladesh, and there is evidence of a rising trend in its risk factors, such as hypertension and diabetes, especially in the older population, women, and high-income groups. Therefore, in this current epidemiological transition, immediate public health intervention is warranted to address the overwhelming CVD risk.
Keywords: cardiovascular diseases, diabetes, hypertension, stroke
Procedia PDF Downloads 381
547 Improvement of the Geometry of the Dental Bridge Framework through an Automatic Program
Authors: Rong-Yang Lai, Jia-Yu Wu, Chih-Han Chang, Yung-Chung Chen
Abstract:
The dental bridge is one of the clinical treatments for missing teeth. It is generally designed in two layers: an inner framework layer (zirconia) and an outer layer of porcelain fused to the framework. The design of a conventional bridge is generally based on the antagonist tooth profile, so that the framework is evenly offset by an equal thickness from the outer contour. All-ceramic dental bridges made of zirconia have demonstrated remarkable potential to withstand the higher physiological occlusal loads in the posterior region, but there is still a risk of all-ceramic bridge failure within five years, and how to reduce the incidence of failure remains a problem to be solved. Therefore, the objective of this study is to develop mechanical designs for all-ceramic dental bridge frameworks that reduce stress and enhance fracture resistance under given loading conditions, using the finite element method. In this study, dental design software is used to design the bridge based on tooth CT images. After building the model, a Bi-directional Evolutionary Structural Optimization (BESO) algorithm implemented on top of finite element software is employed to determine the distribution of the materials in the dental bridge; BESO searches for the optimum distribution of two different materials, namely porcelain and zirconia. Based on the stress value calculated for each element, when an element's stress is higher than the threshold value, the element is replaced by the framework material; when the difference in the maximum stress peak value between iterations is less than 0.1%, the calculation is complete. After completing the design of the dental bridge, the stress distribution of the whole structure is changed.
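The material-swap rule described above can be sketched as follows. The stress values, threshold, and stand-in "solver" below are hypothetical placeholders for the full finite element analysis used in the study.

```python
# Hedged sketch of the BESO-style material swap: elements whose stress
# exceeds a threshold are switched from porcelain to the stiffer zirconia
# framework material, iterating until the peak stress changes by less than
# 0.1% between iterations. `solve` stands in for a finite element run.

def beso_material_swap(stresses, threshold, solve, tol=0.001, max_iter=50):
    """stresses: per-element stress; returns per-element material labels."""
    materials = ["porcelain"] * len(stresses)
    prev_peak = max(stresses)
    for _ in range(max_iter):
        materials = ["zirconia" if s > threshold else m
                     for s, m in zip(stresses, materials)]
        stresses = solve(materials)  # re-run the (stand-in) FE analysis
        peak = max(stresses)
        if abs(peak - prev_peak) / prev_peak < tol:  # < 0.1% change: converged
            break
        prev_peak = peak
    return materials

# Toy solver: zirconia elements carry the load, relieving stress elsewhere.
def toy_solve(materials):
    return [50.0 if m == "zirconia" else 80.0 for m in materials]

print(beso_material_swap([120.0, 90.0, 60.0], threshold=100.0, solve=toy_solve))
```

The real implementation evaluates principal stresses element by element inside the FE package; only the swap-and-converge control flow is shown here.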
BESO reduces the peak values of principal stress by 10% in the outer-layer porcelain and avoids tensile stress failure.
Keywords: dental bridge, finite element analysis, framework, automatic program
Procedia PDF Downloads 282
546 A Protocol of Procedures and Interventions to Accelerate Post-Earthquake Reconstruction
Authors: Maria Angela Bedini, Fabio Bronzini
Abstract:
The Italian experiences of the post-earthquake period, positive and negative, are conditioned by long timescales and structural bureaucratic constraints, motivated in part by the attempt to contain mafia infiltration and corruption. The transition from the operational phase of the emergency to the planning phase of the reconstruction project is thus hampered by a series of inefficiencies and delays, incompatible with the need for rapid recovery of the territories in crisis. Intervening in areas affected by seismic events means associating the reconstruction plan with an urban and territorial rehabilitation project based on strategies and tools in which prevention and safety play a leading role in the regeneration of territories in crisis and the return of the population. On the contrary, the earthquakes that took place in Italy have further deprived the affected territories of the minimum requirements for habitability, in terms of accessibility and services, accentuating the depopulation process already underway before the earthquake. The objective of this work is to address, with implementing and programmatic tools, the procedures and strategies to be put in place, today and in the future, in Italy and abroad, to face the challenge of the reconstruction of activities, sociality, services, and risk mitigation: a protocol of operational intentions and fixed points, open to continuous updating and implementation. The methodology followed is a synthetic comparison between the different Italian post-earthquake experiences, based on facts and not on intentions, to highlight elements of excellence or, on the contrary, of damage. The main results obtained can be summarized in technical comparison cards on good and bad practices.
With this comparison, we intend to make a concrete contribution to the reconstruction process, which is certainly not related only to the reconstruction of buildings but privileges primary social and economic needs. In this context, the strategic urban and territorial instrument recently applied in Italy, the SUM (Minimal Urban Structure), together with the strategic monitoring process, becomes a dynamic tool for supporting reconstruction. The conclusions establish, point by point, a protocol of interventions and the priorities for integrated socio-economic strategies, multisectoral and multicultural, and highlight the innovative aspects of an 'inversion' of priorities in the reconstruction process, favoring the take-off of social and economic 'accelerator' interventions and a more updated system of coexistence with risks. In this perspective, reconstruction as a necessary response to a calamitous event can and must become a unique opportunity to raise the level of protection from risks and of the rehabilitation and development of the most fragile places in Italy and abroad.
Keywords: an operational protocol for reconstruction, operational priorities for coexistence with seismic risk, social and economic interventions accelerators of building reconstruction, the difficult post-earthquake reconstruction in Italy
Procedia PDF Downloads 127
545 Pharmacokinetic Modeling of Valsartan in Dogs following a Single Oral Administration
Authors: In-Hwan Baek
Abstract:
Valsartan is a potent and highly selective antagonist of the angiotensin II type 1 receptor and is widely used for the treatment of hypertension. The aim of this study was to investigate the pharmacokinetic properties of valsartan in dogs following oral administration of a single dose, using quantitative modeling approaches. Forty beagle dogs were randomly divided into two groups. Group A (n=20) was administered a single oral dose of valsartan 80 mg (Diovan® 80 mg), and group B (n=20) a single oral dose of valsartan 160 mg (Diovan® 160 mg), in the morning after an overnight fast. Blood samples were collected into heparinized tubes before and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, and 24 h after oral administration, and plasma concentrations of valsartan were determined using LC-MS/MS. Non-compartmental pharmacokinetic analyses were performed using WinNonlin Standard Edition software, and modeling approaches were performed using maximum-likelihood estimation via the expectation maximization (MLEM) algorithm with sampling, using ADAPT 5 software. After a single dose of valsartan 80 mg, the mean maximum concentration (Cmax) was 2.68 ± 1.17 μg/mL at 1.83 ± 1.27 h, and the area under the plasma concentration-versus-time curve from time zero to the last measurable concentration (AUC24h) was 13.21 ± 6.88 μg·h/mL. After dosing with valsartan 160 mg, the mean Cmax was 4.13 ± 1.49 μg/mL at 1.80 ± 1.53 h, and the AUC24h was 26.02 ± 12.07 μg·h/mL. The Cmax and AUC values increased in proportion to the increment in valsartan dose, while the pharmacokinetic parameters of elimination rate constant, half-life, apparent total clearance, and apparent volume of distribution were not significantly different between the doses. The valsartan pharmacokinetics fit a one-compartment model with first-order absorption and elimination following a single dose of valsartan 80 mg and 160 mg.
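The fitted structure, a one-compartment model with first-order absorption and elimination, has the standard closed form below. The parameter values in the example are hypothetical illustrations, not the study's estimates.

```python
# Hedged sketch: one-compartment oral PK model,
#   C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)),
# with Tmax = ln(ka/ke) / (ka - ke). Parameter values are hypothetical.
import math

def conc(t, dose, F, ka, ke, V):
    """Plasma concentration at time t (first-order absorption/elimination)."""
    return F * dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def tmax(ka, ke):
    """Time of the peak concentration."""
    return math.log(ka / ke) / (ka - ke)

# Hypothetical parameters for an 80 mg (80,000 µg) oral dose:
dose, F, ka, ke, V = 80_000.0, 0.25, 1.5, 0.3, 8_000.0  # µg, -, 1/h, 1/h, mL
t_peak = tmax(ka, ke)
print(round(t_peak, 2), round(conc(t_peak, dose, F, ka, ke, V), 2))
```

In the model-fitting step, parameters like ka, ke, and V/F are estimated from the sampled concentrations (here via MLEM in ADAPT 5); the closed form then reproduces Cmax, Tmax, and, by integration, AUC.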
In addition, high inter-individual variability was identified in the absorption rate constant. In conclusion, valsartan displays dose-dependent pharmacokinetics in dogs, and subsequent quantitative modeling approaches provided detailed pharmacokinetic information on valsartan. The current findings provide useful information in dogs that will aid future development of improved formulations or fixed-dose combinations.
Keywords: dose-dependent, modeling, pharmacokinetics, valsartan
Procedia PDF Downloads 297
544 Algal/Bacterial Membrane Bioreactor for Bioremediation of Chemical Industrial Wastewater Containing 1,4 Dioxane
Authors: Ahmed Tawfik
Abstract:
Oxidation of 1,4-dioxane produces metabolite by-products, including glycolaldehyde and acids, that have genotoxic and cytotoxic impacts on microbial degradation. Incorporating algae with bacteria in the treatment system would therefore eliminate the accumulation of these metabolites, which are instead utilized as a carbon source for the build-up of biomass. The aim of the present study is thus to assess the potential of an algae/bacteria-based membrane bioreactor (AB-MBR) for biodegradation of 1,4-dioxane-rich wastewater at a high imposed loading rate. Three identical reactors, i.e., AB-MBR1, AB-MBR2, and AB-MBR3, were operated in parallel at 1,4-dioxane loading rates of 641.7, 320.9, and 160.4 mg/L·d and HRTs of 6.0, 12, and 24 h, respectively. AB-MBR1 achieved a 1,4-dioxane removal rate of 263.7 mg/L·d, with a residual value in the treated effluent of 94.4±22.9 mg/L. Reducing the 1,4-dioxane loading rate (LR) to 320.9 mg/L·d in AB-MBR2 maximized the removal rate at 265.9 mg/L·d, with a removal efficiency of 82.8±3.2%. The minimum residual 1,4-dioxane value of 17.3±1.8 mg/L, in the treated effluent of AB-MBR3, was obtained at an HRT of 24.0 h and a loading rate of 160.4 mg/L·d. The mechanism of 1,4-dioxane degradation in the AB-MBR was a combination of volatilization (8.03±0.6%), UV oxidation (14.1±0.9%), microbial biodegradation (49.1±3.9%), and absorption/uptake and assimilation by algae (28.8±2%). Further, the Thioclava, Afipia, and Mycobacterium genera oxidized the substrate and produced the enzymes required for hydrolysis and cleavage of the dioxane ring into 2-hydroxy-1,4-dioxane, and the fungi Basidiomycota and Cryptomycota also played a major role in this degradation. Xanthobacter and Mesorhizobium were involved in the metabolism process by secreting alcohol dehydrogenase (ADH), aldehyde dehydrogenase (ALDH), and glycolate oxidase.
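The loading and removal figures above can be cross-checked with a short sketch, assuming LR (mg/L·d) = influent concentration (mg/L) × 24 / HRT (h). The influent concentration of about 160.4 mg/L is inferred from the reported loading rates and HRTs; it is not stated directly in the abstract.

```python
# Hedged sketch of the loading/removal arithmetic, under the assumption
# LR = C_in * 24 / HRT. The influent concentration (~160.4 mg/L) is inferred,
# not quoted from the study.

def loading_rate(c_in_mg_l, hrt_h):
    """Volumetric loading rate in mg/L·d."""
    return c_in_mg_l * 24.0 / hrt_h

def removal_rate(c_in_mg_l, c_out_mg_l, hrt_h):
    """Volumetric removal rate in mg/L·d."""
    return (c_in_mg_l - c_out_mg_l) * 24.0 / hrt_h

# AB-MBR1: influent ~160.4 mg/L at HRT = 6 h reproduces the reported LR of
# ~641.7 mg/L·d, and an effluent of ~94.4 mg/L gives a removal rate close to
# the reported 263.7 mg/L·d (small differences are rounding in the abstract).
print(round(loading_rate(160.4, 6.0), 1))
print(round(removal_rate(160.4, 94.4, 6.0), 1))
```

The same influent concentration at HRTs of 12 and 24 h reproduces the other two loading rates (≈320.8 and 160.4 mg/L·d), which is consistent with the three reactors being fed the same wastewater at different retention times.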
Bacteria and fungi produced dehydrogenase (DH) for the transformation of 2-hydroxy-1,4-dioxane into 2-hydroxy-ethoxyacetaldehyde. The latter is converted into ethylene glycol by aldehyde dehydrogenase (ALDH), and ethylene glycol is oxidized into acids by alcohol dehydrogenase (ADH). The Diatomea, Chlorophyta, and Streptophyta utilize the metabolites for biomass assimilation and produce the oxygen required for further oxidation of the dioxane and its metabolite by-products by bacteria and fungi. The major portion of the metabolites (ethylene glycol, glycolic acid, and oxalic acid) was removed by uptake and absorption by algae (43±4.3%), followed by adsorption (18.4±0.9%); the contributions of volatilization and UV oxidation to the degradation of metabolites were 8.7±0.7% and 12.3±0.8%, respectively. The genera Defluviimonas, Thioclava, Luteolibacter, Afipia, and Mycobacterium grew under the high 1,4-dioxane LR of 641.7 mg/L·d. The Chlorophyta (4.1-43.6%), Streptophyta (2.5-21.7%), and Diatomea (0.8-1.4%) phyla were dominant in the degradation of 1,4-dioxane. The results of this study strongly demonstrate that the bioremediation and bioaugmentation process can safely remove 1,4-dioxane from industrial wastewater while minimizing environmental concerns and reducing economic costs.
Keywords: wastewater, membrane bioreactor, bacterial community, algal community
Procedia PDF Downloads 44