Search results for: the important architectural complex of Wang-An Hua-Zhai settlement
3746 Physicochemical Characterization of Asphalt Ridge Froth Bitumen
Authors: Nader Nciri, Suil Song, Namho Kim, Namjun Cho
Abstract:
Properties and compositions of bitumen and bitumen-derived liquids have significant influences on the selection of recovery, upgrading, and refining processes. Optimal process conditions can often be directly related to these properties. The end uses of bitumen and bitumen products are thus related to their compositions. Because it is not possible to conduct a complete analysis of the molecular structure of bitumen, characterization must be made in other terms. The present paper focuses on the physico-chemical analysis of two different types of bitumen. These bitumen samples were chosen based on the original crude source (oil sand and crude petroleum) and the processing method. The aim of this study is to determine both the effect of manufacturing on chemical species and the chemical organization as a function of the type of bitumen sample. In order to obtain information on bitumen chemistry, elemental analysis (C, H, N, S, and O), heavy metal (Ni, V) concentrations, IATROSCAN chromatography (thin layer chromatography-flame ionization detection), FTIR spectroscopy, and 1H NMR spectroscopy were all used. The characterization includes information about the major compound types (saturates, aromatics, resins, and asphaltenes), which can be compared with similar data for other bitumens and, more importantly, can be correlated with data from petroleum samples for which refining characteristics are known. Examination of Asphalt Ridge froth bitumen showed that it differed significantly from representative petroleum pitches, principally in its nonhydrocarbon content, heavy metal content, and aromatic compounds. When possible, properties and composition were related to recovery and refining processes.
This information is important because of the effects that composition has on recovery and processing reactions.
Keywords: froth bitumen, oil sand, asphalt ridge, petroleum pitch, thin layer chromatography-flame ionization detection, infrared spectroscopy, 1H nuclear magnetic resonance spectroscopy
Procedia PDF Downloads 431
3745 Testing for Endogeneity of Foreign Direct Investment: Implications for Economic Policy
Authors: Liwiusz Wojciechowski
Abstract:
Research background: The current knowledge does not give a clear answer to the question of the impact of FDI on productivity. The results of empirical studies are still inconclusive, no matter how extensive and diverse they are in terms of research approaches or groups of countries analyzed. The possibility that FDI and productivity are linked through a bidirectional relationship should also be taken into account. This issue is particularly important because, on one hand, FDI can contribute to changes in productivity in the host country, but on the other hand, the host country's level and dynamics of productivity may determine whether FDI should be undertaken there. As already mentioned, a two-way relationship between the presence of foreign capital and productivity in the host country should be assumed, taking into consideration the endogenous nature of FDI. Purpose of the article: The overall objective of this study is to determine the causality between foreign direct investment and total factor productivity in the host country in terms of differing relative absorptive capacity across countries. In the classic sense, causality among variables is not always obvious and requires testing, which would facilitate proper specification of FDI models. The aim of this article is to study the endogeneity of selected macroeconomic variables commonly used in FDI models in the case of the Visegrad countries, the main recipients of FDI in CEE. The findings may be helpful in determining the structure of the actual relationship between variables, in appropriate model estimation, and in forecasting as well as economic policymaking. Methodology/methods: Panel and time-series data techniques, including the GMM estimator, VEC models, and causality tests, were utilized in this study. Findings & Value added: The obtained results allow confirmation of the hypothesis stating bi-directional causality between FDI and total factor productivity.
Although results differ among countries and levels of data aggregation, the implications may be useful for policymakers in designing policies to attract foreign capital.
Keywords: endogeneity, foreign direct investment, multi-equation models, total factor productivity
Procedia PDF Downloads 201
3744 Formulation and Evaluation of Piroxicam Hydrotropic Starch Gel
Authors: Mohammed Ghazwani, Shyma Ali Alshahrani, Zahra Abdu Yousef, Taif Torki Asiri, Ghofran Abdur Rahman, Asma Ali Alshahrani, Umme Hani
Abstract:
Background and introduction: Piroxicam is a nonsteroidal anti-inflammatory drug characterized by low solubility and high permeability, used to reduce pain, swelling, and joint stiffness from arthritis. Hydrotropes are a class of compounds that increase the aqueous solubility of otherwise insoluble solutes. Aim: The objective of the present research was to formulate and optimize a Piroxicam hydrotropic starch gel for topical application, using sodium salicylate and sodium benzoate as hydrotropic salts and potato starch as the gel base. Materials and methods: The prepared Piroxicam hydrotropic starch gel was characterized for various physicochemical parameters such as drug content, pH, tube extrudability, and spreadability; all the prepared formulations were subjected to in-vitro diffusion studies for six hours in 100 ml phosphate buffer (pH 7.4), and gel strength was determined. Results: All formulations were white and opaque in appearance and had good homogeneity. The pH of the formulations was found to be between 6.9 and 7.9. Drug content ranged from 96.8% to 99.45%. Spreadability plays an important role in patient compliance and helps in the uniform application of the gel to the skin, as gels should spread easily; F4 showed a spreadability of 2.4 cm, the highest among all formulations. In the in-vitro diffusion studies, extrudability and gel strength were good for F4 in comparison with the other formulations; hence F4 was selected as the optimized formulation. Conclusion: Isolated potato starch was successfully employed to prepare the gel. The hydrotropic salt sodium salicylate increased the solubility of Piroxicam and resulted in a stable gel, whereas the gel prepared using sodium benzoate changed its color from white to light yellowish after one week of preparation. The hydrotropic potato starch gel is proposed as a suitable vehicle for the topical delivery of Piroxicam.
Keywords: Piroxicam, potato starch, hydrotropic salts, hydrotropic starch gel
Procedia PDF Downloads 147
3743 Numerical Investigation into Capture Efficiency of Fibrous Filters
Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard
Abstract:
Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are capture efficiency and pressure drop. Since higher capture efficiency generally comes with a higher pressure drop, which leads to higher operating costs, a detailed investigation of the separation mechanism is required to optimize the filter design, i.e., to achieve a high capture efficiency with a low pressure drop. Therefore, a two-dimensional flow simulation around a single fiber using Ansys CFX and Matlab is used to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious domain approach for the fiber by implementing a momentum loss model. This approach has been chosen to avoid creating a new mesh for different fiber sizes, thereby saving the time and effort of re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data is then extracted and imported into Matlab, and the particle trajectories are calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach with all relevant forces acting on the particles. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the diameter ratio of particle and fiber D, the fluid Reynolds number Re, the particle Reynolds number Re_p, the Stokes number St, the Froude number Fr, and the density ratio of fluid and particle ρf/ρp. The simulation results were then compared to the single fiber theory from the literature.
Keywords: BBO-equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory
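The abstract does not show the Matlab tracking routine itself; a minimal sketch of the general idea of one-way coupled Lagrangian tracking, with only Stokes drag retained (the study includes all relevant forces of the BBO equation) and with invented parameter values, could look like this:

```python
# Illustrative one-way coupled Lagrangian particle tracking (Stokes drag only).
# A particle released at rest in a uniform flow relaxes to the fluid velocity
# with the Stokes response time tau = rho_p * d_p**2 / (18 * mu).
# Parameter values below are hypothetical, not the study's settings.

def track_particle(u_fluid, rho_p, d_p, mu, dt=1e-6, steps=2000):
    tau = rho_p * d_p**2 / (18.0 * mu)   # particle response time [s]
    v, x = 0.0, 0.0                      # initial particle velocity/position
    for _ in range(steps):
        v += dt * (u_fluid - v) / tau    # explicit Euler step, drag force only
        x += dt * v
    return v, x, tau

# Hypothetical case: 1 micron water droplet in air moving at 0.1 m/s
v, x, tau = track_particle(u_fluid=0.1, rho_p=1000.0, d_p=1e-6, mu=1.8e-5)
```

The ratio of this response time tau to a flow time scale (fiber diameter over approach velocity) gives the Stokes number St named among the key parameters above.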
Procedia PDF Downloads 213
3742 Extending Theory of Planned Behavior to Modelling Chronic Patients’ Acceptance of Health Information: An Information Overload Perspective
Authors: Shu-Lien Chou, Chung-Feng Liu
Abstract:
Self-health management of chronic illnesses plays an important part in chronic illness treatment. However, the various kinds of health information (health education materials) that governments or healthcare institutions provide for patients may not achieve the expected outcome. One of the critical reasons affecting patients’ use intention could be patients’ perceived information overload regarding the health information. This study proposed an extended model of the Theory of Planned Behavior (TPB), which integrates perceived information overload as an additional construct to explore patients’ intention to use health information for self-health management. The independent variables are attitude, subjective norm, perceived behavioral control, and perceived information overload, while the dependent variable is the behavioral intention to use the health information. This cross-sectional study used a structured questionnaire for data collection, focusing on chronic patients with coronary artery disease (CAD), who are potential users of the health information, in a medical center in Taiwan. Data were analyzed using descriptive statistics of the basic information distribution of the questionnaire respondents, and a Partial Least Squares (PLS) structural equation model to assess reliability and construct validity and to test our hypotheses. A total of 110 patients were enrolled in this study and 106 valid questionnaires were collected. The PLS analysis indicates that patients’ perceived information overload regarding health information is the most critical factor influencing behavioral intention. The subjective norm and perceived behavioral control constructs of the TPB also had significant effects on patients’ intentions to use health information, whereas the attitude construct did not. This study demonstrated a comprehensive framework, which extends the TPB model with perceived information overload to predict patients’ behavioral intention to use health information.
We expect that the results of this study will provide useful insights for studying health information from the perspectives of academia, governments, and healthcare providers.
Keywords: chronic patients, health information, information overload, theory of planned behavior
Procedia PDF Downloads 442
3741 Influence of Magnetic Field on Microstructure and Properties of Copper-Silver Composites
Authors: Engang Wang
Abstract:
Cu-alloy composites are a kind of high-strength, high-conductivity Cu-based alloy, which have excellent mechanical and electrical properties and are widely used in the electronic, electrical, and machinery industries. However, the solidification microstructure of the composites, such as the primary or secondary dendrite arm spacing, plays an important role in their tensile strength and conductivity, and it is affected by the fabrication method. In this paper, two kinds of directional solidification methods, the exothermic powder method (EP method) and the liquid metal cooling method (LMC method), were used to fabricate Cu-alloy composites under different applied magnetic fields in order to investigate their influence on the solidifying microstructure of the Cu-alloy; the fabricated Cu-alloy composites were then drawn into wires to investigate the influence of the fabrication method and magnetic fields on the drawn microstructure of the fiber-reinforced Cu-alloy composites and their properties. The experiments on the Cu-Ag alloy under directional solidification and horizontal magnetic fields with different processing parameters show that: 1) For the Cu-Ag alloy with the EP method, the dendrites develop directionally in the cooled copper mould, and the solidifying microstructure is effectively refined by applying horizontal magnetic fields. 2) For the Cu-Ag alloy with the LMC method, the primary dendrite arm spacing decreases and the content of Ag in the dendrite increases as the drawing velocity of solidification increases. 3) The dendrite is refined and the content of Ag in the dendrite increases as the magnetic flux intensity increases; meanwhile, the growth direction of the dendrite is also affected by the magnetic field. The results for the Cu-Ag alloy in situ composites subjected to the drawing deformation process show that the micro-hardness of the alloy is higher at smaller dendrite arm spacings. When the dendrite growth orientation is consistent with the axis of the samples, the conductivity of the composites increases as the secondary dendrite arm spacing increases. However, the conductivity is reduced under applied magnetic fields owing to disruption of the dendrite growth orientation.
Keywords: Cu-Ag composite, magnetic field, microstructure, solidification
Procedia PDF Downloads 216
3740 Methodologies for Stability Assessment of Existing and Newly Designed Reinforced Concrete Bridges
Authors: Marija Vitanova, Igor Gjorgjiev, Viktor Hristovski, Vlado Micov
Abstract:
Evaluation of stability is very important in the process of defining optimal structural measures for the maintenance, repair, and strengthening of bridge structures. Presented in this paper are methodologies for evaluating the seismic stability of existing reinforced concrete bridges designed without consideration of seismic effects and for checking the structural justification of newly designed bridge structures. All bridges are located in the territory of the Republic of North Macedonia. A total of 26 existing bridges of different structural systems have been analyzed. Visual inspection has been carried out for all bridges, along with the definition of three main damage categories according to which the structures have been categorized with respect to the need for their repair and strengthening. Investigations testing the quality of the built-in materials have been carried out, and dynamic tests determining the dynamic characteristics of the structures have been conducted by use of non-destructive ambient vibration measurements. The conclusions drawn from the performed measurements and tests have been used for the development of accurate mathematical models that have been analyzed for static and dynamic loads. Based on the geometrical characteristics of the cross-sections and the physical characteristics of the built-in materials, interaction diagrams have been constructed. These diagrams, along with the obtained section quantities under seismic effects, have been used to obtain the bearing capacity of the cross-sections. The results obtained from the conducted analyses point to the need for the repair of certain structural parts of the bridge structures.
They indicate that the stability of the superstructure elements is not critical under seismic effects, unlike the elements of the substructure, whose strengthening is necessary.
Keywords: existing bridges, newly designed bridges, reinforced concrete bridges, stability assessment
Procedia PDF Downloads 104
3739 Dietary Intake and the Risk of Hypertriglyceridemia in Adults: Tehran Lipid and Glucose Study
Authors: Parvin Mirmiran, Zahra Bahadoran, Sahar Mirzae, Fereidoun Azizi
Abstract:
Background and aim: Lifestyle factors, especially dietary intakes, play an important role in the metabolism of lipids and lipoproteins. In this study, we assessed the association between dietary factors and 3-year changes in serum triglycerides (TG), HDL-C, and the atherogenic index of plasma among Iranian adults. This longitudinal study was conducted on 1938 subjects, aged 19-70 years, who participated in the Tehran Lipid and Glucose Study. Demographics, anthropometrics, and biochemical measurements including serum TG were assessed at baseline (2006-2008) and after a 3-year follow-up (2009-2011). Dietary data were collected at baseline using a validated 168-item semi-quantitative food frequency questionnaire. The risk of hypertriglyceridemia across quartiles of dietary factors was evaluated using logistic regression models with adjustment for age, gender, body mass index, smoking, physical activity, and energy intake. Results: The mean age of the participants at baseline was 41.0±13.0 y. Mean TG and HDL-C at baseline were 143±86 and 42.2±10.0 mg/dl, respectively. Three-year changes in serum TG were inversely related to energy intake from phytochemical-rich foods, whole grains, and legumes (P<0.05). Higher, compared to lower, intakes of dietary fiber and phytochemical-rich foods had a similar impact on the decreased risk of hypertriglyceridemia (OR=0.58, 95% CI=0.34-1.00). Higher, compared to lower, dietary sodium-to-potassium ratios (Na/K ratio) increased the risk of hypertriglyceridemia by 63% (OR=1.63, 95% CI=0.34-1.00). Conclusion: The findings showed that higher intakes of fiber and phytochemical-rich foods, especially whole grains and legumes, could have protective effects against lipid disorders; in contrast, a higher sodium-to-potassium ratio had an undesirable effect on triglycerides.
Keywords: lipid disorders, hypertriglyceridemia, diet, food science
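The odds ratios quoted above come from adjusted logistic regression models; purely as an illustration of the underlying quantity, an unadjusted odds ratio with a 95% Wald confidence interval can be computed from a 2×2 exposure–outcome table (the counts below are invented, not the study's data):

```python
import math

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio from a 2x2 table, with a 95% Wald CI."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # standard error of log(OR) is sqrt of the sum of reciprocal cell counts
    se_log = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                       + 1 / unexposed_cases + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Invented counts: hypertriglyceridemia cases/controls by high vs. low Na/K intake
or_, ci = odds_ratio(40, 60, 25, 75)
```

An OR of 2.0 here would read as "the odds of hypertriglyceridemia are twice as high in the exposed group"; the study's reported ORs additionally adjust for age, gender, BMI, smoking, physical activity, and energy intake, which this simple table cannot do.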
Procedia PDF Downloads 471
3738 Products in Early Development Phases: Ecological Classification and Evaluation Using an Interval Arithmetic Based Calculation Approach
Authors: Helen L. Hein, Joachim Schwarte
Abstract:
As a pillar of sustainable development, ecology has become an important milestone in the research community, especially due to global challenges like climate change. The ecological performance of products can be scientifically assessed with life cycle assessments. In the construction sector, significant amounts of CO2 emissions are attributed to the energy used for building heating. Sustainable construction materials for insulation purposes are therefore essential, and aerogels have been explored intensively in recent years due to their low thermal conductivity. The WALL-ACE project thus aims to develop an aerogel-based thermal insulating plaster that would achieve very low thermal conductivities. But since in early development phases a lot of information is still missing or not yet accessible, the ecological assessment of innovative products is increasingly based on uncertain data, which can lead to significant deviations in the results. To be able to predict realistically how meaningful the results are and how viable the developed products may be with regard to their respective markets, these deviations have to be considered. Therefore, a classification method is presented in this study, which may allow comparing the ecological performance of novel products with already established and competitive materials. In order to achieve this, an alternative calculation method was used that allows computing with lower and upper bounds so as to consider all possible values in the absence of precise data. The life cycle analysis of the considered products was conducted with an interval arithmetic based calculation method. The results lead to the conclusion that the interval solutions describing the possible environmental impacts are so wide that the usability of the results is limited.
Nevertheless, further optimization in reducing the environmental impacts of aerogels seems to be needed for them to become more competitive in the future.
Keywords: aerogel-based, insulating material, early development phase, interval arithmetic
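The interval arithmetic based calculation method is not detailed in the abstract; a minimal sketch of the core idea, propagating lower and upper bounds through an impact sum instead of point values, could look like the following (the inventory numbers are invented for illustration):

```python
class Interval:
    """Closed interval [lo, hi] with just the arithmetic needed for a
    simple life-cycle impact sum: impact = sum(amount_i * factor_i),
    where both amounts and characterization factors are uncertain."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # sum of intervals: add the bounds component-wise
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # product of intervals: take min/max over all corner products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Hypothetical inventory: two inputs with uncertain amounts [kg] and
# uncertain GWP characterization factors [kg CO2-eq per kg]
impact = (Interval(1.0, 2.0) * Interval(2.0, 2.5)
          + Interval(0.25, 0.5) * Interval(10.0, 14.0))
```

As the abstract observes, the guaranteed-enclosure property that makes such bounds rigorous is also what makes the result intervals wide when many uncertain inputs are combined.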
Procedia PDF Downloads 146
3737 An Overview of Domain Models of Urban Quantitative Analysis
Authors: Mohan Li
Abstract:
Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analytical work cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning. During the whole work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. Selecting a suitable data set from this massive amount of data, then managing and processing it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the technologies and technological iterations that may affect urban research in the future, helping researchers discover urban problems and implement targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, the urban ecological domain model, the urban industry domain model, the development dynamics domain model, the urban social and cultural domain model, the urban traffic domain model, and the urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc.
These seven models make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.
Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design
Procedia PDF Downloads 180
3736 Support for Reporting Guidelines in Surgical Journals Needs Improvement: A Systematic Review
Authors: Riaz A. Agha, Ishani Barai, Shivanchan Rajmohan, Seon Lee, Mohammed O. Anwar, Alex J. Fowler, Dennis P. Orgill, Douglas G. Altman
Abstract:
Introduction: Medical knowledge is growing fast. Evidence-based medicine works best if the evidence is reported well. Past studies have shown reporting quality to be lacking in the field of surgery. Reporting guidelines are an important tool for authors to optimize the reporting of their research. The objective of this study was to analyse the frequency and strength of recommendation of such reporting guidelines within surgical journals. Methods: A systematic review of the 198 journals within the Journal Citation Report 2014 (surgery category) published by Thomson Reuters was undertaken. The online guide for authors for each journal was screened by two independent groups and the results were compared. Data regarding the presence and strength of recommendation to use reporting guidelines were extracted. Results: 193 journals were included (as five appeared twice, having changed their names). These had a median impact factor of 1.526 (range 0.047 to 8.327), a median of 145 articles published per journal (range 29-659), and 34,036 articles published in total over the two-year window 2012-2013. The majority (62%) of surgical journals made no mention of reporting guidelines within their guidelines for authors. Of the journals (38%) that did mention them, only 14% (10/73) required the use of all relevant reporting guidelines. The most frequently mentioned reporting guideline was CONSORT (46 journals). Conclusion: The mention of reporting guidelines within the guide for authors of surgical journals needs improvement. Authors, reviewers and editors should work to ensure that research is reported in line with the relevant reporting guidelines. Journals should consider hard-wiring adherence to them.
This will allow peer-reviewers to focus on what is present, not what is missing, raising the level of scholarly discourse between authors and the scientific community and reducing frustration amongst readers.
Keywords: CONSORT, guide for authors, PRISMA, reporting guidelines, journal impact factor, citation analysis
Procedia PDF Downloads 466
3735 Synthesis and Properties of Chitosan-Graft-Polyacrylamide/Gelatin Superabsorbent Composites for Wastewater Purification
Authors: Hafida Ferfera-Harrar, Nacera Aiouaz, Nassima Dairi
Abstract:
Superabsorbent polymers have received much attention and are used in many fields because of their characteristics, which are superior to those of traditional absorbents such as sponge and cotton. It is therefore important, but challenging, to prepare highly and rapidly swelling superabsorbents. A reliable, efficient, and low-cost technique for removing heavy metal ions from wastewater is adsorption using bio-adsorbents obtained from biological materials, such as polysaccharide-based superabsorbent hydrogels. In this study, novel multi-functional superabsorbent composites of the semi-interpenetrating polymer network (semi-IPN) type were prepared via graft polymerization of acrylamide onto a chitosan backbone in the presence of gelatin, CTS-g-PAAm/Ge, using potassium persulfate and N,N’-methylenebisacrylamide as initiator and cross-linker, respectively. These hydrogels were also partially hydrolyzed to achieve superabsorbents with ampholytic properties and the highest swelling capacity. The formation of the grafted network was evidenced by Fourier transform infrared spectroscopy (ATR-FTIR) and thermogravimetric analysis (TGA). The porous structures were observed by scanning electron microscopy (SEM). From the TGA analysis, it was concluded that the incorporation of the Ge in the CTS-g-PAAm network only marginally affected its thermal stability. The effect of gelatin content on the swelling capacities of these superabsorbent composites was examined in various media (distilled water, saline, and pH solutions). The water absorbency was enhanced by adding Ge to the network, where the optimum value was reached at 2 wt. % of Ge. The hydrolysis not only greatly optimized the absorption capacity but also improved the swelling kinetics. These materials also showed reswelling ability. We believe that these superabsorbent materials would be very effective for the adsorption of harmful metal ions from wastewater.
Keywords: chitosan, gelatin, superabsorbent, water absorbency
Procedia PDF Downloads 471
3734 Security of Database Using Chaotic Systems
Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem
Abstract:
Database (DB) security demands permitting authorized users’ actions and prohibiting non-authorized users’ and intruders’ actions on the DB and the objects inside it. Organizations that are running successfully demand the confidentiality of their DBs. They do not allow unauthorized access to their data/information, and they demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are the security concerns. There are four types of controls to obtain DB protection: access control, information flow control, inference control, and cryptographic control. The cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communications. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rössler, Lorenz, etc.) or discrete (logistic, Hénon, etc.) systems. The most important characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. The pseudo-random number generators (PRNGs) derived from the different chaotic algorithms are implemented using Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. Then, these algorithms are used to secure a conventional DB (plaintext), where the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic logistic maps and another based on two chaotic Hénon maps, where the two chaotic maps run side by side, starting from random, independent initial conditions and parameters (the encryption keys).
The resulting hybrid PRNGs passed the NIST statistical test suite.
Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST
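The exact construction of the proposed hybrid PRNGs is not given in the abstract; a toy sketch of the general idea, two logistic maps iterated side by side from independent keys and combined, could look like the following (illustrative only: the authors' generators were validated with the NIST suite, which this toy is not, and the parameter values are invented):

```python
def hybrid_logistic_prng(key1, key2, r1=3.99, r2=3.98, n_bytes=16, burn_in=500):
    """Toy hybrid PRNG: two logistic maps x -> r*x*(1-x) run side by side
    from independent initial conditions in (0, 1) (the 'keys'); each state
    is quantized to a byte and the two bytes are combined with XOR."""
    x, y = key1, key2
    for _ in range(burn_in):          # discard the transient
        x = r1 * x * (1.0 - x)
        y = r2 * y * (1.0 - y)
    out = bytearray()
    for _ in range(n_bytes):
        x = r1 * x * (1.0 - x)
        y = r2 * y * (1.0 - y)
        out.append((int(x * 256) ^ int(y * 256)) & 0xFF)
    return bytes(out)

# Stream-cipher style use: XOR a record with the keystream, XOR again to decrypt
stream = hybrid_logistic_prng(0.123456, 0.654321)
cipher = bytes(b ^ k for b, k in zip(b"db record!", stream))
plain  = bytes(b ^ k for b, k in zip(cipher, stream))
```

The sensitivity to initial conditions mentioned above is exactly what makes the keystream key-dependent: slightly different keys diverge to completely different byte streams after the burn-in.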
Procedia PDF Downloads 267
3733 Descriptive Analysis of the Relationship between State and Civil Society in Hegel's Political Thought
Authors: Garineh Keshishyan Siraki
Abstract:
Civil society is one of the most important concepts of the twentieth century and remains so today. Modern and postmodern thinkers have provided different definitions of civil society, and the concept has of course undergone many changes over time. The relationship between government and civil society is one of the relationships that has attracted the attention of many contemporary thinkers. Hegel, the thinker discussed in this article, also explores the relationship between these concepts; emphasizing the dialectical method, he drew distinctions among the family, the state, and civil society. In Hegel's view, the creation of civil society leads to a reduction of social conflict and increased social cohesion. The importance of the issue stems from the study of social cohesion and the ways to increase it. This paper, which uses a descriptive-analytic method to examine Hegel's dialectical theory of civil society, first examines the relationship between the family and the state, identifying civil society as the interface and connecting circle between the two, and then investigates the tripartite economic, legal, and pluralistic systems. After examining the concepts of the market, rights and duties, individual interests, and the development of the exchange economy, the article turns to Hegel's concept of freedom and its relation to civil society. The results of this survey show that, in Hegel's thought, the separation between the political system and the social system is natural and necessary. In Hegel's view, because the members of society have selfish features, the community is in tension and contradiction. Therefore, the social realms within which conflicts emerge must be identified and controlled by specific mechanisms.
It can also be concluded that the government can act to reduce social conflicts by legislating, using force, or forming trade unions. The bottom line is that Hegel wants to reconcile the individual, the state, and civil society, and that it is not possible to rely on ethics alone.
Keywords: civil society, cohesion system, economic system, family, the legal system, state
Procedia PDF Downloads 204
3732 Day-To-Day Variations in Health Behaviors and Daily Functioning: Two Intensive Longitudinal Studies
Authors: Lavinia Flueckiger, Roselind Lieb, Andrea H. Meyer, Cornelia Witthauer, Jutta Mata
Abstract:
Objective: Health behaviors tend to show high variability over time within the same person. However, most existing research can only assess a snapshot of a person’s behavior and cannot capture this natural daily variability. Two intensive longitudinal studies examine the variability in health behaviors over one academic year and its implications for other aspects of daily life, such as affect and academic performance. Can a single day of increased physical activity, snacking, or improved sleep already have beneficial effects? Methods: In two intensive longitudinal studies with up to 65 assessment days over an entire academic year, university students (Study 1: N = 292; Study 2: N = 304) reported sleep quality, physical activity, snacking, positive and negative affect, and learning goal achievement. Results: Multilevel structural equation models showed that on days on which participants reported better sleep quality or more physical activity than usual, they also reported increased positive affect, decreased negative affect, and better learning goal achievement. Higher day-to-day snacking was only associated with increased positive affect. Both increased day-to-day sleep quality and physical activity were indirectly associated with better learning goal achievement through changes in positive and negative affect; results for snacking were mixed. Importantly, day-to-day sleep quality was a stronger predictor of affect and learning goal achievement than physical activity or snacking. Conclusion: One day of better sleep or more physical activity than usual is associated with improved affect and academic performance. These findings have important implications for low-threshold interventions targeting the improvement of daily functioning.Keywords: sleep quality, physical activity, snacking, affect, academic performance, multilevel structural equation model
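The "than usual" comparisons in this abstract rest on within-person (day-level) centering: each day's predictor is split into the person's own average and that day's deviation from it, and the deviation is what enters the day level of the multilevel model. A minimal numpy sketch with invented sleep-quality ratings (the scale and values are assumptions, not study data):

```python
import numpy as np

# Hypothetical sleep-quality ratings for one student over 6 assessment days.
sleep = np.array([4.0, 5.0, 3.0, 6.0, 4.0, 5.0])

person_mean = sleep.mean()        # between-person component
deviation = sleep - person_mean   # within-person ("than usual") component

# A positive deviation means "better sleep than usual" for that student on
# that day; by construction the deviations average to zero within a person.
print(person_mean)   # 4.5
print(deviation)     # [-0.5  0.5 -1.5  1.5 -0.5  0.5]
```

Day 4 (deviation +1.5) is a "better sleep than usual" day, the kind of day on which the models predict increased positive affect and better learning goal achievement.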
Procedia PDF Downloads 5813731 Development and Implementation of a Business Technology Program Based on Techniques for Reusing Water in a Colombian Company
Authors: Miguel A. Jimenez Barros, Elyn L. Solano Charris, Luis E. Ramirez, Lauren Castro Bolano, Carlos Torres Barreto, Juliana Morales Cubillo
Abstract:
This project sought to mitigate the high levels of water consumption in industrial processes, in accordance with the water-rationing plans promoted at national and international levels due to the water consumption projections published by the United Nations. Water consumption has three main uses: municipal (common use), agricultural, and industrial, where the latter consumes the smallest share (around 20% of total consumption). Aware of world water scarcity, a Colombian company producing mass-consumption products decided to implement policies and techniques for water treatment, recycling, and reuse. The project consisted of a business technology program that permits a better use of the wastewater generated by production operations. This approach reduces potable water consumption, improves the condition of the water discharged to the sewage system, generates a positive environmental impact for the region, and serves as a reference model at national and international levels. In order to achieve the objective, a process flow diagram was used to identify the industrial processes that required potable water. This strategy allowed the industry to define a water reuse plan at the operational level without affecting the requirements of the manufacturing process and, moreover, to support the activities carried out in administrative buildings. Afterwards, the company evaluated and selected the chemical and biological processes required for water reuse, in compliance with Colombian law. The implementation of the business technology program optimized water use and raised the recirculation rate up to 70%, accomplishing an important reduction of the regional environmental impact.Keywords: bio-reactor, potable water, reverse osmosis, water treatment
Procedia PDF Downloads 2393730 Normalized P-Laplacian: From Stochastic Game to Image Processing
Authors: Abderrahim Elmoataz
Abstract:
More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs where each vertex represents measured data and each edge represents a relationship (connectivity, or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image processing and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been an increasing interest in the investigation of one of the major mathematical tools for signal and image analysis, namely Partial Differential Equations (PDEs) and variational methods on graphs. The normalized p-Laplacian operator has recently been introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest of this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operator, which were extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of the p-harmonious functions introduced as a discrete approximation for both the infinity Laplacian and p-Laplacian equations. 
Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems
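The tug-of-war-with-noise interpretation suggests a simple value iteration for p-harmonious functions on a graph: each interior vertex is updated as a convex combination of a game term (the midrange of neighbouring values) and a diffusion term (their mean). A minimal sketch, where the graph, boundary data, and mixing weight alpha are illustrative choices rather than the paper's construction:

```python
def p_harmonious(adj, boundary, alpha=0.5, tol=1e-10, max_iter=10000):
    """Iterate u(v) = (alpha/2)*(max + min of neighbours) + (1-alpha)*mean.

    alpha weights the tug-of-war (game) term; 1-alpha weights the random-walk
    (diffusion) term. Boundary vertices keep their prescribed values.
    """
    u = {v: boundary.get(v, 0.0) for v in adj}
    for _ in range(max_iter):
        delta = 0.0
        for v, nbrs in adj.items():
            if v in boundary:              # Dirichlet data stays fixed
                continue
            vals = [u[w] for w in nbrs]
            new = (alpha / 2) * (max(vals) + min(vals)) \
                  + (1 - alpha) * sum(vals) / len(vals)
            delta = max(delta, abs(new - u[v]))
            u[v] = new
        if delta < tol:                    # converged to the fixed point
            break
    return u

# Path graph 0-1-2-3 with boundary values u(0) = 0, u(3) = 1.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
u = p_harmonious(adj, {0: 0.0, 3: 1.0})
```

On a path every interior vertex has two neighbours, so the midrange equals the mean and all alpha (hence all p) give the same linear interpolant, u(1) ≈ 1/3 and u(2) ≈ 2/3; differences between values of p only appear at vertices of degree greater than two.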
Procedia PDF Downloads 5153729 Effect of Carbon Nanotubes on Ultraviolet and Immersion Stability of Diglycidyl Ether of Bisphenol A Epoxy Coating
Authors: Artemova Anastasiia, Shen Zexiang, Savilov Serguei
Abstract:
The marine environment is aggressive due to a number of factors, such as moisture, temperature, winds, ultraviolet radiation, chloride ion concentration, oxygen concentration, pollution, and biofouling, all contributing to marine corrosion. Protective organic coatings provide protection either by the barrier action of the layer, which is limited due to permeability to water and oxygen, or by active corrosion inhibition and cathodic protection due to the pigments in the coating. Carbon nanotubes can provide not only a barrier effect but also a passivation effect via adsorbing molecular oxygen and hydroxyl, chloride, and sulphate anions. Multiwall carbon nanotube composites provide very important properties such as mechanical strength, non-cytotoxicity, outstanding thermal and electrical conductivity, and very strong absorption of ultraviolet radiation. Samples of stainless steel (316L) coated with epoxy resin containing carbon nanotube-based pigments were exposed to UV irradiation (340 nm) and to immersion in sodium chloride solution for 1000 h, and their corrosion behavior in 3.5 wt% sodium chloride (NaCl) solution was investigated. Experimental results showed that the corrosion current significantly decreased in the presence of carbon nanotube-based materials, especially nitrogen-doped ones, in the composite coating. The importance of the structure and composition of the pigment materials was established, and the mechanism of the protection was described. Finally, the effect of nitrogen doping on the corrosion behavior was investigated. The pigment-polymer crosslinking improves the coating performance: the corrosion rate decreases in comparison with the pure epoxy coating from 5.7E-05 to 1.4E-05 mm/yr for the coating without any degradation; by more than 6 times for the coating after ultraviolet degradation; and by more than 16% for the coatings after immersion degradation.Keywords: corrosion, coating, carbon nanotubes, degradation
Procedia PDF Downloads 1643728 Energy Efficient Autonomous Lower Limb Exoskeleton for Human Motion Enhancement
Authors: Nazim Mir-Nasiri, Hudyjaya Siswoyo Jo
Abstract:
The paper describes the conceptual design, control strategies, and partial simulation of a new fully autonomous lower limb wearable exoskeleton system for human motion enhancement that can support its own weight and increase strength and endurance. Various problems still remain to be solved, of which the most important is the creation of a power- and cost-efficient system that will allow an exoskeleton to operate for extended periods without batteries being frequently recharged. The designed exoskeleton is able to decouple the weight/mass-carrying function of the system from the forward motion function, which reduces the power and size of the propulsion motors and thus the overall weight and cost of the system. The decoupling takes place by blocking the motion at the knee joint by placing a passive air cylinder across the joint. The cylinder is actuated when the knee angle has reached the minimum allowed value to bend. The value of the minimum bending angle depends on the usual walking style of the subject. The mechanism of the exoskeleton features a seat to rest the subject’s body weight on at the moment of blocking the knee joint motion. The mechanical structure of each leg has six degrees of freedom: four at the hip, one at the knee, and one at the ankle. The exoskeleton legs are attached to the subject’s legs by flexible cuffs. The operation of all actuators depends on the amount of pressure felt by the feet pressure sensors and the knee angle sensor. The sensor readings depend on the actual posture of the subject and can be classified into three distinct cases: the subject stands on one leg, the subject stands still on both legs, and the subject stands on both legs but transfers weight from one leg to the other. This exoskeleton is power efficient because the electrical motors are smaller and do not participate in supporting the weight as in all other existing exoskeleton designs.Keywords: energy efficient system, exoskeleton, motion enhancement, robotics
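The three sensor-driven posture cases and the knee-locking rule described above can be sketched as simple decision logic. The thresholds, units, and minimum bending angle below are illustrative assumptions, not values from the paper:

```python
# Hypothetical thresholds for the posture/actuation logic described above.
MIN_KNEE_ANGLE = 30.0   # deg; assumed "minimum allowed value to bend"
CONTACT = 50.0          # N; assumed pressure above which a foot bears weight

def classify_posture(p_left, p_right):
    """Classify the three cases from the two feet pressure readings."""
    left, right = p_left > CONTACT, p_right > CONTACT
    if left and right:
        # both legs loaded: standing still vs. transferring weight,
        # distinguished here by an assumed imbalance threshold
        return "both_legs" if abs(p_left - p_right) < 100.0 else "transferring"
    return "one_leg"

def lock_knee(knee_angle, foot_pressure):
    """Actuate the passive air cylinder only when the loaded leg's knee
    reaches the minimum bending angle."""
    return foot_pressure > CONTACT and knee_angle <= MIN_KNEE_ANGLE

print(classify_posture(400.0, 390.0))  # both_legs
print(lock_knee(25.0, 400.0))          # True
```

The point of the design is visible in `lock_knee`: while the cylinder is locked and the seat carries the body weight, the electric motors only have to drive forward motion, which is what permits the smaller, power-efficient motors the abstract claims.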
Procedia PDF Downloads 3763727 Indigenous Farming Strategies Used by Smallholder Farmers to Adapt to Climate Extremes in Limpopo Province, South Africa
Authors: Zongho Kom, Nthaduleni S. Nethengwe
Abstract:
From generation to generation, traditional farming systems have supported indigenous populations across the globe. In rural areas where communities depend primarily on farming for their livelihood, traditional farming has long been the main source of subsistence. This study examines the traditional agricultural techniques and resilient agrarian methods employed by smallholder farmers in Limpopo Province to adapt to climate change. The study used participatory appraisal for primary data gathering, complemented by qualitative and quantitative techniques. Data were gathered through focus groups, questionnaires, and semi-structured interviews. A sample of 300 indigenous farmers was chosen across the three survey sites. The results showed that, to increase household food security and adapt to climate change, local farmers rely on indigenous farming techniques. The findings indicated that crop productivity can be increased and the effects of climate shocks lessened by intercropping, crop rotation, cover crops, traditional organic composting, integrated crop-animal farming, indigenous ploughing, and rainwater harvesting. The significance of these findings lies in their potential to guide the development and execution of suitable resilience initiatives that adapt to shifts in rural landscapes, agriculture, and management. The study illustrates how important indigenous knowledge is to rural communities' agricultural sectors. Additionally, it offers evidence that indigenous knowledge should be viewed as a cooperative resource that supports resilience tactics and nature-based solutions that have helped rural communities endure.Keywords: traditional farming strategies, limpopo province, climate change, smallholder farmer, rainwater harvesting
Procedia PDF Downloads 163726 Micropropagation and in vitro Conservation via Slow Growth Techniques of Prunus webbii (Spach) Vierh: An Endangered Plant Species in Albania
Authors: Valbona Sota, Efigjeni Kongjika
Abstract:
Wild almond is a woody species which is difficult to propagate either generatively by seed or by vegetative methods (grafting or cuttings), and is also considered Endangered (EN) in Albania based on IUCN criteria. As a wild relative of cultivated fruit trees, this species represents a source of genetic variability and can be very important in breeding programs and cultivation. For this reason, it would be of interest to use an effective method of in vitro mid-term conservation, which involves strategies to slow plant growth through physicochemical alterations of the in vitro growth conditions. Multiplication of wild almond was carried out using zygotic embryos as primary explants, with the purpose of developing a successful propagation protocol. Results showed that zygotic embryos can proliferate through direct or indirect organogenesis. During the subculture stage, a great number of new plantlets identical to the mother plants derived from the zygotic embryos was obtained. All in vitro plantlets obtained from the subcultures underwent in vitro conservation by minimal growth at low temperature (4ºC) in darkness. The efficiency of this technique was evaluated for conservation periods of 3, 6, and 10 months. Maintenance in these conditions reduced microcutting growth. Survival and regeneration rates for each period were evaluated; the maximal time of conservation without subculture at 4ºC was 10 months, but survival and regeneration rates were significantly reduced, to 15.6% and 7.6% respectively. An optimal period of conservation in these conditions can be considered 5-6 months of storage, which can lead to survival and regeneration rates of 50-60%. This protocol may be beneficial for mass propagation, mid-term conservation, and genetic manipulation of wild almond.Keywords: micropropagation, minimal growth, storage, wild almond
Procedia PDF Downloads 1333725 The Mechanisms of Peer-Effects in Education: A Frame-Factor Analysis of Instruction
Authors: Pontus Backstrom
Abstract:
In the educational literature on peer effects, attention has been drawn to the fact that the mechanisms creating peer effects are still to a large extent hidden in obscurity. The hypothesis in this study is that the frame factor theory can be used to explain these mechanisms. At the heart of the theory is the concept of “time needed” for students to learn a certain curriculum unit. The relation between class-aggregated time needed and the actual time available steers and constrains the actions possible for the teacher. Further, the theory predicts that the timing and pacing of the teacher’s instruction are governed by a “criterion steering group” (CSG), namely the pupils in the 10th-25th percentile of the aptitude distribution in class. The class composition thereby sets the possibilities and limitations for instruction, creating peer effects on individual outcomes. To test whether the theory can be applied to the issue of peer effects, the study employs multilevel structural equation modelling (M-SEM) on Swedish TIMSS 2015 data (Trends in International Mathematics and Science Study; students N = 4090, teachers N = 200). Using confirmatory factor analysis (CFA) in the SEM framework in MPLUS, latent variables such as “limitations of instruction” are specified from TIMSS survey items according to the theory. The results indicate a good fit of the measurement model to the data. Research is still in progress, but preliminary results from initial M-SEM models verify a strong relation between the mean level of the CSG and the latent variable of limitations on instruction, a variable which in turn has a great impact on individual students’ test results. 
Further analysis is required, but so far the analysis indicates a confirmation of the predictions derived from the frame factor theory and reveals that one of the important mechanisms creating peer effects on student outcomes is the effect the class composition has on the teacher’s instruction in class.Keywords: compositional effects, frame factor theory, peer effects, structural equation modelling
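Operationalizing the CSG itself is straightforward: take the pupils between the 10th and 25th percentiles of the class aptitude distribution and average them. A numpy sketch with an invented class (the TIMSS-like score scale and class size are assumptions):

```python
import numpy as np

# One invented class of 25 pupils on a TIMSS-like scale (mean 500, sd 100).
rng = np.random.default_rng(0)
aptitude = rng.normal(500, 100, size=25)

# Criterion steering group: pupils in the 10th-25th percentile of the class.
lo, hi = np.percentile(aptitude, [10, 25])
csg = aptitude[(aptitude >= lo) & (aptitude <= hi)]

# The class-level predictor the theory emphasises: the CSG mean, which is
# hypothesised to govern the timing and pacing of instruction.
csg_mean = csg.mean()
```

Note that the CSG mean is a property of the class composition, not of any individual pupil, which is why it can transmit peer effects through the teacher's pacing decisions.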
Procedia PDF Downloads 1383724 Analysis and Design of Inductive Power Transfer Systems for Automotive Battery Charging Applications
Authors: Wahab Ali Shah, Junjia He
Abstract:
Transferring electrical power without any wiring has been a dream since the late 19th century. Early advances in this area came mostly from work on microwave systems. However, the subject has recently become very attractive due to practical systems. There are low-power applications, such as charging the batteries of contactless toothbrushes or implanted devices, and higher-power applications, such as charging the batteries of electric automobiles or buses. In the first group of applications the operating frequencies are in the microwave range, while the frequency is lower in high-power applications. In the latter, the concept is also called inductive power transfer. The aim of the paper is to give an overview of inductive power transfer for electric vehicles, with a special concentration on coil design and power converter simulation for static charging. Coil design is one of the most critical tasks for an efficient and safe power transfer. Power converters are used on both sides of the system. The converter on the primary side generates a high-frequency voltage to excite the primary coil. The purpose of the converter on the secondary side is to rectify the voltage transferred from the primary to charge the battery. In this paper, an inductive power transfer system is studied. Inductive power transfer is a promising technology with several possible applications. Operation principles of these systems are explained, and the components of the system are described. Finally, a single-phase 2 kW system was simulated and the results are presented. The work presented in this paper is just an introduction to the concept. A reformed compensation network based on the traditional inductor-capacitor-inductor (LCL) topology is proposed to achieve a robust response to the large coupling variations that are common in dynamic wireless charging applications. In the future, this type of compensation should be studied. 
Also, a comparison of different compensation topologies should be made for the same power level.Keywords: coil design, contactless charging, electrical automobiles, inductive power transfer, operating frequency
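As a sketch of the compensation idea behind the LCL network, the coil-side capacitor is sized so that the coil resonates at the operating frequency, C = 1/((2πf)²L), which cancels the coil's reactance at that frequency. The 85 kHz operating frequency and the coil inductance below are illustrative assumptions, not values from the paper:

```python
import math

# Illustrative tuning of a compensation capacitor for an IPT primary coil.
f0 = 85e3        # Hz; a frequency typical of EV static charging systems
L1 = 120e-6      # H; assumed primary coil self-inductance

# Size the capacitor so the L1-C1 tank resonates at f0.
C1 = 1.0 / ((2 * math.pi * f0) ** 2 * L1)
print(f"C1 = {C1 * 1e9:.1f} nF")   # C1 = 29.2 nF

# Sanity check: the resonant frequency of the tuned tank recovers f0.
f_check = 1.0 / (2 * math.pi * math.sqrt(L1 * C1))
```

In a full LCL network the series inductor and the parallel branch are tuned together around this same resonance; the design question the abstract raises is how to keep that tuning robust when the coupling between primary and secondary coils varies, as it does in dynamic charging.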
Procedia PDF Downloads 2513723 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion
Authors: Omran M. Kenshel, Alan J. O'Connor
Abstract:
Estimating the service life of reinforced concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers relied more on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions of the service life of RC structures, in order to direct funds to the bridges found to be the most critical. Criticality of the structure can be considered either from the structural capacity (i.e. ultimate limit state) or from the serviceability viewpoint. This paper considers the service life of the structure from the structural capacity viewpoint only. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. In this paper the authors used their own experimental data for the correlation length (CL) of the most important deterioration parameters. The CL is a parameter of the correlation function (CF) by which the spatial fluctuation of a certain deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure.Keywords: chloride-induced corrosion, Monte-Carlo simulation, reinforced concrete, spatial variability
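The Monte Carlo reliability estimate described above amounts to sampling a resistance R and a load effect S and counting how often R < S. A minimal sketch; the lognormal distributions and all parameter values are illustrative, not the authors' data:

```python
import numpy as np
from math import log, sqrt, erfc

rng = np.random.default_rng(42)
n = 1_000_000

# Illustrative random variables for a girder at one point in its service life:
# flexural capacity R and load effect S, both lognormal (units: kNm).
R = rng.lognormal(mean=log(900.0), sigma=0.10, size=n)
S = rng.lognormal(mean=log(600.0), sigma=0.15, size=n)

# Monte Carlo estimate of the probability of failure.
pf = np.mean(R < S)

# Closed-form check: ln R - ln S is normal, so pf = Phi(-beta) exactly.
beta = (log(900.0) - log(600.0)) / sqrt(0.10**2 + 0.15**2)
pf_exact = 0.5 * erfc(beta / sqrt(2.0))
```

In the deterioration setting of the paper, R would shrink over time as chloride-induced corrosion reduces the reinforcement section (with the correlation length controlling how the loss varies along the girder), and the service life is the time at which pf exceeds a target value.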
Procedia PDF Downloads 4763722 Experimental and Theoretical Studies: Biochemical Properties of Honey on Type 2 Diabetes
Authors: Said Ghalem
Abstract:
Honey is primarily composed of the sugars glucose and fructose; depending on the honey, either fructose or glucose predominates. The higher the fructose concentration, the lower the glycemic index (GI). Accordingly, studies of the insulin response show a decrease in the amount of insulin secreted with honeys of increased fructose content. Honey is also a compound that can reduce lipid levels in the blood. Several studies in animals, whose results remain to be confirmed in humans, have shown that honey can have interesting effects when combined with other molecules: combined with metformin (a medicine taken by diabetics), it preserves tissue; combined with ginger, it increases antioxidant activity and thus avoids neurologic, neuropathic complications. Molecular modeling techniques are widely used in chemistry, biology, and the pharmaceutical industry. Most currently existing drugs target enzymes, and inhibition of DPP-4 is an important approach in the treatment of type 2 diabetes. For the inhibition of DPP-4 we have chosen the following molecules, which are involved in the management of type 2 diabetes and were added to honey: linagliptin (BI1356), sitagliptin (Januvia), vildagliptin, saxagliptin, alogliptin, and metformin (Glucophage). For this, we used the Molecular Operating Environment software. A Wistar rat study was initiated in our laboratory with a well-defined protocol; the animals were sacrificed according to international standards and with respect for the animal. The theoretical approach predicts the mode of interaction of a ligand with its target. Honey can have interesting effects when combined with other molecules: it preserves tissue, increases antioxidant activity, and thus avoids neurologic, neuropathic, or macrovascular complications. 
Examination of the organs, especially the kidneys, of the Wistar rats shows renal function parameters indicating that the damage caused by diabetes is only slightly perceptible compared with that observed without the addition of high-fructose honey.Keywords: honey, molecular modeling, DPP4 enzyme, metformin
Procedia PDF Downloads 1023721 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems
Authors: Mergim Gasi, Bojan Milovanovic, Sanjin Gumbarevic
Abstract:
Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved in the future. Dynamic heat transfer simulations are used for the calculation of building energy consumption because they give more realistic energy demands than stationary calculations, which do not take the building’s thermal mass into account. The software used for these dynamic simulations relies on methods based on analytical models, since numerical models are insufficient for longer periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). The two methods for calculating CTFs covered by this research are the Laplace method and the state-space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight facade elements and the shorter time steps used for the calculation. The algorithms for both the Laplace and state-space methods were implemented in Mathematica, and the results are compared to results from EnergyPlus and TRNSYS, since these programs use similar algorithms for the calculation of a building’s energy demand. This research aims to check the efficiency of the Laplace and state-space methods for calculating a building’s energy demand for heavyweight building elements and shorter sampling times, and it also provides the means for improving the algorithms used by these methods. As the reference for the boundary heat flux density, the finite difference method (FDM) is used. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method
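The FDM reference mentioned above can be sketched as an explicit 1-D transient conduction scheme for a heavyweight wall: T_i^{n+1} = T_i^n + Fo·(T_{i-1}^n - 2T_i^n + T_{i+1}^n), stable for a Fourier number Fo ≤ 0.5. The wall and material data below are illustrative (roughly dense concrete), not the paper's test cases:

```python
import numpy as np

# Illustrative material: dense concrete.
k, rho, cp = 2.0, 2400.0, 900.0      # W/mK, kg/m3, J/kgK
alpha = k / (rho * cp)               # thermal diffusivity, m2/s

L, nx = 0.3, 31                      # 0.3 m wall, 31 nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha             # chosen so Fo = 0.4 <= 0.5 (stability)
Fo = alpha * dt / dx**2

T = np.full(nx, 20.0)                # initial temperature, degC
T[0], T[-1] = 0.0, 20.0              # step change imposed on one surface

# Explicit time marching; long enough to reach steady state for this wall.
for _ in range(20000):
    T[1:-1] = T[1:-1] + Fo * (T[:-2] - 2 * T[1:-1] + T[2:])

# Boundary heat flux density at the warm surface, W/m2 (the quantity the
# CTF results are checked against).
q_in = -k * (T[-1] - T[-2]) / dx
```

The stability restriction is exactly why explicit FDM gets expensive for heavyweight elements and short sampling times: halving dx forces dt down by a factor of four, which is the practical motivation for the CTF methods the paper studies.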
Procedia PDF Downloads 1363720 Experimental Investigation on Performance of Beam Column Frames with Column Kickers
Authors: Saiada Fuadi Fancy, Fahim Ahmed, Shofiq Ahmed, Raquib Ahsan
Abstract:
The worldwide use of reinforced concrete construction stems from the wide availability of reinforcing steel as well as concrete ingredients. However, concrete construction requires a certain level of technology, expertise, and workmanship, particularly in the field during construction. As a supporting element in concrete column or wall construction, a kicker is cast as part of the slab or foundation to provide a convenient starting point for a wall or column, ensuring integrity at this important junction. For that reason, a comprehensive study was carried out to investigate the behavior of reinforced concrete frames with different kicker parameters. To achieve this objective, six half-scale specimens of portal reinforced concrete frames with kickers and one portal frame without a kicker were constructed according to common industry practice and subjected to cyclic incremental horizontal loading under sustained gravity load. The experimental data, obtained in four deflection-controlled cycles, were used to evaluate the behavior of the kickers. Load-displacement characteristics were obtained; maximum loads and deflections were measured and assessed. Finally, the test results of frames constructed with three different kicker thicknesses were compared with the kickerless frame. Similar crack patterns were observed for all specimens. Specimens with a kicker thickness of 3″ showed better results than specimens with a kicker thickness of 1.5″, as indicated by maximum load, stiffness, initiation of the first crack, and residual displacement. Despite its better performance, it could not be firmly concluded that the 4.5″ kicker thickness is the most appropriate one, because during the test of that specimen removal of the dial gauge was needed. 
Finally, compared with the kickerless specimen, it was observed that the performance of the kickerless specimen was relatively better than that of the kicker specimens.Keywords: crack, cyclic, kicker, load-displacement
Procedia PDF Downloads 3233719 The Effect of Antibiotic Use on Blood Cultures: Implications for Future Policy
Authors: Avirup Chowdhury, Angus K. McFadyen, Linsey Batchelor
Abstract:
Blood cultures (BCs) are an important aspect of the management of the septic patient, identifying the underlying pathogen and its antibiotic sensitivities. However, while the current literature outlines indications for initial BCs, there is little guidance on repeat sampling in the following 5-day period and little information on how antibiotic use can affect the usefulness of this investigation. A retrospective cohort study was conducted using inpatients who had undergone 2 or more BCs within 5 days between April 2016 and April 2017 at a 400-bed hospital in the west of Scotland and had received antibiotic therapy between the first and second BCs. The data for BC sampling were collected from the electronic microbiology database and cross-referenced with data from the hospital electronic prescribing system. Overall, 283 BCs were included in the study, taken from 92 patients (mean 3.08 cultures per patient, range 2-10). All 92 patients had initial BCs, of which 83 were positive (90%). 65 had a further sample within 24 hours of commencement of antibiotics, with 35 positive (54%). 23 had samples within 24-48 hours, with 4 (17%) positive; 12 patients had sampling at 48-72 hours, 12 at 72-96 hours, and 10 at 96-120 hours, with none positive. McNemar’s exact test was used to calculate statistical significance for patients who had blood cultures in multiple time blocks (initial, < 24h, 24-120h, > 120h). For initial vs. < 24h post-antibiotic BCs (53 patients tested), the proportion of positives fell from 46/53 to 29/53 (one-tailed P = 0.002, OR 3.43, 95% CI 1.48-7.96). For initial vs. 24-120h (n = 42), the proportions were 38/42 and 4/42 respectively (P < 0.001, OR 35.0, 95% CI 4.79-255.48). For initial vs. > 120h (n = 36), these were 33/36 and 2/36 (P < 0.001, OR ∞). These were also calculated for a positive in initial or < 24h vs. 
24-120h (n = 42), with proportions of 41/42 and 4/42 (P < 0.001, OR 38.0, 95% CI 5.22-276.78); and for initial or < 24h vs. > 120h (n = 36), with proportions of 35/36 and 2/36 respectively (P < 0.001, OR ∞). These data appear to show that taking an initial BC followed by a BC within 24 hours of antibiotic commencement would maximise blood culture yield while minimising the risk of false negative results. This could potentially remove the need for as many as 46% of BC samples without adversely affecting patient care. BC yield decreases sharply after 48 hours of antibiotic use and may not provide any clinically useful information after this time. Further multi-centre studies would validate these findings and provide a foundation for future health policy generation.Keywords: antibiotics, blood culture, efficacy, inpatient
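The paired comparisons above use McNemar's exact test, which conditions on the discordant pairs only. As a sketch, the initial vs. < 24h comparison can be reconstructed from the reported OR of 3.43 (= 24/7, which is also consistent with the fall from 46 to 29 positives among the 53 pairs); the discordant counts below are that reconstruction, not data stated directly in the abstract:

```python
from math import comb

def mcnemar_exact_one_tailed(b, c):
    """Exact one-tailed McNemar p-value: P(X >= b) for X ~ Binomial(b+c, 1/2),
    where b and c are the two discordant-pair counts."""
    n = b + c
    return sum(comb(n, k) for k in range(b, n + 1)) / 2 ** n

# Reconstructed discordant pairs for initial vs. < 24h (46/53 vs. 29/53):
b, c = 24, 7            # positive-then-negative, negative-then-positive
p = mcnemar_exact_one_tailed(b, c)
odds_ratio = b / c

print(round(p, 3))           # 0.002, matching the reported one-tailed P
print(round(odds_ratio, 2))  # 3.43, matching the reported OR
```

Concordant pairs carry no information about a change in positivity, which is why the test reduces to a binomial test on the 31 discordant pairs.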
Procedia PDF Downloads 1773718 Study of Synergetic Effect by Combining Dielectric Barrier Discharge (DBD) Plasma and Photocatalysis for Abatement of Pollutants in Air Mixture System: Influence of Some Operating Conditions and Identification of Byproducts
Authors: Wala Abou Saoud, Aymen Amine Assadi, Monia Guiza, Abdelkrim Bouzaza, Wael Aboussaoud, Abdelmottaleb Ouederni, Dominique Wolbert
Abstract:
Volatile organic compounds (VOCs) constitute one of the most important families of chemicals involved in atmospheric pollution, causing damage to the environment and human health, and consequently need to be eliminated. Among promising technologies, the coupling of dielectric barrier discharge (DBD) plasma with photocatalysis reveals very interesting prospects in terms of synergy for the mineralization of compounds, with low energy consumption. In this study, the removal of organic compounds such as butyraldehyde (BUTY) and dimethyl disulfide (DMDS) (exhaust gases from animal quartering centers) in an air mixture using DBD plasma coupled with photocatalysis was tested, in order to determine whether or not a synergy effect was present. The removal efficiency of these pollutants, the selectivity towards CO₂ and CO, and byproduct formation such as ozone were investigated in order to evaluate the performance of the combined process. For this purpose, a series of experiments were carried out in a continuous reactor. Several operating parameters were also investigated, such as the specific energy of the discharge, the inlet concentration of the pollutant, and the flow rate. It appears from this study that the performance of the process is enhanced and a synergetic effect is observed; in fact, we note a 10% enhancement in removal efficiency. It is interesting to note that the combined system leads to better CO₂ selectivity than plasma alone. Consequently, intermediate byproducts have been reduced due to various other species (O•, N, OH•, O₂•⁻, O₃, NO₂, NOx, etc.). Additionally, the behavior of the combined DBD plasma and photocatalysis has shown that ozone can also be easily decomposed in the presence of the photocatalyst.Keywords: combined process, DBD plasma, photocatalysis, pilot scale, synergetic effect, VOCs
Procedia PDF Downloads 335
3717 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers
Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato
Abstract:
The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, can lead to structural failure and are therefore important for the design of structures. For an accurate prediction of wind loads, understanding the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representing structures such as solar panels and billboards. Experiments were conducted in the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in each boundary layer, and the longitudinal integral length scale within the two boundary layers was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at elevation angles between 30° and 90° were measured within the two boundary layers. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 as the turbulence intensity increased from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plate varied with turbulence intensity and length scale.
The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels. Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence
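The pressure coefficients the abstract reports are surface pressures normalized by the freestream dynamic pressure, Cp = (p − p∞) / (½ρU²). The sketch below illustrates this normalization and a simple mean/peak statistic over a tap time series; it is an assumption-laden illustration, not the authors' analysis, and the tap readings are hypothetical.

```python
# Illustrative sketch (not the authors' analysis): normalizing surface
# pressures into pressure coefficients, Cp = (p - p_inf) / (0.5*rho*U^2),
# then taking mean and peak statistics. All readings are hypothetical.

AIR_DENSITY = 1.225  # kg/m^3, standard sea-level air

def pressure_coefficient(p, p_inf, u_ref, rho=AIR_DENSITY):
    """Normalize a surface pressure by the freestream dynamic pressure."""
    q = 0.5 * rho * u_ref**2  # dynamic pressure (Pa)
    return (p - p_inf) / q

# Hypothetical pressure-tap time series (Pa) on a plate at 90 degrees,
# freestream speed 10 m/s, reference static pressure taken as 0 Pa.
u_ref = 10.0
taps = [55.0, 70.0, 62.0, 180.0, 74.0]  # a gust drives the 180 Pa spike

cp = [pressure_coefficient(p, 0.0, u_ref) for p in taps]
cp_mean = sum(cp) / len(cp)
cp_peak = max(cp)
print(f"mean Cp = {cp_mean:.2f}, peak Cp = {cp_peak:.2f}")
```

Because turbulent gusts momentarily raise the local stagnation pressure well above the mean, the peak Cp can substantially exceed the mean Cp, which is why the study reports peak coefficients approaching 3 in the high-turbulence boundary layer.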
Procedia PDF Downloads 143