Search results for: window display
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1065

225 The Conflict Between the Current International Copyright Regime and the Islamic Social Justice Theory

Authors: Abdelrahman Mohamed

Abstract:

Copyright law is a branch of intellectual property law that gives authors exclusive rights to copy, display, perform, and distribute copyrightable works. In theory, copyright law aims to promote the welfare of society by granting exclusive rights to creators in exchange for the works that these creators produce for society. Thus, there are two different types of rights that a just regime should balance: owners' rights and users' rights. The paper argues that there is a conflict between the current international copyright regime and the Islamic Social Justice Theory. This regime is unjust from the perspective of the Islamic Social Justice Theory regarding access to educational materials because it was unjustly established by the colonizers to protect their interests, starting from the Berne Convention for the Protection of Literary and Artistic Works 1886 and extending to the Trade-Related Aspects of Intellectual Property Rights 1994. Consequently, the injustice of this regime was reflected in the regulations of these agreements and led to an imbalance between owners' rights and users' rights in favor of the former at the expense of the latter. As a result, copyright has become a barrier to access to knowledge and educational materials. The paper starts by illustrating the concept of justice in Islamic sources such as the Quran, Sunnah, and El-Maslha-Elmorsalah. Then, social justice is discussed by focusing on the importance of access to knowledge and the right to education. The theory assumes that the right to education and access to educational materials are necessities; thus, to achieve justice in this regime, users' rights should be granted regardless of their region, color, and financial situation. The paper then discusses the history of authorship protection under Islamic Sharia and the extent to which this right was recognized even before the existence of copyright law. According to this theory, authors' rights should be protected; however, this protection should not come at the expense of the human right to education and the right of access to educational materials. Moreover, the Islamic Social Justice Theory prohibits the concentration of wealth among a small number of people, 'the minority'. Thus, if knowledge is considered an asset or a good, the concentration of knowledge is prohibited from the Islamic perspective, which is the current situation of the copyright regime, where a few countries control knowledge production and distribution. Finally, recommendations are discussed to mitigate the injustice of the current international copyright regime and to close the gap between this regime and the Islamic Social Justice Theory.

Keywords: colonization, copyright, intellectual property, Islamic sharia, social justice

Procedia PDF Downloads 14
224 Removal of Heavy Metals from Municipal Wastewater Using Constructed Rhizofiltration System

Authors: Christine A. Odinga, G. Sanjay, M. Mathew, S. Gupta, F. M. Swalaha, F. A. O. Otieno, F. Bux

Abstract:

Wastewater discharged from municipal treatment plants contains an amalgamation of trace metals. The presence of metal pollutants in wastewater poses a huge challenge to the choice and application of the preferred treatment method. Conventional treatment methods are inefficient in the removal of trace metals due to their design approach. This study evaluated the treatment performance of a constructed rhizofiltration system in the removal of heavy metals from municipal wastewater. The study was conducted at an eThekwini municipal wastewater treatment plant in Kingsburgh, Durban, in the province of KwaZulu-Natal. The construction details of the pilot-scale rhizofiltration unit included three different layers of substrate consisting of medium stones, coarse gravel and fine sand. The system had one section planted with Phragmites australis L. and Kyllinga nemoralis L., while the other section was unplanted and acted as the control. Influent, effluent and sediment from the system were sampled and assessed for the presence and removal of selected trace heavy metals using standard methods. Efficiency of metal removal was established by gauging the transfer of metals into the leaves, roots and stem of the plants by calculations based on standard statistical packages. The Langmuir model was used to assess the heavy metal adsorption mechanisms of the plants. Heavy metals were accumulated in the entire rhizofiltration system at varying percentages: 96.69% on the planted and 48.98% on the control side for cadmium. Chromium was 81% and 24%, copper was 23.4% and 1.1%, nickel was 72% and 46.5%, lead was 63% and 31%, while zinc was 76% and 84% in the water and sediment of the planted and control sides of the rhizofilter, respectively. The decrease in metal adsorption efficiencies on the planted side followed the pattern Cd>Cr>Zn>Ni>Pb>Cu, and Ni>Cd>Pb>Cr>Cu>Zn on the control side. Confirmatory analysis using scanning electron microscopy revealed that higher amounts of metals were deposited in the root system, with values ranging from 0.015 mg/kg (Cr), 0.250 mg/kg (Cu) and 0.030 mg/kg (Pb) for P. australis to 0.055 mg/kg (Cr), 0.470 mg/kg (Cu) and 0.210 mg/kg (Pb) for K. nemoralis, respectively. The system was found to be efficient in removing and reducing metals from wastewater, and further research is necessary to establish the immediate mechanisms that the plants display in order to achieve these reductions.
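
The abstract applies the Langmuir model to the plants' metal uptake. As a minimal illustration of how such an isotherm is typically fitted, the sketch below uses hypothetical cadmium equilibrium data; the concentrations, uptake values and fitted constants are assumptions, not measurements from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Hypothetical equilibrium data: residual Cd concentration (mg/L) and
# adsorbed amount per unit plant biomass (mg/g) -- illustrative values only.
c_e = np.array([0.05, 0.10, 0.25, 0.50, 1.00, 2.00])
q_e = np.array([0.9, 1.6, 2.8, 3.9, 4.8, 5.4])

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=[5.0, 1.0])
print(f"q_max = {q_max:.2f} mg/g, K_L = {k_l:.2f} L/mg")

# Goodness of fit (R^2) indicates how well the Langmuir model describes uptake.
residuals = q_e - langmuir(c_e, q_max, k_l)
r2 = 1 - np.sum(residuals**2) / np.sum((q_e - q_e.mean())**2)
print(f"R^2 = {r2:.3f}")
```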

Keywords: wastewater treatment, Phragmites australis L., Kyllinga nemoralis L., heavy metals, pathogens, rhizofiltration

Procedia PDF Downloads 259
223 Application of Multilinear Regression Analysis for Prediction of Synthetic Shear Wave Velocity Logs in Upper Assam Basin

Authors: Triveni Gogoi, Rima Chatterjee

Abstract:

Shear wave velocity (Vs) estimation is an important approach in the seismic exploration and characterization of a hydrocarbon reservoir. Various methods exist for predicting S-wave velocity when a recorded S-wave log is not available, but all of them are empirical mathematical models. Shear wave velocity can be estimated from P-wave velocity by applying Castagna's equation, which is the most common approach; however, the constants used in Castagna's equation vary for different lithologies and geological set-ups. In this study, multiple regression analysis has been used for the estimation of S-wave velocity. The EMERGE module of the Hampson-Russell software has been used for generation of the S-wave log. Both single-attribute and multi-attribute analyses have been carried out for the generation of synthetic S-wave logs in the Upper Assam basin. The Upper Assam basin, situated in north-eastern India, is one of the most important petroleum provinces of India. The present study was carried out using four wells of the study area; S-wave velocity was available for three of these wells. The main objective of the present study is the prediction of shear wave velocities for wells where S-wave velocity information is not available. The three wells having S-wave velocity were first used to test the reliability of the method, and the generated S-wave log was compared with the actual S-wave log. Single-attribute analysis was carried out for these three wells within the depth range 1700-2100 m, which corresponds to the Barail Group of Oligocene age. The Barail Group is the main target zone in this study and the primary producing reservoir of the basin. A system-generated list of attributes with varying degrees of correlation was produced, and the attribute with the highest correlation was selected for the single-attribute analysis. The crossplot between the attributes shows the variation of points from the line of best fit. The final result of the analysis was compared with the available S-wave log, showing a good visual fit with a correlation of 72%. Next, multi-attribute analysis was carried out for the same data using all the wells within the same analysis window. A high correlation of 85% was observed between the output log from the analysis and the recorded S-wave. The almost perfect fit between the synthetic S-wave and the recorded S-wave log validates the reliability of the method. For further authentication, the generated S-wave data from the wells were tied to the seismic and correlated. A synthetic shear wave log was generated for well M2, where S-wave data are not available, and it shows a good correlation with the seismic. Neutron porosity, density, acoustic impedance (AI) and P-wave velocity proved to be the most significant variables in this statistical method for S-wave generation. The multilinear regression method can thus be considered a reliable technique for the generation of shear wave velocity logs in this study.
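
As an illustration of the multilinear regression step described above, the sketch below fits shear wave velocity against neutron porosity, density, acoustic impedance and P-wave velocity with scikit-learn. The log values are synthetic, generated from a Castagna-like trend, and stand in for real well data; this is not the EMERGE workflow itself.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical well-log samples within the Barail analysis window (1700-2100 m).
# Attributes named in the abstract: neutron porosity (frac), bulk density (g/cc),
# acoustic impedance AI, and P-wave velocity (m/s).
rng = np.random.default_rng(0)
n = 200
phi = rng.uniform(0.10, 0.30, n)
rhob = 2.65 - 1.7 * phi + rng.normal(0, 0.02, n)
vp = 5500 - 7000 * phi + rng.normal(0, 100, n)
ai = rhob * vp
# Synthetic "recorded" Vs built from a Castagna-like mudrock trend plus noise.
vs = 0.862 * vp - 1172 + rng.normal(0, 80, n)

X = np.column_stack([phi, rhob, ai, vp])
model = LinearRegression().fit(X, vs)          # multilinear regression
vs_pred = model.predict(X)

print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("correlation (R^2):", r2_score(vs, vs_pred))
```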

Keywords: Castagna's equation, multi linear regression, multi attribute analysis, shear wave logs

Procedia PDF Downloads 222
222 Prenatal Paraben Exposure Impacts Infant Overweight Development and in vitro Adipogenesis

Authors: Beate Englich, Linda Schlittenbauer, Christiane Pfeifer, Isabel Kratochvil, Michael Borte, Gabriele I. Stangl, Martin von Bergen, Thorsten Reemtsma, Irina Lehmann, Kristin M. Junge

Abstract:

The worldwide production of endocrine disrupting compounds (EDC) has risen dramatically over the last decades, as has the prevalence of obesity. Many EDCs are believed to contribute to this obesity epidemic by enhancing adipogenesis or disrupting relevant metabolism. This effect is most pronounced in the early prenatal period, when priming effects coincide with a highly vulnerable time window. Therefore, we investigate the impact of parabens on childhood overweight development and adipogenesis in general. Parabens are esters of 4-hydroxybenzoic acid and part of many cosmetic products and food packaging. Therefore, ubiquitous exposure can be found in the westernized world, with exposure already starting during the sensitive prenatal period. We assessed maternal cosmetic product consumption, prenatal paraben exposure and infant BMI z-scores in the prospective German LINA cohort. In detail, maternal urinary concentrations (34 weeks of gestation) of methyl paraben (MeP), ethyl paraben (EtP), n-propyl paraben (PrP) and n-butyl paraben (BuP) were quantified using UPLC-MS/MS. Body weight and height of their children were assessed during annual clinical visits. Further, we investigated the direct influence of those parabens on adipogenesis in vitro using a human mesenchymal stem cell (MSC) differentiation assay to mimic a prenatal exposure scenario. MSC were exposed to 0.1-50 µM paraben during the entire differentiation period. Differentiation outcome was monitored by impedance spectrometry, real-time PCR and triglyceride staining. We found that maternal cosmetic product consumption was highly correlated with urinary paraben concentrations during pregnancy. Further, prenatal paraben exposure was linked to higher BMI z-scores in children. Our in vitro analysis revealed that especially the long-chain paraben BuP stimulates adipogenesis by increasing the expression of adipocyte-specific genes (PPARγ, ADIPOQ, LPL, etc.) and triglyceride storage. Moreover, we found that adiponectin secretion is increased whereas leptin secretion is reduced under BuP exposure in vitro. Further mechanistic analyses of receptor binding and activation of PPARγ and other key players in adipogenesis are currently in progress. We conclude that maternal cosmetic product consumption is linked to prenatal paraben exposure of children and contributes to infant overweight development by triggering key pathways of adipogenesis.

Keywords: adipogenesis, endocrine disruptors, paraben, prenatal exposure

Procedia PDF Downloads 268
221 Lithium and Sodium Ion Capacitors with High Energy and Power Densities based on Carbons from Recycled Olive Pits

Authors: Jon Ajuria, Edurne Redondo, Roman Mysyk, Eider Goikolea

Abstract:

Hybrid capacitor configurations are now of increasing interest to overcome the current energy limitations of supercapacitors entirely based on non-Faradaic charge storage. Among them, Li-ion capacitors, comprising a negative battery-type lithium intercalation electrode and a positive capacitor-type electrode, have achieved tremendous progress and have reached commercialization. Inexpensive electrode materials from renewable sources have recently received increased attention, since cost is persistently a major criterion in making supercapacitors a more viable energy solution, with electrode materials being a major contributor to supercapacitor cost. Additionally, Na-ion battery chemistries are currently under development as a less expensive and more accessible alternative to Li-ion based battery electrodes. In this work, we present a lithium-ion capacitor and a sodium-ion capacitor (LIC & NIC) entirely based on electrodes prepared from carbon materials derived from recycled olive pits. Yearly, around 1 million tons of olive pit waste is generated worldwide, of which a third originates in the Spanish olive oil industry. On the one hand, olive pits were pyrolyzed at different temperatures to obtain a low specific surface area semi-graphitic hard carbon to be used as the Li/Na ion intercalation (battery-type) negative electrode. The best hard carbon delivers a total capacity of 270 mAh/g vs Na/Na+ in 1M NaPF6 and 350 mAh/g vs Li/Li+ in 1M LiPF6. On the other hand, the same hard carbon is chemically activated with KOH to obtain a high specific surface area (about 2000 m²/g) activated carbon that is further used as the ion-adsorption (capacitor-type) positive electrode. In a voltage window of 1.5-4.2 V, the activated carbon delivers a specific capacity of 80 mAh/g vs Na/Na+ and 95 mAh/g vs Li/Li+ at 0.1 A/g. Both electrodes were assembled in the same hybrid cell to build a LIC/NIC. For comparison purposes, a symmetric EDLC supercapacitor cell using the same activated carbon in 1.5M Et4NBF4 electrolyte was also built. Both the LIC and the NIC demonstrate considerable improvements in energy density over their EDLC counterpart: the NIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg of active material and a maximum power density of 6200 W/kg at an energy density of 27 Wh/kg, while the LIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg and a maximum power density of 18000 W/kg at an energy density of 22 Wh/kg. In conclusion, our work demonstrates that the same biomass waste can be adapted to offer a hybrid capacitor/battery storage device overcoming the limited energy density of the corresponding double layer capacitors.
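
For readers unfamiliar with how cell-level figures such as Wh/kg and W/kg are obtained, the sketch below numerically integrates a constant-current discharge curve to estimate specific energy and average specific power. The current, voltage profile and normalization used here are assumed illustrative values, not the measured data of this work.

```python
import numpy as np

# Hypothetical constant-current discharge of a hybrid (LIC/NIC-type) cell.
# Assumed values for illustration only -- not the measured data of the study.
i_discharge = 0.1                    # A per gram of total active material
t = np.linspace(0, 3600, 721)        # s, one-hour discharge sampled every 5 s
v = 4.2 - (4.2 - 1.5) * (t / t[-1])  # V, idealised linear decay over the 4.2-1.5 V window

# Specific energy: E = integral(V * I dt), converted from J/g to Wh/kg.
energy_j_per_g = np.trapz(v * i_discharge, t)
specific_energy_wh_per_kg = energy_j_per_g * 1000.0 / 3600.0

# Average specific power over the discharge: P = E / t_discharge.
specific_power_w_per_kg = specific_energy_wh_per_kg * 3600.0 / t[-1]

print(f"specific energy ~ {specific_energy_wh_per_kg:.0f} Wh/kg")
print(f"average specific power ~ {specific_power_w_per_kg:.0f} W/kg")
```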

Keywords: hybrid supercapacitor, Na-Ion capacitor, supercapacitor, Li-Ion capacitor, EDLC

Procedia PDF Downloads 197
220 A Study of Semantic Analysis of LED Illustrated Traffic Directional Arrow in Different Style

Authors: Chia-Chen Wu, Chih-Fu Wu, Pey-Weng Lien, Kai-Chieh Lin

Abstract:

In the past, the most widely adopted light sources were incandescent bulbs, but with the appearance of LED light sources, traditional light sources have gradually been replaced by LEDs because of their numerous superior characteristics. However, many existing standards do not apply to LEDs, as the two light sources are characterized differently. This also intensifies the significance of studies on LEDs. As a Kansei design study investigating the visual glare produced by traffic arrows implemented with LEDs, this study conducted a semantic analysis of the styles of traffic arrows used in domestic and international settings. The results can help reduce drivers' misrecognition, which leads to failure to arrive at the destination or to traffic accidents. This study started with a literature review and a survey of the status quo before conducting experiments that were divided into two parts. The first part involved a screening experiment of arrow samples, where cluster analysis was conducted to choose five representative samples of LED displays. The second part was a semantic experiment on the display of arrows using LEDs, incorporating the five representative samples and the ten selected adjectives. Analyzing the results with Quantification Theory Type I, it was found that among the compositional elements of the arrows, fletching was the most significant factor influencing the adjectives. In contrast, a 'no fletching' design was more abstract and vague; it lacked the ability to convey the intended message and might bear negative psychological connotations including 'dangerous,' 'forbidden,' and 'unreliable.' The arrow design consisting of '>'-shaped fletching was found to be more concrete and definite, showing positive connotations including 'safe,' 'cautious,' and 'reliable.' When a stimulus was placed at a farther distance, the glare was significantly reduced and the visual evaluation scores were higher. On the contrary, if the fletching and the shaft had a similar proportion, the stimuli received higher evaluations at a closer distance. The above results can be applied to the design of traffic arrows that convey information definitely and rapidly. In addition, drivers' safety could be enhanced by understanding the cause of glare and improving visual recognizability.

Keywords: LED, arrow, Kansei research, preferred imagery

Procedia PDF Downloads 244
219 Enhancing Academic and Social Skills of Elementary School Students with Autism Spectrum Disorder by an Intensive and Comprehensive Teaching Program

Authors: Piyawan Srisuruk, Janya Boonmeeprasert, Romwarin Gamlunglert, Benjamaporn Choikhruea, Ornjira Jaraepram, Jarin Boonsuchat, Sakdadech Singkibud, Kusalaporn Chaiudomsom, Chanatiporn Chonprai, Pornchanaka Tana, Suchat Paholpak

Abstract:

Objective: To develop an intensive and comprehensive program (ICP) for the inclusive class teacher (ICPICT) to teach elementary students (ES) with ASD in order to enhance the students' academic and social skills (ASS), and to study the effect of the teaching program. Methods: The purposive sample included 15 Khon Kaen inclusive class teachers and their 15 elementary students. All the students were diagnosed by a child and adolescent psychiatrist with DSM-5 level 1 ASD. The study tools included 1) an ICP to teach teachers about ASD, a teaching method to enhance academic and social skills for ES with ASD, and an assessment tool to assess the teachers' knowledge before and after the ICP; 2) an ICPICT to teach ES with ASD to enhance their ASS. The program comprised 10 sessions of 3 hours each. The ICPICT had its own teaching structure, and the teaching media included pictures, storytelling, songs, and plays. The authors taught and demonstrated to the participant teachers how to teach with the ICPICT until the participants could display the correct teaching method; the teachers then taught the ICPICT at school by themselves; 3) an assessment tool to assess the students' ASS before and after the completion of the study. The ICP to teach the teachers, the ICPICT, and the relevant assessment tools were developed by the authors and adjusted until three experts in curricula for teaching children with ASD agreed by consensus that they were appropriate for the research. The data were analyzed by descriptive and analytic statistics via SPSS version 26. Results: After the briefing, the teachers' mean score of knowledge of ASD and of how to teach ES with ASD on ASS increased, though not with statistical significance (p = 0.13). Teaching ES with ASD with the ICPICT increased the mean scores of the students' skills in learning and expressing social emotions, relationships with friends, transitioning, and academic function by 3.33, 2.27, 2.94, and 3.00 points (full scores were 18, 12, 15 and 12; paired t-test p = 0.007, 0.013, 0.028 and 0.003, respectively). Conclusion: A program that teaches academic and social skills simultaneously in an intensive and comprehensive structure could enhance both the academic and social skills of elementary students with ASD.

Keywords: academic and social skills, students with autism, intensive and comprehensive, teaching program

Procedia PDF Downloads 62
218 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy

Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu

Abstract:

Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which it is possible to image previously unseen phenomena in situ. For this, a nanofluidic device is used to bring the liquid sample and its nanoflow into the microscope while keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the liquid inside and the vacuum outside the device in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR) and limits the achievable spatial resolution. In the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences, almost completely eliminating the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provides a deep understanding of the mechanical conditions of the membrane and proves the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The microfabrication of the device was carried out on a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched (reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the backside SiO2 was etched (BOE) and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets, and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Later, a thin spacer was sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples. This approach considerably reduces the common bulging problem of the window, improving the SNR, contrast and spatial resolution, substantially increasing the mechanical stability of the windows, and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
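
As a back-of-the-envelope complement to the FEM study, the sketch below uses the classical small-deflection formula for a clamped square plate (w_max ≈ 0.00126·p·a^4/D) to show how strongly bulging scales with window size. The film thickness, window sizes and Si3N4 elastic constants are assumptions, and the formula is only valid while the deflection stays small compared with the film thickness; larger windows enter the nonlinear membrane regime, which is precisely why FEM is needed.

```python
# Small-deflection estimate of window bulging under the TEM pressure difference.
# Classical clamped square plate: w_max ~ 0.00126 * p * a^4 / D,
# with flexural rigidity D = E * t^3 / (12 * (1 - nu^2)).
# All dimensions and material constants below are assumed for illustration.

def bulging_nm(side_um, thickness_nm, pressure_pa=1.0e5,
               youngs_pa=250e9, poisson=0.23):
    """Approximate centre deflection (nm) of a clamped square Si3N4 window."""
    a = side_um * 1e-6
    t = thickness_nm * 1e-9
    d = youngs_pa * t**3 / (12.0 * (1.0 - poisson**2))   # flexural rigidity, N*m
    w_max = 0.00126 * pressure_pa * a**4 / d             # centre deflection, m
    return w_max * 1e9

# Halving the window side reduces the bulge by ~16x (a^4 scaling),
# motivating an array of small, reinforced micro-windows.
print(f"20 um window, 200 nm film: {bulging_nm(20, 200):.0f} nm bulge")
print(f"10 um window, 200 nm film: {bulging_nm(10, 200):.0f} nm bulge")
```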

Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films

Procedia PDF Downloads 251
217 Prediction of Sound Transmission Through Framed Façade Systems

Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul

Abstract:

With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations set higher standards for acoustic comfort in buildings by mitigating noise transmission from the building envelope exterior to the interior. Framed window, door and façade systems are the leading choice for modern fenestration construction, providing proven weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both the glass and the frames. Glass usually covers the majority of the exposed surface and is therefore the main path of sound energy transmission, while frames in modern façade systems are becoming slimmer for aesthetic reasons and contribute only a minimal percentage of the exposed surface. Nevertheless, frames can provide substantial transmission paths for sound because much less mass lies across those paths, so they often become the limiting factor in the acoustic performance of the whole system. There are various methodologies and numerical programs that can accurately predict the acoustic performance of either glass or frames. However, due to the vast difference in size and dimension between frame and glass in the same system, there is no satisfactory theoretical approach or affordable simulation tool in current practice to assess the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is time-consuming and costly; moreover, different laboratories might provide slightly different test results because of variations in test chambers, sample mounting, and test operations, which significantly constrains the early-phase design of framed façade systems. To address this dilemma, this study provides an effective analytical methodology to predict the acoustic performance of framed façade systems, based on a large body of acoustic test results on glass, frames and whole façade systems consisting of both. Further test results validate that the current model is able to accurately predict the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology was mainly developed from façade systems with aluminum frames, it can easily be extended to systems with frames of other materials such as steel, PVC or wood.
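
The abstract does not spell out the combination rule, but a common area-weighted approach is to convert the sound transmission loss of each element to a transmission coefficient, average by area, and convert back to decibels. The sketch below illustrates this with assumed glass and frame values (not data from the study) and shows how a small, acoustically weaker frame can limit the composite result.

```python
import math

def composite_stl(elements):
    """Area-weighted composite sound transmission loss.

    `elements` is a list of (area_m2, stl_db) pairs, e.g. glass and frame.
    Each STL is converted to a transmission coefficient tau = 10**(-STL/10),
    area-averaged, and converted back to decibels.
    """
    total_area = sum(a for a, _ in elements)
    tau_avg = sum(a * 10 ** (-stl / 10.0) for a, stl in elements) / total_area
    return -10.0 * math.log10(tau_avg)

# Illustrative (assumed) single-number values for one façade bay:
glass = (3.6, 38.0)   # 3.6 m2 of glazing with STL 38 dB
frame = (0.4, 30.0)   # 0.4 m2 of aluminium frame with STL 30 dB
print(f"composite STL ~ {composite_stl([glass, frame]):.1f} dB")
```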

Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system

Procedia PDF Downloads 57
216 Braille Code Matrix

Authors: Mohammed E. A. Brixi Nigassa, Nassima Labdelli, Ahmed Slami, Arnaud Pothier, Sofiane Soulimane

Abstract:

According to the World Health Organization (WHO), there are almost 285 million people with visual impairment, 39 million of whom are blind. Nevertheless, there is a code for these people that makes their lives easier and allows them to access information more readily: the Braille code. There are several commercial devices allowing Braille reading; unfortunately, most of these devices are not ergonomic and are too expensive. Moreover, 90% of blind people in the world live in low-income countries. Our contribution aims to design an original microactuator for Braille reading that is ergonomic, inexpensive and has the lowest possible energy consumption. Nowadays, piezoelectric devices give the best actuation at low actuation voltages. In this study, we focus on piezoelectric (PZT) material, which can bring together all these conditions. Here, we propose to use a matrix composed of six actuators to form the 63 basic combinations of the Braille code, covering letters, numbers, and special characters in compliance with the standards of the Braille code. In this work, we use a finite element model in COMSOL Multiphysics for designing and modeling this type of miniature actuator in order to integrate it into a test device. To define the geometry and the design of our actuator, we used the physiological limits of human perception. Our results demonstrate that the piezoelectric actuator can produce a large out-of-plane deflection. We also show that the microactuators can exhibit non-uniform compression; this deformation depends on the thin film thickness and the design of the membrane arm. The actuator composed of four arms gives the highest deflection and always produces a domed deformation at the center of the device, as required for the Braille system. The maximal deflection can be estimated at around ten microns per volt (~10 µm/V). We observed that the deflection is a linear function of the voltage and depends not only on the voltage but also on the thickness of the film used and the design of the anchoring arm. We were then able to simulate the behavior of the entire matrix and thus display different characters in Braille code. We used these simulation results to build our demonstrator, which is composed of a layer of PDMS on which the piezoelectric material is placed, with another layer of PDMS added to isolate the actuator. In this contribution, we compare our results in order to optimize the final demonstrator.
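
To make the '63 basic combinations' concrete, the sketch below enumerates every non-empty pattern of a six-dot Braille cell and maps one example letter to per-actuator drive voltages using the ~10 µm/V linear response reported above. The target dot height is an illustrative assumption, not a parameter of the demonstrator.

```python
from itertools import product

# Each Braille cell has 6 dots; every non-empty combination gives one of the
# 2**6 - 1 = 63 basic patterns mentioned in the abstract.
patterns = [bits for bits in product((0, 1), repeat=6) if any(bits)]
print(len(patterns), "patterns")   # -> 63

# Example mapping (standard Braille dot numbering 1-6), shown for one letter.
LETTER_R = (1, 1, 1, 0, 1, 0)      # dots 1, 2, 3 and 5 raised

DEFLECTION_PER_VOLT_UM = 10.0      # ~10 um/V linear response from the simulation
TARGET_DOT_HEIGHT_UM = 500.0       # assumed readable dot height, illustrative only

def drive_voltages(pattern):
    """Voltage to apply to each of the six actuators for a given dot pattern."""
    v = TARGET_DOT_HEIGHT_UM / DEFLECTION_PER_VOLT_UM
    return [v if dot else 0.0 for dot in pattern]

print(drive_voltages(LETTER_R))    # -> [50.0, 50.0, 50.0, 0.0, 50.0, 0.0]
```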

Keywords: Braille code, comsol software, microactuators, piezoelectric

Procedia PDF Downloads 353
215 The Impact of Artificial Intelligence on Digital Factory

Authors: Mona Awad Wanis Gad

Abstract:

The process of factory planning has changed considerably, particularly when it comes to planning the factory building itself. Factory planning covers the design of products, plants, processes, organization, areas, and the construction of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Regulations in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity and Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning foundation for such rebuilding measures and becomes a critical tool. Furthermore, digital building models are increasingly being used in factories to support facility management and manufacturing processes. First, different types of digital factory models are investigated, and their properties and usability for specific use cases are analyzed. Within the scope of this research, point cloud models, building information models, photogrammetry models, and models enriched with sensor data are examined. It is investigated which digital models permit a simple integration of sensor data and where the differences lie. Subsequently, possible application areas of digital factory models are determined through a survey, and the respective digital factory models are assigned to those application areas. Finally, an application case from maintenance is selected and implemented with the help of the most suitable digital factory model. It is shown how a fully digitalized maintenance process can be supported by a digital factory model through the provision of information. Among other functions, the digital factory model is used for indoor navigation, information provision, and display of sensor data. In summary, the paper proposes a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is demonstrated and implemented, and on this basis the systematic selection of digital factory models with the corresponding application cases is evaluated.

Keywords: augmented reality, digital factory model, factory planning, restructuring, building information modeling, photogrammetry, maintenance

Procedia PDF Downloads 23
214 Immersed in Design: Using an Immersive Teaching Space to Visualize Design Solutions

Authors: Lisa Chandler, Alistair Ward

Abstract:

A significant component of design pedagogy is the need to foster design thinking in various contexts and to support students in understanding links between educational exercises and their potential application in professional design practice. It is also important that educators provide opportunities for students to engage with new technologies and encourage them to imagine applying their design skills to a range of outcomes. Problem solving is central to design, so it is also essential that students understand that there can be multiple solutions to a design brief and are supported in undertaking creative experimentation to generate imaginative outcomes. This paper presents a case study examining some innovative approaches to addressing these elements of design pedagogy. It investigates the effectiveness of the Immerse Lab, a three-wall projection room at the University of the Sunshine Coast, Australia, as a learning context for design practice, for generating ideas and for supporting learning involving the comparative display of design outcomes. The project required first-year design students to create a simple graphic design derived from an ordinary object and to incorporate specific design criteria. Utilizing custom-designed software, the students' solutions were projected together onto the Immerse walls to create a large-scale, immersive grid of images, which was used to compare and contrast various responses to the same problem. The software also enabled individual student designs to be transformed, multiplied and enlarged in multiple ways, and prompted discussions around the applicability of the designs in real-world contexts. Teams of students interacted with their projected designs, brainstorming imaginative applications for their outcomes. Analysis of 77 anonymous student surveys revealed that the majority of students found learning in the Immerse Lab to be beneficial and comparative review more effective than in standard tutorial rooms, and that the activity generated new ideas, encouraged them to think differently about their designs, and inspired them to develop their existing designs or create new ones. The project demonstrates that curricula involving immersive spaces can be effective in supporting engaging and relevant design pedagogy and might be utilized in other disciplinary areas.

Keywords: design pedagogy, immersive education, technology-enhanced learning, visualization

Procedia PDF Downloads 254
213 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence

Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej

Abstract:

In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Because of the inability to manage and monitor these systems, many HVAC systems exhibit a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors as an alternative form of energy measurement. Most residential buildings and offices are not equipped with advanced sensors, and adding, operating, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on the available sensors and without any physical energy meters, demonstrating that virtual meters can serve as reliable measurement devices in HVAC systems. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are created and integrated with the system's historical data and physical spot measurements, and the actual measurements are used to verify the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully capture energy consumption patterns, and it is concluded with confidence that the results of the virtual meter will be close to the results that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the HVAC systems of buildings to improve energy management and efficiency. Using a data mining approach, virtual meter data are recorded as historical data, and HVAC system energy consumption prediction is implemented in order to harness substantial energy savings and manage the demand and supply chain effectively. Energy prediction can lead to energy-saving strategies and opens a window for predictive control aimed at lower energy consumption. To address these challenges, energy prediction can optimize the HVAC system and automate energy management to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that allow a quick and efficient response to energy consumption and cost spikes in the energy market.
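
As a minimal sketch of what an air-side virtual meter can look like, the code below computes coil thermal energy from trended airflow and supply/return temperatures using Q = m_dot·cp·ΔT. The constants, sensor values and five-minute sampling are assumptions for illustration and do not represent the actual AHU-07 models.

```python
import numpy as np

RHO_AIR = 1.2      # kg/m^3, approximate air density
CP_AIR = 1.006     # kJ/(kg*K), specific heat of air

def virtual_coil_energy_kwh(flow_m3_s, t_supply_c, t_return_c, dt_s=300):
    """Virtual meter: thermal energy delivered by an AHU coil, in kWh.

    flow_m3_s, t_supply_c, t_return_c are arrays sampled every dt_s seconds
    (e.g. pulled from a BAS historian). Power = m_dot * cp * (T_return - T_supply).
    """
    m_dot = RHO_AIR * np.asarray(flow_m3_s)                               # kg/s
    power_kw = m_dot * CP_AIR * (np.asarray(t_return_c) - np.asarray(t_supply_c))
    return np.sum(power_kw) * dt_s / 3600.0                               # kWh

# Hypothetical 5-minute trend data for one hour of cooling operation:
flow = np.full(12, 4.0)        # m^3/s supply airflow
t_supply = np.full(12, 13.0)   # degC supply air temperature
t_return = np.full(12, 24.0)   # degC return air temperature
print(f"{virtual_coil_energy_kwh(flow, t_supply, t_return):.1f} kWh of cooling")
```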

Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction

Procedia PDF Downloads 102
212 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano

Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das

Abstract:

Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters. Rather, they remain on the surface, decorating these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme's DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the right energy window, which is ideal for hydrogen storage. These IE values are further verified by high-level coupled-cluster calculations with non-iterative triples corrections, i.e. CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energy in the binding of the H2 molecules is confirmed by energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (−123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen on neutral and charged Mg nanocluster systems point towards the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that will be able to adsorb molecular hydrogen via this weak Mg-H2 interaction rather than strong Mg-H bonding. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
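
For clarity, the per-molecule interaction energy quoted above is conventionally obtained from total energies as IE = [E(Mgm) + n·E(H2) − E(Mgm·nH2)]/n. The sketch below shows that arithmetic with placeholder energies, not values from this study.

```python
# Average interaction energy per adsorbed H2 molecule, as typically computed
# from DFT total energies:  IE = [E(Mg_m) + n*E(H2) - E(Mg_m.nH2)] / n.
# The total energies below are placeholders, not values from the study.

E_CLUSTER = -1598.4210     # eV, total energy of the bare Mg_m cluster (assumed)
E_H2 = -6.7700             # eV, total energy of an isolated H2 molecule (assumed)
E_COMPLEX = -1708.6610     # eV, total energy of the Mg_m.(H2)_16 complex (assumed)
N_H2 = 16                  # number of adsorbed H2 molecules

ie_per_h2 = (E_CLUSTER + N_H2 * E_H2 - E_COMPLEX) / N_H2
print(f"interaction energy per H2 ~ {ie_per_h2:.3f} eV")
# A value in the 0.1-0.14 eV/H2 window is the range the abstract identifies
# as ideal for reversible hydrogen storage.
```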

Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption

Procedia PDF Downloads 411
211 Time of Death Determination in Medicolegal Death Investigations

Authors: Michelle Rippy

Abstract:

Medicolegal death investigation has historically been a field that does not receive much research attention or advancement, as all of the subjects are deceased. Public health threats, drug epidemics and contagious diseases are typically recognized in decedents first, and thorough, accurate death investigations can assist epidemiology research and prevention programs. One vital component of medicolegal death investigation is determining the decedent's time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances and providing vital facts in civil matters. Popular television portrays an unrealistic forensic ability to provide the exact time of death to the minute for someone found deceased with no witnesses present. In reality, the time of death of an unattended decedent can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, liver temperatures were an invasive measure taken by death investigators to determine the decedent's core temperature. The core temperature was entered into an equation to determine an approximate time of death. Due to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was dispelled and this once commonplace practice lost scientific support. Currently, medicolegal death investigators rely on three major post-mortem changes at a death scene. Many factors are considered in the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease and recent exercise. Current research is utilizing non-invasive, hospital-grade tympanic thermometers to measure the temperature in each of the decedent's ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death. The research is significant for investigations and can bring accuracy to a historically imprecise area, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for time-of-death determination in unwitnessed deaths, instead of the art that the determination currently is. The research is in progress with expected completion in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature, the decedent's height/weight/sex/age, layers of clothing, found position, whether medical intervention occurred and whether the death was witnessed. These data will be analyzed across the multiple variables studied and will be available for presentation in January 2019.
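
For context on the 'equation' historically used with core temperatures, a textbook single-exponential (Newton) cooling model can be inverted for the post-mortem interval, as sketched below. The cooling constant and temperatures are assumed illustrative values, and this is not the tympanic method under investigation here.

```python
import math

def hours_since_death(t_measured_c, t_ambient_c,
                      t_normal_c=37.0, k_per_hour=0.12):
    """Rough post-mortem interval from a single-exponential cooling model.

    T(t) = T_amb + (T_normal - T_amb) * exp(-k * t), solved for t.
    The cooling constant k depends on body habitus, clothing and environment,
    so the value used here is an assumed illustrative figure.
    """
    ratio = (t_normal_c - t_ambient_c) / (t_measured_c - t_ambient_c)
    return math.log(ratio) / k_per_hour

# Example: a core/tympanic reading of 30 degC in a 20 degC room.
print(f"~{hours_since_death(30.0, 20.0):.1f} h since death")
```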

Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic

Procedia PDF Downloads 115
210 Evaluation Of Reservoir Quality In Cretaceous Sandstone Complex, Western Flank Of Anambra Basin, Southern Nigeria

Authors: Bayole Omoniyi

Abstract:

This study demonstrates the value of outcrops as analogues for evaluating the reservoir quality of sandbodies in a typical high-sinuosity fluvial system. The study utilized data acquired from selected outcrops in the Campanian-Maastrichtian siliciclastic succession of the western flank of the Anambra Basin, southern Nigeria. Textural properties derived from outcrop samples were correlated and compared with porosity and permeability using established standard charts. Porosity was also estimated from thin sections of selected samples to reduce uncertainty in the estimates. Following facies classification, 14 distinct facies were grouped into three facies associations (FA1-FA3) and were subsequently modeled as discrete properties in a block-centered Cartesian grid on a scale that captures the geometry of the principal sandbodies. Porosity and permeability estimated from charts were populated in the grid using comparable geostatistical techniques that reflect their spatial distribution. The resultant models were conditioned to the facies property to honour the available data. The results indicate a strong control of geometrical parameters on facies distribution, lateral continuity and connectivity, with a resultant effect on porosity and permeability distribution. Sand-prone FA1 and FA2 display reservoir quality that varies internally from channel axis to margin in each succession. Furthermore, the isolated stacking pattern of sandbodies reduces static connectivity and thus increases the risk of poor communication between reservoir-quality sandbodies. FA3 is non-reservoir because it is mud-prone. In conclusion, the risk of poor communication between sandbodies may be accentuated in reservoirs that have similar architecture, because thick lateral accretion deposits, usually mudstone, tend to disconnect good-quality point-bar sandbodies. In such reservoirs, mudstone may act as a barrier that impedes flow vertically from one sandbody to another and laterally at the margins of each channel-fill succession. The development plan, therefore, must be designed to effectively mitigate these risks and the risk of stratigraphic compartmentalization for maximum hydrocarbon recovery.

Keywords: analogues, architecture, connectivity, fluvial

Procedia PDF Downloads 18
209 Ultra-Sensitive Point-Of-Care Detection of PSA Using an Enzyme- and Equipment-Free Microfluidic Platform

Authors: Ying Li, Rui Hu, Shizhen Chen, Xin Zhou, Yunhuang Yang

Abstract:

Prostate cancer is one of the leading causes of cancer-related death among men. Prostate-specific antigen (PSA), a specific product of prostatic epithelial cells, is an important indicator of prostate cancer. Though PSA is not a specific serum biomarker for the screening of prostate cancer, it is recognized as an indicator of prostate cancer recurrence and response to therapy for patients post-prostatectomy. Since radical prostatectomy eliminates the source of PSA production, serum PSA levels fall below 50 pg/mL and may be below the detection limit of clinical immunoassays (the current clinical immunoassay lower limit of detection is around 10 pg/mL). Many clinical studies have shown that intervention at low PSA levels can improve patient outcomes significantly. Therefore, ultra-sensitive and precise assays that can accurately quantify extremely low levels of PSA (below 1-10 pg/mL) will facilitate the assessment of patients for the possibility of early adjuvant or salvage treatment. Currently, the commercially available ultra-sensitive ELISA kits (not used clinically) can only reach a detection limit of 3-10 pg/mL. Other platforms developed by different research groups have achieved a detection limit as low as 0.33 pg/mL, but they relied on sophisticated instruments to obtain the final readout. Herein we report a microfluidic platform for point-of-care (POC) detection of PSA with a detection limit of 0.5 pg/mL and without the assistance of any equipment. This platform is based on a previously reported volumetric-bar-chart chip (V-Chip), which applies platinum nanoparticles (PtNPs) as the ELISA probe to convert the biomarker concentration into a volume of oxygen gas that pushes red ink to form a visualized bar chart. The length of each bar is used to quantify the biomarker concentration of each sample. In this work, we devised a long-reading-channel V-Chip (LV-Chip) to achieve a wide detection window. In addition, the LV-Chip employs a unique enzyme-free ELISA probe that enriches PtNPs significantly and possesses a 500-fold enhanced catalytic ability over that of the previous V-Chip, resulting in a significantly improved detection limit. The LV-Chip is able to complete a PSA assay for five samples in 20 min. The device was applied to detect PSA in 50 patient serum samples, and the on-chip results demonstrated good correlation with conventional immunoassay. In addition, the PSA levels in finger-prick whole blood samples from healthy volunteers were successfully measured on the device. This completely stand-alone LV-Chip platform enables convenient POC testing for patient follow-up in the physician's office and is also useful in resource-constrained settings.

Keywords: point-of-care detection, microfluidics, PSA, ultra-sensitive

Procedia PDF Downloads 106
208 Trajectories of Conduct Problems and Cumulative Risk from Early Childhood to Adolescence

Authors: Leslie M. Gutman

Abstract:

Conduct problems (CP) represent a major dilemma, with wide-ranging and long-lasting individual and societal impacts. Children experience heterogeneous patterns of conduct problems, based on the age of onset, developmental course and related risk factors, from around age 3. Early childhood represents a potential window for intervention efforts aimed at changing the trajectory of early-starting conduct problems. Using the UK Millennium Cohort Study (n = 17,206 children), this study (a) identifies trajectories of conduct problems from ages 3 to 14 years and (b) assesses the cumulative and interactive effects of individual, family and socioeconomic risk factors from 9 months to 14 years. Risk factors were assessed in three domains: child (i.e., low verbal ability, hyperactivity/inattention, peer problems, emotional problems), family (i.e., single-parent families, parental poor physical and mental health, large family size) and socioeconomic (i.e., low family income, low parental education, unemployment, social housing). A cumulative risk score for the child, family, and socioeconomic domains at each age was calculated. It was then examined how the cumulative risk scores explain variation in the trajectories of conduct problems. Lastly, interactive effects among the different domains of cumulative risk were tested. Using group-based trajectory modeling, four distinct trajectories were found, including a 'low' problem group and three groups showing childhood-onset conduct problems: 'school-age onset'; 'early-onset, desisting'; and 'early-onset, persisting'. The 'low' group (57% of the sample) showed a low probability of conduct problems, close to zero, from 3 to 14 years. The 'early-onset, desisting' group (23% of the sample) demonstrated a moderate probability of CP in early childhood, with a decline from 3 to 5 years and a low probability thereafter. The 'early-onset, persisting' group (8%) followed a high probability of conduct problems, which declined from 11 years but was still close to 70% at 14 years. The 'school-age onset' group (12% of the sample) showed a moderate probability of conduct problems at 3 and 5 years, with a sharp increase by 7 years, rising to 50% at 14 years. In terms of individual risk, all factors increased the likelihood of being in the childhood-onset groups compared to the 'low' group. For cumulative risk, the socioeconomic domain at 9 months and 3 years, the family domain at all ages except 14 years, and the child domain at all ages differentiated the childhood-onset groups from the 'low' group. Cumulative risk at 9 months and 3 years did not differentiate between the 'school-age onset' group and the 'low' group. Significant interactions were found between the domains for the 'early-onset, desisting' group, suggesting that low levels of risk in one domain may buffer the effects of high risk in another domain. The implications of these findings for preventive interventions will be highlighted.
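
A minimal sketch of the domain-wise cumulative risk scoring described above is shown below; the indicator names follow the abstract, but the coding scheme and the example child are hypothetical.

```python
# Minimal sketch of cumulative risk scoring by domain, as described in the
# abstract: each risk factor is coded 0/1 and summed within its domain.
# The indicator names follow the abstract; the example child is hypothetical.

DOMAINS = {
    "child": ["low_verbal_ability", "hyperactivity_inattention",
              "peer_problems", "emotional_problems"],
    "family": ["single_parent", "parent_poor_physical_health",
               "parent_poor_mental_health", "large_family_size"],
    "socioeconomic": ["low_income", "low_parental_education",
                      "unemployment", "social_housing"],
}

def cumulative_risk(indicators):
    """Return a per-domain cumulative risk score from a dict of 0/1 flags."""
    return {domain: sum(indicators.get(f, 0) for f in factors)
            for domain, factors in DOMAINS.items()}

child_at_age_3 = {"peer_problems": 1, "single_parent": 1,
                  "low_income": 1, "social_housing": 1}
print(cumulative_risk(child_at_age_3))
# -> {'child': 1, 'family': 1, 'socioeconomic': 2}
```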

Keywords: conduct problems, cumulative risk, developmental trajectories, early childhood, adolescence

Procedia PDF Downloads 248
207 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated measurement of the thermal conductivity and thermal diffusivity of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured over time at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time it takes the sample to attain a peak temperature and the corresponding time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger by using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, measuring the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p>0.05) between the literature standards and the estimated averages of each sample investigated with the developed instrument.
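
The line heat source relation described above reduces to k = q/(4π·slope), where the slope is taken from the temperature rise plotted against ln(time). The sketch below recovers the conductivity from synthetic probe readings sampled every 4 s for 240 s with the 16.33 W/m power input quoted in the abstract; the readings themselves are simulated, not instrument data.

```python
import numpy as np

Q_PER_LENGTH = 16.33     # W/m, heater power per unit length quoted in the abstract

# Synthetic probe readings: temperature rise sampled every 4 s for 240 s.
t = np.arange(4, 244, 4)                       # s
k_true = 0.59                                  # W/(m*degC), used to generate data
slope_true = Q_PER_LENGTH / (4 * np.pi * k_true)
temp_rise = (slope_true * np.log(t)
             + 0.05 * np.random.default_rng(1).normal(size=t.size))

# Line heat source method: after the early transient, dT is linear in ln(t);
# the slope of that straight line gives k = q / (4 * pi * slope).
slope, _ = np.polyfit(np.log(t[10:]), temp_rise[10:], 1)
k_est = Q_PER_LENGTH / (4 * np.pi * slope)
print(f"estimated thermal conductivity ~ {k_est:.3f} W/(m*degC)")
```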

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 352
206 Assessment of Indoor Air Pollution in Naturally Ventilated Dwellings of Mega-City Kolkata

Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya

Abstract:

The US Environmental Protection Agency defines indoor air quality as "the air quality within and around buildings, especially as it relates to the health and comfort of building occupants". According to a 2021 report by the Energy Policy Institute at the University of Chicago, residents of India, the country with the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality, and yet the country has numerous dwellings dependent on natural ventilation. Currently, the urban population spends 90% of its time indoors; this scenario raises concerns for occupant health and well-being. This study attempts to demonstrate the causal relationship between indoor air pollution and its determining aspects. Detailed indoor air pollution audits were conducted in residential buildings located in Kolkata, India, in December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are the most crucial due to unfavourable environmental conditions. While emissions typically remain constant throughout the year, cold air is denser and moves more slowly than warm air, trapping the pollution in place for much longer, so it is consequently breathed in at a higher rate than in summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors such as traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it. The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the most dominant middle-income group in the urban area of the metropolis. Data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses the relationship between indoor air pollution levels and factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable window-to-floor-area ratio, windward or leeward side openings, the type of natural ventilation in the room (single-sided or cross-ventilation), floor height, residents' cleaning habits, etc.

Keywords: indoor air quality, occupant health, air pollution, architecture, urban environment

Procedia PDF Downloads 102
205 Analytical Study and Conservation Processes of Scribe Box from Old Kingdom

Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr

Abstract:

The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written in a black pigment. The box was made of several panels assembled together with wooden dowels and secured with plant ropes, and the entire box is covered with a red pigment. This study aims to use analytical techniques to identify and gain a deeper understanding of the box components. Moreover, the authors were particularly interested in using infrared reflectance transformation imaging (RTI-IR) to enhance the hidden inscriptions on the lid. Identification of the wood species was also included in this study. Visual observation and assessment were carried out to understand the condition of the box, and 3D and 2D programs were used to illustrate the wood joinery techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF) and Fourier transform infrared spectroscopy (FTIR) were used in this study to identify the wood species, the remains of insect bodies, the red pigment, the plant fibers and the adhesives from previous conservation; the RTI-IR technique was also very effective in enhancing the hidden inscriptions. The analysis results proved that the wooden panels and dowels were Acacia nilotica and the wooden rail was Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was identified as cellulose nitrate. The historical study of the inscriptions showed that they are hieratic writings of a funerary text. After its transportation from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy in order to restore the object, including cleaning, consolidation of friable pigments and writings, removal of the previous adhesive, and reassembly. The conservation processes applied were extremely effective, and the box is now ready for display or storage in the Grand Egyptian Museum.

Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation

Procedia PDF Downloads 268
204 Sedimentological and Petrographical Studies on Cored Samples from the Bentiu Formation, Muglad Basin

Authors: Yousif M. Makeen

Abstract:

This study presents the results of sedimentological and petrographical analyses of cored samples from the Bentiu Formation. The cored intervals consist of thick beds of sandstone, which are sometimes intercalated with beds of fine-grained sandstone and, in a minor case, with a siltstone bed. Detailed sedimentological facies analysis revealed the presence of six facies types, which can be listed in order of decreasing percentage occurrence as follows: (i) massive sandstone, (ii) planar cross-bedded sandstone, (iii) trough cross-bedded sandstone, (iv) finely laminated sandstone, (v) finely laminated siltstone and (vi) horizontally parted sandstone. Petrographical analyses under the plane-polarized light microscope and the scanning electron microscope (SEM) allowed the sandstone lithofacies within the cored intervals to be classified as kaolinitic subfeldspathic arenites. Among the detrital components, quartz grains are the most abundant (mainly monocrystalline quartz), followed by feldspars, micas, detrital and authigenic clays, and carbonaceous debris. Traces of lithic fragments, iron oxides and heavy minerals were observed in some of the analyzed samples, where they occur in minor amounts. Kaolinite is present mainly as an authigenic component in most of the analyzed samples, while quartz overgrowths occur in variable amounts in most of the investigated samples. Carbonates (calcite and siderite) are present in considerable amounts. Grain roundness in most of the investigated sandstone samples ranges from well-rounded to rounded and, in fewer samples, from sub-angular to angular. Most of the sandstone samples are moderately compacted and display point, concavo-convex and long grain contacts, whereas sutured grain contacts, which reflect a higher degree of compaction, are relatively less common, and floating grain contacts have been observed in minor quantities. Pore types in the analyzed samples are dominantly primary and secondary interparticle forms. Point-counted porosity values range from 19.6% to 30%. Average pore sizes are highly variable and range from 20 to 350 microns. Pore interconnectivity ranges from good to very good.
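As a brief illustration of how point-counted porosity figures like those quoted above are obtained, the sketch below tallies hypothetical thin-section point counts and expresses the pore-space points as a percentage of the total; the category names and counts are illustrative assumptions, not data from this study.

```python
# Hypothetical point-count tally for one thin section (illustrative values only)
counts = {
    "quartz": 310, "feldspar": 45, "mica": 12, "clay_matrix": 38,
    "carbonate_cement": 20, "primary_pore": 70, "secondary_pore": 28,
}

total = sum(counts.values())
pore_points = counts["primary_pore"] + counts["secondary_pore"]
porosity = 100 * pore_points / total          # point-counted porosity, %

print(f"point-counted porosity = {porosity:.1f}% of {total} counted points")
```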

Keywords: sandstone, sedimentological facies, porosity, quartz overgrowths

Procedia PDF Downloads 44
203 The Right to Water in the Lancang-Mekong River Basin Disputes

Authors: Heping Dang, Raymond Yu Wang

Abstract:

The Lancang-Mekong River is the most important international watercourse in mainland Southeast Asia. In recent years, the six riparian states, China, Myanmar, Laos, Thailand, Cambodia and Vietnam, have confronted increasing disputes over the use of the trans-boundary water. To settle these disputes and protect the fundamental right to water, quite a few inter-state mechanisms have been established, such as the Mekong River Commission, the economic cooperation program of the Greater Mekong Subregion, the ‘Belt and Road Initiative’, the ‘Lancang-Mekong Cooperation Mechanism’ and the ‘Lower Mekong Initiative’. Non-governmental organizations (NGOs) have also been important and constructive institutional entrepreneurs in trans-boundary water governance. Although the status and extent of the right to water are yet to be clearly defined, this paper aims 1) to unpack how the right to water is interpreted and exercised in the Lancang-Mekong River Basin disputes and 2) to evaluate the roles of the right to water in settling international water disputes. To achieve these objectives, secondary data such as archival documents of international law and relevant stakeholders will be compiled for analysis. First-hand information about the organizational structure, accountability, values and strategies of the international mechanisms and NGOs in question will also be collected through fieldwork in the Mekong river basin. Semi-structured interviews, group discussions and participatory observation will be conducted to collect data. The authors have access to the field sites through their abundant experience of collaborating with Mekong-based international NGOs in previous research projects. This research will show how the concepts and principles of international law and the UN guidelines are interpreted in practice. These principles include the definition and extent of the right to water, the practical use of ‘vital human need’, the indicators of the ‘adequacy of water’ (availability, quality and accessibility), and how the right to water is related to the progressive realization of the right to life. This down-to-earth research will enrich the theoretical discussion of international law, particularly international human rights law, within the UN framework. Moreover, the outcomes of this research will provide new insights into the roles that the right to water might play in consensus-building and dispute settlement in a rapidly changing context, where water is pivotal for poverty alleviation, biodiversity conservation and the promotion of sustainable livelihoods.

Keywords: international water dispute, Lancang-Mekong River, right to water, state and non-state actors

Procedia PDF Downloads 275
202 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model

Authors: A. Shakoor, M. Arshad

Abstract:

The utilization of groundwater resources in irrigation has increased significantly during the last two decades due to constrained canal water supplies. More than 70% of the farmers in Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and this unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, on the spatiotemporal variation in groundwater level and quality. The Processing MODFLOW for Windows (PMWIN) and MT3D (solute transport) models were used to simulate existing conditions and predict groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in developing the PMWIN model. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R²) and model efficiency (MEF) for the calibration and validation periods were calculated as 0.89 and 0.98, respectively, which indicated a high level of agreement between the calculated and measured data. For the solute transport model (MT3D), values of the advection and dispersion parameters were used. The model was then run for future scenarios up to 2030, assuming no major change in climate and a gradually increasing groundwater abstraction rate. The predicted results revealed that the groundwater level would decline at rates of 0.0131 to 1.68 m/year during 2013 to 2030, with the maximum decline on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level might increase tubewell installation and pumping costs. Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase at rates of 6.88 to 69.88 mg/L per year during 2013 to 2030, with the maximum increase on the lower side. It was found that by 2030 the share of good-quality water would decrease by 21.4%, while marginal- and hazardous-quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorates with the depth of the water table, i.e., TDS increases as the groundwater level declines. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (aquifer storage and recovery) wells, etc., be integrated to improve the management of groundwater for higher crop production in salt-affected soils.
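The calibration statistics quoted above (R² and MEF) can be reproduced from paired observed and simulated heads. The sketch below assumes the model efficiency is the Nash-Sutcliffe efficiency; the head values are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical observed and simulated groundwater heads (m) at matching times/wells
observed  = np.array([151.2, 150.8, 150.1, 149.6, 149.0, 148.7, 148.1])
simulated = np.array([151.0, 150.9, 150.3, 149.4, 149.1, 148.5, 148.2])

# coefficient of determination (R^2) between simulated and observed heads
r = np.corrcoef(observed, simulated)[0, 1]
r2 = r**2

# model efficiency, assumed here to be the Nash-Sutcliffe efficiency
mef = 1 - np.sum((observed - simulated)**2) / np.sum((observed - observed.mean())**2)

print(f"R^2 = {r2:.2f}, MEF = {mef:.2f}")
```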

Keywords: groundwater quality, groundwater management, PMWIN, MT3D model

Procedia PDF Downloads 374
201 Identification and Management of Septic Arthritis of the Untouched Glenohumeral Joint

Authors: Sumit Kanwar, Manisha Chand, Gregory Gilot

Abstract:

Background: Septic arthritis of the shoulder has infrequently been discussed, and infection of the untouched shoulder has not heretofore been described. We present four patients with glenohumeral septic arthritis. Methods: Case 1: a 59-year-old male with left shoulder pain in the anterior, posterior and superior aspects. Case 2: a 60-year-old male with fever, chills, and generalized muscle aches. Case 3: a 70-year-old male with right shoulder pain about the anterior and posterior aspects. Case 4: a 55-year-old male with global right shoulder pain, swelling, and limited range of motion (ROM). Results: In case 1, the left shoulder was affected. On physical examination, swelling was notable, and there was global tenderness with a painful ROM. Laboratory values indicated an erythrocyte sedimentation rate (ESR) of 96 and a C-reactive protein (CRP) of 304.30. Imaging studies were performed, and MRI indicated a high suspicion for an abscess with osteomyelitis of the humeral head. In case 2, the left arm was affected; he had swelling, global tenderness and painful ROM, his ESR was 38, his CRP was 14.9, and X-ray showed severe arthritis. Case 3 differed in that the right arm was affected; again, global tenderness and painful ROM were observed, his ESR was 94, his CRP was 10.6, and X-ray displayed an eroded glenoid space. In case 4, the right shoulder was affected; he had global tenderness and painful, limited ROM, ESR was 108, CRP was 2.4, and X-ray was non-significant. Discussion: Monoarticular septic arthritis of the virgin glenohumeral joint is seldom diagnosed in clinical practice. Common denominators include elevated ESR, painful and limited ROM, and involvement of the dominant arm. The male population is more frequently affected, with an average age of 57. Septic arthritis is managed with incision and drainage or needle aspiration of synovial fluid, supplemented with 3-6 weeks of intravenous antibiotics. Arthroscopy is preferred because of better irrigation and joint visualization; open surgical drainage may be indicated if the above methods fail. Conclusion: If a middle-aged male presents with vague anterior or posterior shoulder pain, elevated inflammatory markers and a low-grade fever, an X-ray should be performed. If this displays degenerative joint disease, a complete further workup with advanced imaging, such as MRI, CT, or ultrasound, should follow. If these imaging modalities display anterior joint space effusion with soft tissue involvement, septic arthritis of the untouched glenohumeral joint should be suspected, and surgery is indicated.

Keywords: glenohumeral joint, identification, infection, septic arthritis, shoulder

Procedia PDF Downloads 419
200 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor

Authors: Sourabh Jain, S. S. Jain

Abstract:

An intelligent transportation system (ITS) is the application of technologies for developing a user-friendly transportation system to extend the safety and efficiency of urban transportation systems in developing countries. These systems involve vehicles, drivers, passengers, road operators, and managers of transport services, all interacting with each other and the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. The intelligent transportation system is a product of the revolution in information and communications technologies that is the hallmark of the digital age. Basic ITS technology is oriented along three main directions: communications, information, and integration. Information acquisition (collection), processing, integration, and sorting are the basic activities of ITS. This paper presents an attempt to interpret and evaluate the performance of a 27.4 km long study corridor having eight intersections and four flyovers; the corridor consists of six-lane and eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, a stopwatch, a radar gun, and mobile GPS (GPS Tracker Lite). The performance analysis covered the identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by plotting of the speed contours. The paper proposes urban corridor management strategies, based on sensors integrated into both vehicles and roads, that have to be efficiently executable, cost-effective, and familiar to road users. Such strategies would help reduce congestion, fuel consumption, and pollution so as to provide comfort, safety, and efficiency to users.
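A small sketch of how a mid-block level of service can be graded from a volume-to-capacity (v/c) ratio, as part of the performance analysis described above. The thresholds, volume and capacity below are illustrative assumptions, not values from the study corridor or any particular highway capacity manual.

```python
# Level-of-service banding from a volume-to-capacity ratio (illustrative thresholds)
def los_from_vc(vc: float) -> str:
    bands = [(0.30, "A"), (0.50, "B"), (0.70, "C"), (0.85, "D"), (1.00, "E")]
    for limit, grade in bands:
        if vc <= limit:
            return grade
    return "F"  # over-capacity, forced flow

peak_volume = 5400      # vehicles/hour counted from video at a mid-block section (hypothetical)
capacity = 6600         # vehicles/hour for a six-lane divided section (hypothetical)
vc = peak_volume / capacity

print(f"v/c = {vc:.2f} -> LOS {los_from_vc(vc)}")
```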

Keywords: ITS strategies, congestion, planning, mobility, safety

Procedia PDF Downloads 175
199 Application of Ground-Penetrating Radar in Environmental Hazards

Authors: Kambiz Teimour Najad

Abstract:

The basic methodology of GPR involves the use of a transmitting antenna to send electromagnetic waves into the subsurface, which then bounce back to the surface and are detected by a receiving antenna. The transmitter and receiver antennas are typically placed on the ground surface and moved across the area of interest to create a profile of the subsurface. The GPR system consists of a control unit that powers the antennas and records the data, as well as a display unit that shows the results of the survey. The control unit sends a pulse of electromagnetic energy into the ground, which propagates through the soil or rock until it encounters a change in material or structure. When the electromagnetic wave encounters a buried object or structure, some of the energy is reflected back to the surface and detected by the receiving antenna. The GPR data are then processed using specialized software that analyzes the amplitude and travel time of the reflected waves. By interpreting the data, GPR can provide information on the depth, location, and nature of subsurface features and structures. GPR has several advantages over other geophysical survey methods, including its ability to provide high-resolution images of the subsurface and its non-invasive nature, which minimizes disruption to the site. However, the effectiveness of GPR depends on several factors, including the type of soil or rock, the depth of the features being investigated, and the frequency of the electromagnetic waves used. In environmental hazard assessments, GPR can be used to detect buried structures, such as underground storage tanks, pipelines, or utilities, which may pose a risk of contamination to the surrounding soil or groundwater. GPR can also be used to assess soil stability by identifying areas of subsurface voids or sinkholes, which can lead to collapse of the surface. Additionally, GPR can be used to map the extent and movement of groundwater contamination, which is critical in designing effective remediation strategies. In summary, the methodology of GPR in environmental hazard assessments involves the use of electromagnetic waves to create high-resolution images of the subsurface, which are then analyzed to provide information on the depth, location, and nature of subsurface features and structures. This information is critical in identifying and mitigating environmental hazards, and the non-invasive nature of GPR makes it a valuable tool in this field.
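The travel-time analysis mentioned above reduces to a simple depth estimate: the wave speed in the ground is approximated as v = c / sqrt(relative permittivity), and the reflector depth is v multiplied by half the two-way travel time. The permittivity and travel time in the sketch below are illustrative assumptions, not values from any particular survey.

```python
# Basic GPR depth estimate from two-way travel time
C = 0.2998   # speed of light in free space, m/ns

def reflector_depth(two_way_time_ns: float, rel_permittivity: float) -> float:
    v = C / rel_permittivity**0.5        # wave speed in the subsurface material, m/ns
    return v * two_way_time_ns / 2.0     # one-way distance to the reflector, m

# e.g. a buried tank producing a reflection at 40 ns in moist soil (eps_r ~ 16, hypothetical)
print(f"estimated depth: {reflector_depth(40.0, 16.0):.2f} m")
```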

Keywords: GPR, hazard, landslide, rock fall, contamination

Procedia PDF Downloads 75
198 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering

Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher

Abstract:

Land-cover and land-use change information are important because of their practical uses in various applications, including deforestation, damage assessment, disaster monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detecting change through optical remote sensing images is not a trivial task due to many factors, including the vagueness of the boundaries between changed and unchanged regions and the spatial dependence of pixels on their neighborhood. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most optical remote sensing images, the transition between the two clusters (change and no change) is overlapping, and existing methods are incapable of providing accurate cluster boundaries. In this regard, a methodology has been proposed which uses fuzzy c-means clustering to tackle the vagueness between the changed and unchanged classes by formulating soft boundaries between them. Furthermore, in order to exploit the neighborhood information of the pixels, input patterns are generated for each pixel from the bi-temporal images using 3×3, 5×5 and 7×7 windows. The between-image and within-image spatial dependence of pixels on their neighborhood is quantified using the Pearson product-moment correlation and Moran's I statistic, respectively. The proposed technique consists of two phases. First, between-image and within-image spatial correlation is calculated to utilize the information that pixels at different locations may not be independent. Second, the fuzzy c-means technique is used to produce two clusters from the input features, not only handling the vagueness between the changed and unchanged classes but also exploiting the spatial correlation of the pixels. To show the effectiveness of the proposed technique, experiments were conducted on multispectral, bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 Thematic Mapper optical image of Los Angeles, California, is used in this study; it covers a long forest fire that continued from July until October 2009. Optical remote sensing images from early and late in the fire were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on the earth's surface) and is compared with the existing k-means clustering technique. Experimental results showed that the proposed technique performs better than the existing technique. The proposed technique can easily be extended to optical hyperspectral images and is suitable for many practical applications.
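For readers unfamiliar with the soft-boundary step, the sketch below runs a generic two-cluster fuzzy c-means on a synthetic per-pixel difference feature and then hardens the memberships into a change map. It is a minimal illustration of the clustering idea only; it does not include the authors' 3×3/5×5/7×7 neighborhood features or the Pearson and Moran's I correlation terms, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-pixel difference feature: most pixels unchanged, some burned (changed)
diff = np.concatenate([rng.normal(0.05, 0.02, 8000),    # unchanged pixels
                       rng.normal(0.40, 0.08, 2000)])   # changed (burned) pixels
X = diff.reshape(-1, 1)

def fuzzy_c_means(X, c=2, m=2.0, iters=100, tol=1e-5):
    """Standard fuzzy c-means: soft memberships U (n x c) and cluster centers."""
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)     # membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

centers, U = fuzzy_c_means(X)
changed = int(np.argmax(centers[:, 0]))                  # larger difference = change cluster
change_map = U[:, changed] > 0.5                         # harden the soft boundary

print(f"cluster centers: {centers.ravel()}, pixels flagged as changed: {int(change_map.sum())}")
```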

Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing

Procedia PDF Downloads 166
197 Synthesis of Porphyrin-Functionalized Beads for Flow Cytometry

Authors: William E. Bauta, Jennifer Rebeles, Reggie Jacob

Abstract:

Porphyrins are noteworthy in biomedical science for their cancer tissue accumulation and photophysical properties. The preferential accumulation of some porphyrins in cancerous tissue has been known for many years. This, combined with their characteristic photophysical and photochemical properties, including their strong fluorescence and their ability to generate reactive oxygen species in vivo upon laser irradiation, has led to much research into the application of porphyrins as cancer diagnostic and therapeutic agents. Porphyrins have been used as dyes to detect cancer cells both in vivo and, less commonly, in vitro. In one example, human sputum samples from lung cancer patients and patients without the disease were dissociated and stained with the porphyrin TCPP (5,10,15,20-tetrakis-(4-carboxyphenyl)-porphine). Cells were analyzed by flow cytometry, and cancer samples were identified by their higher TCPP fluorescence intensity relative to the no-cancer controls. However, quantitative analysis of fluorescence in cell suspensions stained with multiple fluorophores requires particles stained with each of the individual fluorophores as controls. Fluorescent control particles must be compatible in size with flow cytometer fluidics and have favorable hydrodynamic properties in suspension. They must also display fluorescence comparable to the cells of interest and be stable upon storage. Amine-functionalized spherical polystyrene beads in the 5- to 20-micron diameter range were reacted with TCPP and EDC in aqueous pH 6 buffer overnight to form amide bonds. Beads were isolated by centrifugation and tested by flow cytometry. The 10-micron amine-functionalized beads displayed the best combination of fluorescence intensity and hydrodynamic properties, such as lack of clumping and remaining in suspension during the experiment. These beads were further optimized by varying the stoichiometry of EDC and TCPP relative to the amine. The reaction was accompanied by the formation of a TCPP-related particulate, which was removed, after bead centrifugation, using a microfiltration process. The resultant TCPP-functionalized beads were compatible with flow cytometry conditions and displayed fluorescence comparable to that of stained cells, which allowed their use as fluorescence standards. The beads were stable in refrigerated storage in the dark for more than eight months. This work demonstrates the first preparation of porphyrin-functionalized flow cytometry control beads.

Keywords: tetraaryl porphyrin, polystyrene beads, flow cytometry, peptide coupling

Procedia PDF Downloads 87
196 In vitro and in vivo Infectivity of Coxiella burnetii Strains from French Livestock

Authors: Joulié Aurélien, Jourdain Elsa, Bailly Xavier, Gasqui Patrick, Yang Elise, Leblond Agnès, Rousset Elodie, Sidi-Boumedine Karim

Abstract:

Q fever is a worldwide zoonosis caused by the gram-negative obligate intracellular bacterium Coxiella burnetii. Following the recent outbreaks in the Netherlands, a hypervirulent clone was found to be the cause of severe human cases of Q fever. In livestock, the clinical manifestations of Q fever are mainly abortions. Although abortion rates differ between ruminant species, the virulence of C. burnetii remains understudied, especially in enzootic areas. In this study, the infectious potential of three C. burnetii isolates collected from French farms of small ruminants was compared to that of the reference strain Nine Mile (in phase II and in an intermediate phase) using an in vivo (CD1 mouse) model. Mice were challenged with 10⁵ live bacteria discriminated by propidium monoazide-qPCR targeting the icd gene. After footpad inoculation, the spleen and popliteal lymph node were harvested at 10 days post-inoculation (p.i.). Strain invasiveness in the spleen and popliteal nodes was assessed by qPCR assays targeting the icd gene. Preliminary results showed that the avirulent strains (in phase II) failed to pass the popliteal barrier and thus to colonize the spleen. This model allowed significant differentiation of strain invasiveness in a biological host and therefore the identification of distinct virulence profiles. In view of these results, we plan to go further by testing fifteen additional C. burnetii isolates from French sheep, goat and cattle farms using the above-mentioned in vivo model. All 15 strains display distant MLVA (multiple-locus variable-number of tandem repeat analysis) genotypic profiles. Five of the fifteen isolates will also be tested in vitro on ovine and bovine macrophage cells. Cells and supernatants will be harvested on days 1, 2, 3 and 6 p.i. to assess the in vitro multiplication kinetics of the strains. In conclusion, our findings might help the implementation of surveillance of virulent strains and ultimately allow prophylaxis measures in livestock farms to be adapted.

Keywords: Q fever, invasiveness, ruminant, virulence

Procedia PDF Downloads 357