Search results for: mobile edge computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3226

256 Cross-Cultural Collaboration Shaping Co-Creation Methodology to Enhance Disaster Risk Management Approaches

Authors: Jeannette Anniés, Panagiotis Michalis, Chrysoula Papathanasiou, Selby Knudsen

Abstract:

The RiskPACC project brings together researchers, practitioners, and first responders from nine European countries in a co-creation approach aimed at developing customised solutions that meet the needs of end users. The co-creation workshops aim to enhance the communication pathways between local civil protection authorities (CPAs) and citizens, in an effort to close the risk perception-action gap (RPAG). The participants in the workshops include a variety of stakeholders as well as citizens, fostering dialogue between the groups and supporting citizen participation in disaster risk management (DRM). The co-creation methodology in place implements co-design elements through the integration of four ICT tools. These ICT tools include web-based and mobile application solutions at different stages of development, ranging from concept formulation and validation to pilot demonstrations. In total, seven case studies are foreseen in RiskPACC. The workflow of the workshops is designed to be adaptive to the particular needs of each of the seven case study countries and their cultures. This work provides an overview of the preparation and conduct of the workshops, in which researchers and practitioners focused on mapping these different needs of the end users. The latter included first responders as well as volunteers and citizens who actively participated in the co-creation workshops. The strategies for improving communication between CPAs and citizens differ across the countries, and the modules of the co-creation methodology are adapted in response to such differences. Moreover, the project partners observed that the structure of such workshops is perceived differently in the seven case studies. The co-creation methodology is therefore itself a design method undergoing several iterations, which are eventually shaped by cross-cultural collaboration.
For example, some case studies applied other modules according to the participatory group recruited. The participants were technical experts, teachers, citizens, first responders, or volunteers, among others. This work presents the divergent approaches of the seven case studies in implementing the proposed co-creation methodology, in response to different perceptions of the modules. An analysis of the adaptations and their implications is also provided to assess where the case studies' objective of improving disaster resilience has been achieved.

Keywords: citizen participation, co-creation, disaster resilience, risk perception, ICT tools

Procedia PDF Downloads 54
255 Conceptualizing Personalized Learning: Review of Literature 2007-2017

Authors: Ruthanne Tobin

Abstract:

As our data-driven, cloud-based, knowledge-centric lives become ever more global, mobile, and digital, educational systems everywhere are struggling to keep pace. Schools need to prepare students to become critical-thinking, tech-savvy, lifelong learners who are engaged and adaptable enough to find their unique calling in a post-industrial world of work. Recognizing that no nation can afford poor achievement or high dropout rates without jeopardizing its social and economic future, the thirty-two nations of the OECD are launching initiatives to redesign schools, generally under the banner of Personalized Learning or 21st Century Learning. Their intention is to transform education by situating students as co-enquirers with their teachers and co-contributors to what, when, and how learning happens for each individual. In this focused review of the 2007-2017 literature on personalized learning, the author sought answers to two main questions: “What are the theoretical frameworks that guide personalized learning?” and “What is the conceptual understanding of the model?” Ultimately, the review reveals that, although the research area is overly theorized and under-substantiated, it does provide a significant body of knowledge about this potentially transformative educational restructuring. For example, it addresses the following questions: a) What components comprise a PL model? b) How are teachers facilitating agency (voice and choice) in their students? c) What kinds of systems, processes, and procedures are being used to guide the innovation? d) How is learning organized, monitored, and assessed? e) What role do inquiry-based models play? f) How do teachers integrate the three types of knowledge: content, pedagogical, and technological? g) Which kinds of forces enable, and which impede, personalized learning? h) What is the nature of the collaboration among teachers? i) How do teachers co-regulate differentiated tasks?
One finding of the review is that while technology can dramatically expand access to information, expectations of its impact on teaching and learning are often disappointing unless the technologies are paired with excellent pedagogies that address students’ needs, interests, and aspirations. This literature review fills a significant gap in this emerging field of research, as it provides the conceptual clarity whose absence has hampered both the theorizing and the classroom implementation of a personalized learning model.

Keywords: curriculum change, educational innovation, personalized learning, school reform

Procedia PDF Downloads 198
254 An Interactive User-Oriented Approach to Optimizing Public Space Lighting

Authors: Tamar Trop, Boris Portnov

Abstract:

Public Space Lighting (PSL) of outdoor urban areas promotes comfort, defines spaces and neighborhood identities, enhances perceived safety and security, and contributes to residential satisfaction and wellbeing. However, if excessive or misdirected, PSL leads to unnecessary energy waste and increased greenhouse gas emissions, poses a non-negligible threat to the nocturnal environment, and may become a potential health hazard. At present, PSL is designed according to international, regional, and national standards, which consolidate best practice. Yet, knowledge regarding the optimal light characteristics needed for creating a perception of personal comfort and safety in densely populated residential areas, and the factors associated with this perception, is still scarce. The presented study suggests a paradigm shift in designing PSL towards a user-centered approach, which incorporates pedestrians' perspectives into the process. The study is an ongoing joint research project of the Israeli and Chinese Ministries of Science and Technology. Its main objectives are to reveal inhabitants' perceptions of and preferences for PSL in different densely populated neighborhoods in China and Israel, and to develop a model that links instrumentally measured parameters of PSL (e.g., intensity, spectra, and glare) with its perceived comfort and quality, while controlling for three groups of attributes: locational, temporal, and individual. To investigate measured and perceived PSL, the study employed various research methods and data collection tools, developed a location-based mobile application, and used multiple data sources, such as satellite multi-spectral night-time light imagery, census statistics, and detailed planning schemes. One of the study’s preliminary findings is that a higher sense of safety in the investigated neighborhoods is not associated with higher levels of light intensity. This implies a potential for energy saving in brightly illuminated residential areas.
Study findings might contribute to the design of a smart and adaptive PSL strategy that enhances pedestrians’ perceived safety and comfort while reducing light pollution and energy consumption.
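
The preliminary finding above, that perceived safety does not rise with measured light intensity, is essentially a statement about (the absence of) correlation. The sketch below, with invented site-level numbers rather than project data, shows how such a check can be run; the variables and figures are illustrative assumptions, not the study's model.

```python
# Illustrative only: do perceived-safety ratings rise with measured
# illuminance? All numbers below are invented, not study data.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical segment-level data: mean illuminance (lux) and mean
# perceived-safety rating (1-5) for ten street segments.
lux = [4, 6, 9, 12, 15, 20, 26, 33, 41, 50]
safety = [3.1, 3.8, 3.9, 3.7, 4.0, 3.6, 3.9, 3.5, 3.8, 3.6]

r = pearson_r(lux, safety)  # near zero: brighter is not "safer-feeling"
```

A near-zero coefficient on data like this is what would motivate the energy-saving conclusion: adding light beyond some level buys no perceived safety.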

Keywords: energy efficiency, light pollution, public space lighting, PSL, safety perceptions

Procedia PDF Downloads 105
253 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the field of sensors, electronics, and computing results in increasingly frequent application of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion used for evacuating people from heights during a fire. For both systems, it is possible to ensure adaptive performance, but the realization of the system’s adaptation is completely different. The reason for this is the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with minimal changes of valve opening. In turn, mass flow rates in safety air cushions involve gas release areas measured in thousands of square centimeters. Because of these facts, the two shock-absorbing systems are controlled using completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided by predicting the entire impact mitigation process. Similarities of both approaches, including the applied models, algorithms, and equipment, are discussed.
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
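
The contrast drawn above, millisecond-scale real-time retuning of a valve versus a semi-passive scheme that predicts one setting up front, can be sketched abstractly. The toy plant model, gains, and target below are invented for illustration and are not the authors' controller.

```python
# Toy comparison of the two adaptation schemes described above.
# The linear "plant" and all constants are invented assumptions.
def plant_decel(opening, gain=50.0):
    """Toy static plant: deceleration produced by a given valve opening."""
    return gain * opening

def real_time_control(target, steps=200, kp=0.004, opening=0.1):
    """Real-time scheme: proportional update of the valve opening once
    per 1 ms step, tracking the target deceleration."""
    for _ in range(steps):
        error = target - plant_decel(opening)
        opening = min(max(opening + kp * error, 0.0), 1.0)
    return opening

def semi_passive_setting(target, gain=50.0):
    """Semi-passive scheme: invert the plant model once, before impact."""
    return min(max(target / gain, 0.0), 1.0)

target = 30.0  # desired deceleration, arbitrary units
rt = real_time_control(target)   # converges to the same setting
sp = semi_passive_setting(target)
```

With a perfect model both schemes land on the same opening; the real-time loop earns its keep when the plant drifts from the model, which is exactly why it suits the fast piezoelectric valve and not the air cushion.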

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 66
252 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of new media, and particularly social media, through fixed or mobile devices, has changed in a staggering way our perception of what is and is not “intimate” and “safe” in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing on an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media and social networks research, as well as the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear and evidenced shift in users’ perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras. The role of social media in this shift was catalytic. (2) Basic Web 2.0 applications changed dramatically the nature of the Internet itself, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the “audience” we address in interpersonal communication.
The former "general and unknown audience" of personal home pages has been converted into an "individual and personal" audience chosen by the user according to various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed in a fundamental way. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advent of Web 2.0, lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 343
251 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery

Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović

Abstract:

Capsaicin is a naturally occurring alkaloid extracted from the fruits of different Capsicum species. It has been employed topically to treat many conditions such as rheumatoid arthritis, osteoarthritis, cancer pain, and nerve pain in diabetes. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of capsaicin by intravenous administration make topical application of capsaicin advantageous. In this study, we have evaluated differences in the dissolution characteristics of an 11 mg capsaicin patch (purchased on the market) at different dissolution rotation speeds. The patch area is 308 cm² (22 cm × 14 cm); it contains 36 µg of capsaicin per square centimeter of adhesive. USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted using USP Apparatus 5 (n=6), an ERWEKA DT800 dissolution tester (paddle type) with the addition of a disc. The patch was cut into 9 cm² pieces; each piece was placed against a disc (delivery side up), retained with a stainless-steel screen, and exposed to 500 mL of phosphate buffer solution, pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and different rotation speeds (50 ± 5, 100 ± 5, and 150 ± 5 rpm). Aliquots of 5 mL were withdrawn at various time intervals (1, 4, 8, and 12 hours) and replaced with 5 mL of dissolution medium. Withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method was developed, optimized, and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 × 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v), at a flow rate of 0.9 mL/min, with an injection volume of 10 μL and a detection wavelength of 222 nm.
The RP-LC method is simple, sensitive, and accurate and can be applied for fast (total chromatographic run time of 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, the relative difference in the dissolution rate of capsaicin after 12 hours increased with dissolution rotation speed (100 rpm vs. 50 rpm: 84.9 ± 11.3%; 150 rpm vs. 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, and 7 and a paddle-over-extraction-cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) can be considered a discriminatory test, able to point out differences in the dissolution rate of capsaicin at different rotation speeds.
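
The patch arithmetic above can be checked directly, and the 5 mL withdrawals replaced with fresh medium call for the standard cumulative-release correction; the correction function is a textbook formula, not one stated in the abstract.

```python
# Sanity check of the reported patch figures, plus the usual correction
# for sampling with medium replacement (textbook formula, illustrative).
full_area_cm2 = 22 * 14             # 308 cm^2, as reported
loading_ug_per_cm2 = 36
total_dose_mg = full_area_cm2 * loading_ug_per_cm2 / 1000  # ~11 mg label
cut_dose_ug = 9 * loading_ug_per_cm2  # dose in one 9 cm^2 test piece

def cumulative_release(concs_ug_ml, v_total=500.0, v_sample=5.0):
    """Cumulative drug released (ug) when each 5 mL aliquot is replaced
    with fresh medium: later points are corrected for drug removed."""
    released, removed = [], 0.0
    for c in concs_ug_ml:
        released.append(c * v_total + removed)
        removed += c * v_sample
    return released
```

The 308 cm² × 36 µg/cm² product comes to about 11.1 mg, consistent with the 11 mg label claim, and each 9 cm² piece carries 324 µg.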

Keywords: capsaicin, in vitro, patch, RP-LC, transdermal

Procedia PDF Downloads 202
250 The Relationship between Central Bank Independence and Inflation: Evidence from Africa

Authors: R. Bhattu Babajee, Marie Sandrine Estelle Benoit

Abstract:

The past decades have witnessed a considerable institutional shift towards Central Bank Independence across the economies of the world. The motivation behind such a change is the acceptance that increased central bank autonomy can alleviate inflation bias. Hence, it is pertinent to study whether Central Bank Independence acts as a significant factor behind price stability in African economies, or whether this macroeconomic outcome results from other economic, political, or social factors. The main research objective of this paper is to assess the relationship between central bank autonomy and inflation in African economies, where inflation has proved to be a serious problem. To this end, we measure the degree of CBI in Africa by computing the turnover rates of central bank governors, thereby studying whether decisions made by African central banks are affected by external forces. The purpose of this study is to investigate empirically the association between Central Bank Independence (CBI) and inflation for 10 African economies over the period from 1995 to 2012. The sample includes Botswana, Egypt, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Nigeria, South Africa, and Uganda. In contrast to much of the empirical research, we do not use the usual static panel model, as it is associated with potential misspecification arising from the absence of dynamics. To address this issue, a dynamic panel data model integrating several control variables is used. Firstly, the analysis includes dynamic terms to capture the persistence of inflation. Given the likelihood of inflation inertia in African countries, lagged inflation needs to be included in the empirical model. Secondly, because of the known reverse causality between Central Bank Independence and inflation, the system generalized method of moments (GMM) is employed.
With GMM estimators, unknown forms of heteroskedasticity and autocorrelation in the error term are admissible. Thirdly, control variables are used to enhance the efficiency of the model. The main finding of this paper is that central bank independence is negatively associated with inflation, even after including control variables.
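
The governor turnover rate (TOR) used above as a CBI proxy is simply the number of governor changes divided by the number of years observed, with higher turnover read as lower independence. A minimal sketch, with invented dates:

```python
# Turnover-rate proxy for central bank independence. The country and
# change years below are invented for illustration only.
def turnover_rate(change_years, start, end):
    """Governor changes per year over the window [start, end] inclusive."""
    years = end - start + 1
    changes = [y for y in change_years if start <= y <= end]
    return len(changes) / years

# Hypothetical country: governor replaced in 1997, 2001, 2003, and 2010.
tor = turnover_rate([1997, 2001, 2003, 2010], 1995, 2012)
```

Four changes in an 18-year window give a TOR of about 0.22 changes per year; in this literature, higher values are interpreted as governors not outlasting the political cycle.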

Keywords: central bank independence, inflation, macroeconomic variables, price stability

Procedia PDF Downloads 347
249 The Study of Cost Accounting in S Company Based on TDABC

Authors: Heng Ma

Abstract:

Third-party warehousing logistics plays an important role in the development of external logistics. At present, third-party logistics in our country is still a young industry, and its accounting system has not yet been established. The current financial accounting of third-party warehousing logistics largely follows traditional thinking and can only provide the total cost of the entire enterprise for the accounting period, without reflecting indirect operating cost information. In order to solve the problem of cost information distortion in the third-party logistics industry and improve the level of logistics cost management, this paper combines theoretical research and case analysis to model cost allocation by building a third-party logistics costing model based on Time-Driven Activity-Based Costing (TDABC), and takes S company as an example to account for and control warehousing logistics costs. Based on the idea that “products consume activities and activities consume resources”, TDABC takes time as the main cost driver and uses time equations to assign resources to cost objects. In S company, the cost objects are three warehouses engaged in warehousing and transportation services (the second warehouse serves as a transport point). Each warehouse comprises five departments: the Business Unit, Production Unit, Settlement Center, Security Department, and Equipment Division; the activities in these departments are classified as in-out of storage forecasting, in-out of storage or transit, and safekeeping. By computing the capacity cost rate and building the time equations, the paper calculates the final operating cost so as to reveal the real cost. The numerical analysis results show that TDABC can accurately reflect the cost allocation to service customers and reveal the spare capacity cost of each resource center, verifying the feasibility and validity of TDABC in third-party logistics cost accounting.
This inspires enterprises to focus on customer relationship management and to reduce idle cost, strengthening the cost management of third-party logistics enterprises.
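
The two core TDABC quantities named above, the capacity cost rate and the time equation, can be sketched as follows; all figures are illustrative, not S company data.

```python
# TDABC in two functions: cost per minute of supplied capacity, and a
# time equation assigning minutes (hence cost) to one cost object.
# All numbers are invented for illustration.
def capacity_cost_rate(total_resource_cost, practical_capacity_min):
    """Cost per minute of practical capacity supplied by a department."""
    return total_resource_cost / practical_capacity_min

def time_equation(base_min, drivers):
    """t = beta0 + sum(beta_i * X_i): minutes consumed by one order,
    where each (beta_i, X_i) pair is a per-unit time and its quantity."""
    return base_min + sum(beta * x for beta, x in drivers)

rate = capacity_cost_rate(84_000, 42_000)  # department cost / minutes
# 3 min base + 0.5 min per line item (10 items) + 2 min if refrigerated
minutes = time_equation(3.0, [(0.5, 10), (2.0, 1)])
order_cost = rate * minutes
```

Summing assigned minutes over all orders and comparing against practical capacity is what exposes the spare (idle) capacity cost mentioned in the abstract.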

Keywords: third-party logistics enterprises, TDABC, cost management, S company

Procedia PDF Downloads 330
248 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP. A temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, which quantifies the impact of the presence or value of a clinical condition on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation.
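
The validation step above compares impact scores with LR log odds ratios via Spearman's rho, which depends only on ranks. A minimal stdlib sketch (not the authors' code, and with toy numbers):

```python
# Spearman's rank correlation from scratch, for the kind of comparison
# described above (impact scores vs. log odds ratios). Toy data only.
def _ranks(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Pearson correlation of the rank vectors."""
    rx, ry = _ranks(xs), _ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

Because only the ordering matters, any monotone relationship between the two explanation scales yields rho near 1, which is the sense in which rho = 0.74 "validates" the impact scores.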

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 135
247 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) solver (interFoam) was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in shear-thinning fluids, which better reflect the apparent rheology of foam.
These simulations shed new light on the cone's behavior within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, enabling better evaluation of the efficiency of the cones as foam breakers. This study contributes to clarifying the mechanisms behind foam breaker performance, at least in part, using modern CFD techniques.
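
The Herschel-Bulkley model mentioned above takes the form tau = tau0 + k * gamma_dot**n, which is shear-thinning when n < 1. A small sketch with illustrative parameter values (not those used in the simulations):

```python
# Herschel-Bulkley constitutive law and the resulting apparent
# viscosity. Parameter values are illustrative placeholders, not the
# fitted foam rheology from the study.
def hb_stress(gamma_dot, tau0=10.0, k=5.0, n=0.4):
    """Shear stress (Pa) at shear rate gamma_dot (1/s):
    yield stress tau0 plus a power-law term k * gamma_dot**n."""
    return tau0 + k * gamma_dot ** n

def apparent_viscosity(gamma_dot, **params):
    """Apparent viscosity tau / gamma_dot (Pa s), for gamma_dot > 0."""
    return hb_stress(gamma_dot, **params) / gamma_dot
```

With n < 1 the apparent viscosity falls as the shear rate rises, which is why the strongly sheared region near the rotating cone behaves very differently from the bulk foam.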

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 184
246 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load on the system. But with the huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data volume of each partition balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This yields QoS (Quality of Service) improvements in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
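
One way to picture the role of Hamming distance in vertical partitioning is via binary attribute-usage vectors over the query workload: attributes read by the same queries are close in Hamming distance and are good candidates for the same fragment. The workload below is invented, and this is a simplified sketch, not the VPA-RTSBD algorithm itself.

```python
# Simplified illustration of Hamming-distance-based attribute affinity
# for vertical partitioning. The table and workload are invented.
def hamming(a, b):
    """Number of positions where the two binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))

# usage[attr][q] = 1 if query q reads attr (4 queries in the workload)
usage = {
    "id":    (1, 1, 1, 1),
    "lat":   (1, 1, 0, 0),
    "lon":   (1, 1, 0, 0),
    "speed": (0, 0, 1, 1),
}

def closest_pair(usage):
    """Attribute pair with the smallest Hamming distance: the best
    candidates to co-locate in one vertical fragment."""
    attrs = list(usage)
    pairs = [(hamming(usage[a], usage[b]), a, b)
             for i, a in enumerate(attrs) for b in attrs[i + 1:]]
    return min(pairs)

d, a, b = closest_pair(usage)  # lat and lon are always read together
```

Here `lat` and `lon` have identical usage vectors (distance 0), so placing them in the same fragment lets the two location queries touch one partition while `speed` queries run on another in parallel.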

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 137
245 Usage of Cyanobacteria in Battery: Saving Money, Enhancing the Storage Capacity, Making Portable, and Supporting the Ecology

Authors: Saddam Husain Dhobi, Bikrant Karki

Abstract:

The main objective of this paper is to save money, preserve the ecological balance of terrestrial organisms, help control global warming, and enhance the storage capacity of the battery at the required weight and thinness by using Cyanobacteria in the battery. To fulfil this purpose, we can draw on several methods: analytical, biological, chemical, theoretical, and physical, together with some engineering design. Using these methods, we can produce a special type of battery with a long life, high storage capacity, and clean operation that also saves money, by using the byproduct of Cyanobacteria, i.e., glucose. Cyanobacteria are a special type of bacteria that produce different types of extracellular glucose and oxygen with the help of a little sunlight, water, and carbon dioxide, and they can survive in freshwater, marine environments, and on land. In this process, O₂ production is higher than in plants due to the rapid growth rate of Cyanobacteria. The materials required to produce glucose with the help of Cyanobacteria are easily available. Since CO₂ is a greenhouse gas that causes global warming, we can utilize this gas and preserve our ecological balance, while the byproduct glucose (C₆H₁₂O₆) can be utilized as raw material for the battery and the released O₂ is used by living organisms. The glucose produced by Cyanobacteria enters the Krebs cycle (citric acid cycle), in which glucose is completely oxidized and all the available energy of the glucose molecule is released in the form of electrons and protons. If we use suitable anodes and cathodes, we can capture these electrons and protons to produce the required electric current with the help of the byproduct of Cyanobacteria. According to "Virginia Tech bio-battery" and "Sony" work, 13 enzymes and air are used to harvest nearly 24 electrons from a single glucose unit, with an output power of 0.8 mW/cm², a current density of 6 mA/cm², and an energy storage density of 596 Ah/kg.
This last figure is impressive, at roughly 10 times the energy density of the lithium-ion batteries in mobile devices. When we use Cyanobacteria in a battery, we are able to reduce carbon dioxide, help counter global warming, enhance the storage capacity of the battery to more than 10 times that of a lithium battery, save money, and support ecological balance. In this way, we can produce energy from Cyanobacteria and use it in a battery for various benefits. In addition, owing to their mass, size, and easy cultivation, they make it easier to maintain the size of the battery. Hence, we can use Cyanobacteria for a battery of suitable size, enhancing the storage capacity of the battery, helping the environment, and supporting portability.
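
The electron bookkeeping above can be cross-checked against Faraday's law: 24 harvested electrons per glucose molecule set a theoretical ceiling on the charge obtainable per kilogram of fuel. The calculation below is our own back-of-envelope check, not taken from the cited work; the reported 596 Ah/kg is a realized figure well below this ceiling.

```python
# Theoretical charge per kg of glucose if 24 electrons per molecule are
# harvested, via Faraday's law. A sanity check, not data from the paper.
F = 96485.0          # Faraday constant, C per mol of electrons
M_GLUCOSE = 180.16   # molar mass of glucose, g/mol
electrons = 24       # electrons harvested per glucose (as reported)

coulombs_per_gram = electrons * F / M_GLUCOSE
ah_per_kg = coulombs_per_gram * 1000 / 3600  # C/kg -> Ah/kg
```

The ceiling comes out around 3,570 Ah/kg, so the reported 596 Ah/kg corresponds to capturing roughly a sixth of the theoretical charge.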

Keywords: anode, byproduct, cathode, cyanobacteria, glucose, storage capacity

Procedia PDF Downloads 318
244 Drug Susceptibility and Genotypic Assessment of Mycobacterial Isolates from Pulmonary Tuberculosis Patients in North East Ethiopia

Authors: Minwuyelet Maru, Solomon Habtemariam, Endalamaw Gadissa, Abraham Aseffa

Abstract:

Background: Tuberculosis is a major public health problem in Ethiopia. The burden of TB is aggravated by emergence and expansion of drug resistant tuberculosis and different lineages of Mycobacterium tuberculosis (M. tuberculosis) have been reported in many parts of the country. Describing strains of Mycobacterial isolates and drug susceptibility pattern is necessary. Method: Sputum samples were collected from smear positive pulmonary TB patients age >= 7 years between October 1, 2012 to September 30, 2013 and Mycobacterial strains isolated on Loweensten Jensen (LJ) media. Each strain was characterized by deletion typing and Spoligotyping. Drug sensitivity testing was determined with the indirect proportion method using Middle brook 7H10 media and association to determine possible risk factors to drug resistance was done. Result: A total of 144 smear positive pulmonary tuberculosis patients were enrolled. The age of participants ranged from 7 to 78 with mean age of 29.22 (±10.77) years. In this study 82.2% (n=97) of the isolates were sensitive to the four first line anti-tuberculosis drugs and resistance to any of the four drugs tested was 17.8% (n=21). A high frequency of any resistance was observed in isoniazid, 13.6%, (n=16) followed by streptomycin, 11.8% (n=14). No significant association of isoniazid resistance with HIV, sex and history of previous TB treatment was observed but there was significant association with age, high between 31-35 years of age (p=0.01). Majority, 89.9% (n=128) of participants were new cases and only 11.1% (n=16) had history of previous TB treatment. No MDR-TB from new cases and 2 MDRTB (13.3%) was isolated from re-treatment cases which was significantly associated with previous TB treatment (p<0.01). Thirty two different types of spoligotype patterns were identified and 74.1% were grouped in to 13 clusters. The dominant strains were SIT 25, 18.1% (n=21), SIT 53, 17.2% (n=20) and SIT 149, 8.6% (n=10). 
Lineage 4 was the predominant lineage, followed by lineage 3 and lineage 7, comprising 65.5% (n=76), 28.4% (n=33) and 6% (n=7), respectively. The majority of strains from lineages 3 and 4 were SIT 25 (63.6%) and SIT 53 (26.3%), whereas SIT 343 was the dominant strain from lineage 7 (71.4%). Conclusion: The wide spread of lineage 3 and lineage 4 of the modern lineage and the high number of strain clusters indicate high ongoing transmission. The high proportion of resistance to any of the first-line anti-tuberculosis drugs may be a potential source for the emergence of MDR-TB. The wide spread of SIT 25 and SIT 53, which tend to transmit easily, and the higher isoniazid resistance in the working and mobile age group of 31-35 years may increase the risk of transmission of drug-resistant strains.

Keywords: tuberculosis, drug susceptibility, strain diversity, lineage, Ethiopia, spoligotyping

Procedia PDF Downloads 350
243 Rupture Termination of the 1950 C. E. Earthquake and Recurrent Interval of Great Earthquake in North Eastern Himalaya, India

Authors: Rao Singh Priyanka, Jayangondaperumal R.

Abstract:

The Himalayan active fault system has the potential to generate great earthquakes in the future, posing a major threat to people in the Himalaya and adjacent regions. Quantitative evaluation of accumulated and released interseismic strain is crucial to assess the magnitude and spatio-temporal variability of future great earthquakes along the Himalayan arc. To mitigate the destruction and hazards associated with such earthquakes, it is important to understand their recurrence cycle. The eastern Himalayan and Indo-Burman plate boundary systems accommodate oblique convergence across two orthogonal plate boundaries, resulting in a zone of distributed deformation both within and away from the plate boundary and clockwise rotation of fault-bounded blocks. This seismically active region has a poorly documented historical archive of past large earthquakes. Paleoseismological studies confirm surface rupture evidence of great continental earthquakes (Mw ≥ 8) along the Himalayan Frontal Thrust (HFT), which, along with geodetic studies, collectively provides crucial information to understand and assess the seismic potential. These investigations reveal the rupture of three-quarters of the HFT during great events since medieval times, but the timing of events remains debated owing to unclear evidence, neglect of transverse segment boundaries, and a lack of detailed studies. Recent paleoseismological investigations in the eastern Himalaya and Mishmi ranges confirm primary surface ruptures of the 1950 C.E. great earthquake (M>8). However, a seismic gap exists between the 1714 C.E. and 1950 C.E. Assam earthquakes, a segment that has not slipped since the 1697 C.E. event. Unlike the latest large blind 2015 Gorkha earthquake (Mw 7.8), the 1950 C.E. event was not triggered by the large 1947 C.E. event that occurred near the western edge of the great upper Assam rupture. 
Moreover, the western segment of the eastern Himalaya has not witnessed any surface-breaking earthquake along the HFT for over the past 300 yr. The frontal fault excavations reveal that during the 1950 earthquake, a ~3.1-m-high scarp along the HFT was formed by a co-seismic slip of 5.5 ± 0.7 m at Pasighat in the eastern Himalaya, while a 10-m-high scarp at Kamlang Nagar along the Mishmi Thrust in the Eastern Himalayan Syntaxis is the outcome of a dip-slip displacement of 24.6 ± 4.6 m along a 25 ± 5°E dipping fault. This event thus ruptured two orthogonal fault systems in an oblique thrust faulting mechanism. Approximately 130 km west of the Pasighat site, the Himebasti village witnessed two earthquakes, the historical 1697 Sadiya earthquake and the 1950 event, with a cumulative dip-slip displacement of 15.32 ± 4.69 m. At the Niglok site, Arunachal Pradesh, a cumulative slip of ~12.82 m during at least three events since pre-19585 B.P. has produced a ~6.2-m-high scarp, while the youngest scarp, ~2.4 m high, was produced during the 1697 C.E. event. The site preserves two deformational events along the eastern HFT, suggesting serial ruptures at an interval of ~850 years, while successive surface-rupturing earthquakes are lacking in the Mishmi Range, precluding an estimate of the recurrence cycle there.
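The slip figures above follow from scarp height and fault dip through the standard dip-slip relation slip = height / sin(dip). A minimal sketch (not the authors' code) checking this against the Kamlang Nagar numbers quoted in the abstract:

```python
# Dip-slip displacement implied by a vertical scarp over a dipping fault.
import math

def dip_slip(scarp_height_m, dip_deg):
    """Slip along the fault plane producing a scarp of the given height."""
    return scarp_height_m / math.sin(math.radians(dip_deg))

# ~10 m scarp on the ~25 deg Mishmi Thrust -> ~23.7 m of slip, inside the
# quoted 24.6 +/- 4.6 m range.
print(f"{dip_slip(10.0, 25.0):.1f} m")
```

The ±5° dip uncertainty alone moves the estimate between roughly 20 and 29 m, consistent with the quoted error bar.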

Keywords: paleoseismology, surface rupture, recurrence interval, Eastern Himalaya

Procedia PDF Downloads 64
242 Active Ageing a Way Forward to Healthy Ageing Among the Rural Elderly Women

Authors: Hannah Evangeline Sangeetha

Abstract:

Ageing is an inevitable stage in the life span of an individual. India's old-age population has increased from 19 million in 1947 to 100 million in the 21st century. The United Nations World Population Ageing report notes that the grey population has increased from 9.2% in 1990 to 11.7% in 2013 and is expected to triple by the year 2050, growing from 737 million to over 2 billion persons 60 years of age and older. Ageing is a period of physical, mental and social decline which brings a host of challenges to the individual and the family; hence it requires attention at the micro, mezzo and macro levels of society. The concepts of healthy and successful ageing are being used to help people change their negative attitudes towards ageing. This perspective is important in making people realize their potential and in bringing about a change in the minds of senior citizens as well as of society. The objective of this study was to understand the level of active ageing among rural elderly women and its impact on their quality of life. Using the census method, 330 elderly women from 12 villages of Sriperumbudur associated with the mobile medical care unit of HelpAge India were interviewed. The study revealed the following findings: most respondents were young-old, between 60 and 75 years of age. All three major religious groups were represented; 85.5 percent were Hindus. A majority of the respondents (73.3 percent) had no education. It was interesting that a majority of the respondents were self-reliant (83.94 percent), and 82.73 percent were independent and carried out the activities of daily living without any support from their families. 76.9 percent of the senior women worked based on their competencies, and 75.5 percent were involved in many activities every day, including their occupation and household chores, which enabled them to be physically active. 
The chi-square values show a significant association between the overall active-ageing score and religion and the number of members in the family. The other demographic variables, namely age, occupation, income, marital status, age at marriage, number of children in the family, and socio-economic status, were not significantly associated with the overall active-ageing score. A p-value of 0.032 showed that social network and self-reliance are significantly associated. The study, surprisingly, shows that most women enjoyed freedom and independence in their families, which is a positive indicator of active ageing.
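The association tests described above are chi-square tests of independence on contingency tables. A minimal sketch of the procedure, using hypothetical counts (not the study's data) for religion versus active-ageing score:

```python
# Chi-square test of association between a demographic variable and the
# active-ageing score. The table below is illustrative only.
from scipy.stats import chi2_contingency

# Rows: religion groups; columns: low / high active-ageing score.
table = [[40, 242],   # Hindu (hypothetical counts)
         [10, 14],    # Christian
         [12, 12]]    # Muslim

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Association is significant at the 5% level")
```

With a 3x2 table the test has (3-1)(2-1) = 2 degrees of freedom; `expected` holds the counts implied by independence.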

Keywords: active ageing, quality of life, independence, self reliance

Procedia PDF Downloads 120
241 Feasibility Study and Experiment of On-Site Nuclear Material Identification in Fukushima Daiichi Fuel Debris by Compact Neutron Source

Authors: Yudhitya Kusumawati, Yuki Mitsuya, Tomooki Shiba, Mitsuru Uesaka

Abstract:

After the Fukushima Daiichi nuclear power reactor incident, a large amount of unaccounted-for nuclear fuel debris remains in the reactor core area, which is subject to safeguards and criticality safety concerns. Before precise analysis is performed, preliminary on-site screening and mapping of nuclear debris activity need to be carried out to provide reliable data for nuclear debris mass-extraction planning. Through a collaboration project with the Japan Atomic Energy Agency, an on-site nuclear debris screening system using dual-energy X-ray inspection and neutron energy resonance analysis has been established. Using a compact and mobile pulsed neutron source built around a 3.95 MeV X-band electron linac, coupled with tungsten as an electron-to-photon converter and beryllium as a photon-to-neutron converter, short-distance neutron Time of Flight measurements can be performed. Experimental results show this system can measure the neutron energy spectrum up to the 100 eV range with only a 2.5-meter Time of Flight path, owing to the X-band accelerator's short pulse. With this, on-site neutron Time of Flight measurement can be used to identify the nuclear debris isotope contents through Neutron Resonance Transmission Analysis (NRTA). Some preliminary NRTA experiments have been done with a tungsten sample as dummy nuclear debris material, whose isotope Tungsten-186 has a resonance absorption energy (15 eV) close to that of Uranium-238. The results obtained show that this system can detect energy absorption in the resonance neutron region within 1-100 eV. It can also detect multiple elements in a sample at once, as an experiment using a combined sample of indium, tantalum, and silver showed, making it feasible to identify debris containing mixed materials. This compact neutron Time of Flight measurement system is a great complement to the dual-energy X-ray Computed Tomography (CT) method, which can identify atomic number quantitatively but only with 1-mm spatial resolution and large error bars. 
The combination of these two measurement methods will enable on-site nuclear debris screening at the Fukushima Daiichi reactor core area, providing the data for nuclear debris activity mapping.
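The Time of Flight principle behind the setup is the non-relativistic conversion E = ½ m (L/t)², with L the flight path and t the measured flight time. A short sketch of the conversion for the 2.5 m path quoted above:

```python
# Non-relativistic neutron Time of Flight <-> kinetic energy conversion
# for a short (2.5 m) flight path, as in the NRTA setup described above.
import math

M_N = 1.674927e-27   # neutron mass, kg
EV  = 1.602177e-19   # joules per electron-volt

def tof_to_energy_ev(flight_path_m, tof_s):
    """Kinetic energy (eV) of a neutron covering flight_path_m in tof_s."""
    v = flight_path_m / tof_s
    return 0.5 * M_N * v**2 / EV

def energy_to_tof_s(flight_path_m, energy_ev):
    """Flight time (s) for a neutron of energy_ev over flight_path_m."""
    v = math.sqrt(2.0 * energy_ev * EV / M_N)
    return flight_path_m / v

# A 100 eV neutron crosses the 2.5 m path in roughly 18 microseconds,
# a 1 eV neutron in roughly 181 microseconds.
for e in (1.0, 15.0, 100.0):
    t = energy_to_tof_s(2.5, e)
    print(f"{e:6.1f} eV -> {t * 1e6:7.1f} us")
```

The microsecond-scale flight times explain why a short accelerator pulse is essential: the pulse width sets the timing, and hence energy, resolution over such a short path.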

Keywords: neutron source, neutron resonance, nuclear debris, time of flight

Procedia PDF Downloads 214
240 Analyzing the Perception of Social Networking Sites as a Learning Tool among University Students: Case Study of a Business School in India

Authors: Bhaskar Basu

Abstract:

Universities and higher education institutes are finding it increasingly difficult to engage students fruitfully through traditional pedagogic tools. Web 2.0 technologies comprising social networking sites (SNSs) offer a platform for students to collaborate and share information, thereby enhancing their learning experience. Despite the potential and reach of SNSs, their use has been limited in academic settings promoting higher education. The purpose of this paper is to assess the perception of social networking sites among business school students in India and analyze their role in enhancing the quality of student experiences in a business school, leading to the proposal of an agenda for future research. In this study, more than 300 students of a reputed business school were involved in a survey of their preferences among different social networking sites and their perceptions of and attitudes towards these sites. A questionnaire with three major sections was designed, validated and distributed among a sample of students, the research method being descriptive in nature. Crucial questions were put to the students concerning time commitment, reasons for usage, nature of interaction on these sites, and the propensity to share information leading to direct and indirect modes of learning. The survey was supplemented with focus group discussions to analyze the findings. The paper notes the resistance to the adoption of new technology by a section of business school faculty, who are staunch supporters of classical "face-to-face" instruction. In conclusion, social networking sites like Facebook and LinkedIn provide new avenues for students to express themselves and to interact with one another. Universities could take advantage of the new ways in which students communicate with one another. Although interactive educational options such as Moodle exist, social networking sites are rarely used for academic purposes. 
Using this medium opens new avenues of academically oriented interaction, where faculty could discover more about students' interests, and students, in turn, might express and develop hitherto unknown intellectual facets of their lives. This study also highlights the enormous potential of mobile phones as a tool for "blended learning" in business schools going forward.

Keywords: business school, India, learning, social media, social networking, university

Procedia PDF Downloads 239
239 Quantum Information Scrambling and Quantum Chaos in Silicon-Based Fermi-Hubbard Quantum Dot Arrays

Authors: Nikolaos Petropoulos, Elena Blokhina, Andrii Sokolov, Andrii Semenov, Panagiotis Giounanlis, Xutong Wu, Dmytro Mishagli, Eugene Koskin, Robert Bogdan Staszewski, Dirk Leipold

Abstract:

We investigate entanglement and quantum information scrambling (QIS) through the example of a many-body extended and spinless effective Fermi-Hubbard model (EFHM and e-FHM, respectively) that describes a special type of quantum dot array provided by Equal1 Labs' silicon-based quantum computer. The concept of QIS is used in the framework of quantum information processing by quantum circuits and quantum channels. In general, QIS manifests as the delocalization of quantum information over the entire quantum system; more compactly, information about the input cannot be obtained by local measurements of the output of the quantum system. In our work, we first introduce the concept of quantum information scrambling and its connection with the 4-point out-of-time-order (OTO) correlators. In order to have a quantitative measure of QIS, we use the tripartite mutual information, along similar lines to previous works, which measures the mutual information between four different spacetime partitions of the system; we apply it to the Transverse Field Ising (TFI) model to quantify the dynamical spreading of quantum entanglement and information in the system. Then, we investigate scrambling in the quantum many-body extended Hubbard model with external magnetic field Bz and spin-spin coupling J for both uniform and thermal quantum channel inputs and show that it scrambles for specific external tuning parameters (e.g., tunneling amplitudes, on-site potentials, magnetic field). In addition, we compare different Hilbert space sizes (different numbers of qubits) and show the qualitative and quantitative differences in quantum scrambling as we increase the number of quantum degrees of freedom in the system. Moreover, we find a "scrambling phase transition" at a threshold temperature in the thermal case, that is, the temperature at which the channel starts to scramble quantum information. 
Finally, we make comparisons to the TFI model and highlight the key physical differences between the two systems and mention some future directions of research.
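The tripartite mutual information used above is I3(A:C:D) = I(A:C) + I(A:D) − I(A:CD), computed from von Neumann entropies of reduced density matrices; strongly negative I3 is the usual diagnostic of scrambling. A minimal numerical sketch (not the authors' code) for a 4-qubit pure state, with single-qubit partitions A, B, C, D:

```python
# Tripartite mutual information I3 for a 4-qubit pure state.
import numpy as np

N = 4  # number of qubits

def reduced_density(psi, keep):
    """Reduced density matrix of a pure N-qubit state over the qubits in `keep`."""
    rest = [q for q in range(N) if q not in keep]
    m = psi.reshape([2] * N).transpose(list(keep) + rest)
    m = m.reshape(2 ** len(keep), -1)
    return m @ m.conj().T

def entropy(rho):
    """Von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def mutual_info(psi, x, y):
    return (entropy(reduced_density(psi, x)) + entropy(reduced_density(psi, y))
            - entropy(reduced_density(psi, x + y)))

def tripartite_mi(psi, a=[0], c=[2], d=[3]):
    return mutual_info(psi, a, c) + mutual_info(psi, a, d) - mutual_info(psi, a, c + d)

# A product state carries no correlations, so I3 vanishes.
product = np.zeros(2 ** N)
product[0] = 1.0
print(tripartite_mi(product))  # ~0.0
```

Evolving an initial state under a scrambling Hamiltonian and tracking I3 over time reproduces the kind of diagnostic the abstract describes, with larger qubit counts handled the same way at exponentially growing cost.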

Keywords: condensed matter physics, quantum computing, quantum information theory, quantum physics

Procedia PDF Downloads 68
238 Design of Woven Fabric with Increased Sound Transmission Loss Property

Authors: U. Gunal, H. I. Turgut, H. Gurler, S. Kaya

Abstract:

Many problems are emerging and intensifying with rapid population growth in the world. As people's quality of life rises, acoustic comfort has become an important feature in the textile industry. To meet these expectations and to survive in challenging competitive market conditions without compromising customers' product quality expectations, it has become a necessity for textile manufacturers to bring functionality to their products, and researching and developing the materials and processes that provide such functionality is inevitable. The noise we encounter almost everywhere in daily life, in the street, at home and at work, is one of the problems the textile industry is working on; it brings with it many health problems, both mental and physical. Noise control studies have therefore become more important. However, the materials currently used in noise control are often not sufficient to reduce the noise level, and the fabrics used in acoustic studies in the textile industry do not perform adequately relative to their weight and high cost, so acoustic textile products cannot yet be used in daily life. In this study, the approaches used in noise control and building acoustics in the literature were analyzed, and a textile product aiming at the highest attainable damping value was designed, manufactured, and tested. Optimum values were obtained by using samples of different materials that may affect the performance of the acoustic material. Acoustic measurement methods were applied to verify the acoustic performance exhibited by the parameters and by the designed three-dimensional structure at different values. 
In the measurements made in the study, both an impedance tube conforming to the relevant standards and a device designed within the study for characterizing the material's response to different noise types were used. In addition, sound recordings of noise types encountered in daily life were taken and applied to the acoustic absorbent fabric with the aid of the device, and the feasibility of the results and the commercial viability of the product were examined. The MATLAB numerical computing language and its libraries were used for the frequency and sound power analyses in the study.
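The frequency and sound-power analysis mentioned above was done in MATLAB; a hypothetical equivalent sketch in Python (not the authors' code), estimating the power spectral density of a recorded noise signal and its overall band level:

```python
# Frequency / band-level analysis of a noise recording via Welch's method.
# The signal below is synthetic: a 1 kHz tone plus noise stands in for a
# real recording applied to the fabric.
import numpy as np
from scipy.signal import welch

fs = 16000                       # sampling rate, Hz (assumed)
t = np.arange(fs) / fs           # 1 s of signal
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 1000 * t) + 0.1 * rng.standard_normal(fs)

f, pxx = welch(x, fs=fs, nperseg=2048)       # power spectral density
peak = f[np.argmax(pxx)]                     # dominant frequency
band_power = np.sum(pxx) * (f[1] - f[0])     # integrate the PSD
level_db = 10 * np.log10(band_power)         # overall level, dB re 1
print(f"peak at {peak:.0f} Hz, level {level_db:.1f} dB")
```

Comparing such levels for signals recorded with and without the fabric in the path gives a simple estimate of the attenuation it provides per frequency band.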

Keywords: acoustic, egg crate, fabric, textile

Procedia PDF Downloads 85
237 Scoping Review of the Potential to Embed Mental Health Impact in Global Challenges Research

Authors: Netalie Shloim, Brian Brown, Siobhan Hugh-Jones, Jane Plastow, Diana Setiyawati, Anna Madill

Abstract:

In June 2021, the World Health Organization launched its guidance and technical packages on community mental health services, stressing a human rights-based approach to care. This initiative stems from an increasing acknowledgment of the role mental health plays in achieving the Sustainable Development Goals. Nevertheless, mental health remains a relatively neglected research area, and the estimates for untreated mental disorders in low- and middle-income countries (LMICs) are as high as 78% for adults. Moreover, the development sector and research programs too often sideline mental health as a privilege in the face of often immediate threats to life and livelihood. As a way of addressing this problem, this study aimed to examine past or ongoing GCRF projects to see if there were opportunities where mental health impact could have been achieved without compromising a study's main aim and without overburdening a project. Projects funded by the UKRI Global Challenges Research Fund (GCRF) were analyzed. This program was initiated in 2015 to support cutting-edge research that addresses the challenges faced by developing countries. By the end of May 2020, a total of 15,279 projects had been funded, of which only 3% had an explicit mental health focus. A sample of 36 non-mental-health-focused projects was then selected for diversity across research council, challenge portfolio and world region. Each of these 36 projects was coded by two coders for opportunities to embed mental health impact. To facilitate coding, the literature was inspected for dimensions relevant to LMIC settings. Three main psychological dimensions were identified: promote a positive sense of self; promote positive emotions, safe expression and regulation of challenging emotions, coping strategies, and help-seeking; and facilitate skills development. Three main social dimensions were also identified: facilitate community building; preserve sociocultural identity; and support community mobilization. 
Coding agreement was strong on missed opportunities for mental health impact on the three social dimensions: support community mobilization (92%), facilitate community building (83%), and preserve sociocultural identity (70%). Coding agreement was reasonably strong on the three psychological dimensions: promote positive emotions (67%), facilitate skills development (61%), and promote a positive sense of self (58%). In order of frequency, the agreed perceived opportunities, from highest to lowest, are: support community mobilization, facilitate community building, facilitate skills development, promote a positive sense of self, promote positive emotions, and preserve sociocultural identity. All projects were considered by at least one coder to have an opportunity to support community mobilization and to facilitate skills development. The findings support the view that there were opportunities to embed mental health impact in research across the range of development sectors, and identify which missed opportunities are most frequent. Hence, mainstreaming mental health has huge potential to tackle the lack of priority and funding it has traditionally attracted. The next steps are to understand the barriers to mainstreaming mental health and to work together to overcome them.

Keywords: GCRF, mental health, psychosocial wellbeing, LMIC

Procedia PDF Downloads 150
236 A Method and System for Secure Authentication Using One Time QR Code

Authors: Divyans Mahansaria

Abstract:

User authentication is an important security measure for protecting confidential data and systems. However, vulnerabilities in the authentication process have significantly increased. Thus, appropriate mechanisms must be deployed during authentication to safeguard users from attack. The proposed solution implements a novel authentication mechanism to counter various forms of security breach, including phishing, Trojan horses, replay, key logging, asterisk logging, shoulder surfing, brute-force search and others. A QR code (Quick Response Code) is a type of matrix barcode or two-dimensional barcode that can be used for storing URLs, text, images and other information. In the proposed solution, for each new authentication request, a QR code is dynamically generated and presented to the user. A piece of generic information is mapped to a plurality of elements and stored within the QR code. The mapping of the generic information to the plurality of elements is randomized at each new login, so the QR code generated for each authentication request is for one-time use only. In order to authenticate into the system, the user decodes the QR code using any QR code decoding software, which needs to be installed on a handheld mobile device such as a smartphone or personal digital assistant (PDA). On decoding the QR code, the user is presented with a mapping between the generic piece of information and the plurality of elements, from which the user derives cipher secret information corresponding to his/her actual password. In place of the actual password, the user then uses this cipher secret information to authenticate into the system. The authentication terminal receives the cipher secret information and uses a validation engine to decipher it. If the entered secret information is correct, the user is granted access to the system. 
A usability study has been carried out on the proposed solution, and the new authentication mechanism was found to be easy to learn and adapt to. A mathematical analysis of the time taken to carry out a brute-force attack on the proposed solution showed that it is almost completely resistant to such attacks. Today's standard methods of authentication are subject to a wide variety of software, hardware, and human attacks. The proposed scheme can be very useful in controlling the various types of authentication-related attacks, especially in a networked computer environment where the use of a username and password for authentication is common.
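The core idea above, a per-session random mapping that turns the static password into one-time cipher text, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; in the real scheme the mapping would be delivered inside the QR code rather than held in memory:

```python
# One-time substitution mapping: the server randomizes the mapping for every
# login, the user submits the cipher of their password, never the password.
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits

def new_session_mapping():
    """Fresh one-time substitution table; encoded into a QR code in the scheme."""
    shuffled = list(ALPHABET)
    secrets.SystemRandom().shuffle(shuffled)
    return dict(zip(ALPHABET, shuffled))

def encipher(password, mapping):
    """What the user does by reading the decoded QR mapping."""
    return "".join(mapping[c] for c in password)

def server_validate(stored_password, mapping, submitted_cipher):
    """The validation engine deciphers with the session mapping and compares."""
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse[c] for c in submitted_cipher) == stored_password

mapping = new_session_mapping()           # changes on every authentication request
cipher = encipher("s3cretpass", mapping)  # what the user actually types
print(server_validate("s3cretpass", mapping, cipher))  # True
```

Because the mapping is discarded after one use, a replayed or shoulder-surfed cipher is useless at the next login, which is what defeats the attack classes listed in the abstract.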

Keywords: authentication, QR code, cipher / decipher text, one time password, secret information

Procedia PDF Downloads 247
235 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria

Authors: O. O. Aiyelokun, O. A. Agbede

Abstract:

Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization and climate change; hence, for models to be adopted as decision support tools for sustainable groundwater management, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, adopting satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as a substitute for ground-based data for computing boundary conditions, by determining whether ground- and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with the CFSR data using goodness-of-fit (GOF) statistics. The study revealed that, based on mean absolute error (MAE), coefficient of correlation (r) and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity and rainfall had a high index of agreement (d) and ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. 
The study concluded that satellite-based data such as the CFSR dataset should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of the river basins are partially gauged and characterized by long gaps in hydro-meteorological records.
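The goodness-of-fit statistics named above (MAE, r, R², index of agreement d, and rSD) can be sketched as follows for a pair of monthly series; the rainfall values are hypothetical, not the study's data:

```python
# Goodness-of-fit statistics for comparing a satellite series against a
# ground-based series: MAE, Pearson's r, R^2, Willmott's index of
# agreement d, and the ratio of standard deviations rSD.
import numpy as np

def gof(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    mae = np.mean(np.abs(sim - obs))
    r = np.corrcoef(obs, sim)[0, 1]
    # Willmott (1981) index of agreement, bounded by 1 for a perfect fit.
    d = 1 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    rsd = sim.std() / obs.std()
    return {"MAE": mae, "r": r, "R2": r ** 2, "d": d, "rSD": rsd}

ground = [120.0, 95.0, 60.0, 12.0, 3.0, 0.5]   # hypothetical station rainfall, mm
cfsr   = [110.0, 99.0, 64.0, 15.0, 5.0, 1.0]   # hypothetical CFSR rainfall, mm
print(gof(ground, cfsr))
```

Values of d and rSD close to 1 with low MAE are what the study reads as "fits well" for a variable.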

Keywords: boundary condition, goodness of fit, groundwater, satellite-based data

Procedia PDF Downloads 99
234 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver

Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera

Abstract:

In this work, a finite volume fluid flow solver is coupled with a discrete element method module for simulating the dynamics of free and elastic bodies in interaction with the fluid and with one another. The open-source fluid flow solver, caffa3d.MBRi, includes the capability to work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which each body is moving. The set of overlapping finer grids can be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, the fourth-order Runge-Kutta solver was found to be the best tool in terms of performance, although it requires a finer grid than the fluid solver to make the system converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, are well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. 
Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
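The lumped-mass integration described above can be sketched with a fourth-order Runge-Kutta step; here a single mass on an elastic line with linear drag stands in for the net (an assumed toy model, not the caffa3d.MBRi code, with made-up K, M, C values):

```python
# Classic RK4 step advancing [position, velocity] of one lumped mass held
# by an elastic line (linear spring) with linear drag.
import numpy as np

K, M, C = 50.0, 1.0, 0.2   # stiffness, mass, drag coefficient (assumed)

def rhs(state):
    """d/dt of [x, v]: velocity, then spring + drag acceleration."""
    x, v = state
    return np.array([v, (-K * x - C * v) / M])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.1, 0.0])   # released from rest, 0.1 m offset
for _ in range(1000):          # dt kept small relative to the spring period
    state = rk4_step(state, 1e-3)
print(state)                   # damped oscillation decaying toward rest
```

The real solver applies the same step to the full vector of net nodes, with drag interpolated from the fluid; the need for a time step fine enough to resolve the stiff elastic lines is what drives the extra computing cost noted in the abstract.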

Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids

Procedia PDF Downloads 391
233 Colloids and Heavy Metals in Groundwaters: Tangential Flow Filtration Method for Study of Metal Distribution on Different Sizes of Colloids

Authors: Jiancheng Zheng

Abstract:

When metals are released into water from mining activities, they undergo chemical, physical and biological changes and may then become more mobile and transportable along the waterway from their original sites. Natural colloids, including both organic and inorganic entities, occur naturally in any aquatic environment, with sizes in the nanometer range. Natural colloids in a water system play an important, and quite often a key, role in binding and transporting compounds. When assessing and evaluating metals in natural waters, their sources, mobility, fate, and distribution patterns in the system are the major concerns from the point of view of assessing environmental contamination and pollution during resource development. There are a few ways to quantify colloids and, accordingly, to study how metals distribute across different sizes of colloids. Current research results show that the presence of colloids can enhance the transport of some heavy metals in water, while heavy metals may also influence the transport of colloids when cations in the water system alter the colloids and/or the ionic strength of the water changes. Therefore, studies of the relationship between different sizes of colloids and different metals in a water system are needed, as natural colloids are complex mixtures of organic, inorganic and biological materials, and their stability can be sensitive to changes in shape, phase, hardness and functionality caused by coagulation, deposition and other chemical, physical, and biological processes. Because the adsorption of metal contaminants on colloid surfaces is closely related to colloid properties, it is desirable to fractionate water samples as soon as possible after collection in the natural environment, in order to avoid changes to the samples during transportation and storage. 
For this reason, this study carried out groundwater sample processing in the field, using Prep/Scale tangential flow filtration systems with 3-level cartridges (1 kDa, 10 kDa and 100 kDa). Groundwater samples from seven sites at Fort McMurray, Alberta, Canada, were fractionated during the 2015 field sampling season. All samples were processed within 3 hours of collection. Preliminary results show that although the distribution pattern of metals on colloids may vary between samples taken from different sites, some elements often tend to associate with larger colloids (such as Fe and Re), some with finer colloids (such as Sb and Zn), while some occur mainly in the dissolved form (such as Mo and Be). This information is useful for evaluating and projecting the fate and mobility of different metals in groundwaters and possibly in environmental water systems in general.

Keywords: metal, colloid, groundwater, mobility, fractionation, sorption

Procedia PDF Downloads 321
232 Raising the Property Provisions of the Topographic Located near the Locality of Gircov, Romania

Authors: Carmen Georgeta Dumitrache

Abstract:

Terrestrial measurement science studies the totality of operations and computations carried out to represent the land surface on a plan or map, in a specific cartographic projection and at a topographic scale. With the development of society, land measurements have evolved, being tied both to utilitarian goals of economic activity and to the scientific purpose of determining the form and dimensions of the Earth. Field measurements, data processing, and the proper representation of planimetry and landforms on drawings and maps require topographic and geodetic instruments, computation, and graphical reporting, which in turn demand theoretical and practical knowledge from different areas of science and technology. Proper practical use of topographic and geodetic instruments, designed to measure angles and distances precisely, requires knowledge of geometric optics, precision mechanics, strength of materials, and more. Processing the results of field measurements requires calculation methods based on geometry, trigonometry, algebra, mathematical analysis and computer science. To illustrate these topographic measurements, a survey was carried out for a property located near the locality of Gircov, Romania. The total surface of the plan sheet (T30) and of each parcel/plot was determined, and the coordinates of a parcel were traced in the field.
The purposes of the planimetric survey were: the exact determination of the bounding surface; the analytical calculation of the surface area; comparison of the determined surface with that registered in the property documents; drawing up a location and delineation plan, with adjacencies and contour distances, highlighting the parcels comprising the property; drawing up a location and delineation plan, with adjacencies and contour distances, for a parcel from Dave; and tracing in the field the outline points of that parcel. The ultimate goal of this work was to determine and represent the surface, and also to detach a plot from the total surface while respecting the surface condition imposed by the deed of the beneficiary's property.
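The analytical calculation of a parcel's surface from surveyed point coordinates is conventionally done with the shoelace (Gauss) formula; a minimal sketch, using hypothetical parcel corners rather than the actual Gircov coordinates:

```python
# Illustrative sketch (not the authors' software): analytical surface
# calculation from surveyed corner coordinates via the shoelace formula.

def parcel_area(points):
    """Area of a simple (non-self-intersecting) polygon given its vertices
    as (x, y) tuples, in the squared units of the coordinates (e.g. m^2)."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap back to the first vertex
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical parcel corners in a local metric coordinate system:
corners = [(0.0, 0.0), (40.0, 0.0), (40.0, 25.0), (0.0, 25.0)]
print(parcel_area(corners))  # 1000.0 (m^2)
```

Comparing such an analytically computed area against the area registered in the property documents is exactly the check described in the abstract.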

Keywords: topography, surface, coordinate, modeling

Procedia PDF Downloads 231
231 Development of a Sprayable Piezoelectric Material for E-Textile Applications

Authors: K. Yang, Y. Wei, M. Zhang, S. Yong, R. Torah, J. Tudor, S. Beeby

Abstract:

E-textiles are traditional textiles with integrated electronic functionality. They are an emerging innovation with numerous applications in fashion, wearable computing, health and safety monitoring, and the military and medical sectors. The piezoelectric effect is a widespread and versatile transduction mechanism used in sensor and actuator applications: piezoelectric materials produce electric charge when stressed and, conversely, deform mechanically when an electric field is applied across them. Lead Zirconate Titanate (PZT) is a widely used piezoceramic that has been applied to e-textiles through screen printing, electrospinning and hydrothermal synthesis. This paper explores an alternative fabrication process: spray coating. Spray coating is a straightforward and cost-effective fabrication method applicable to both flat and curved surfaces; it can also be applied selectively by spraying through a stencil, which allows the required design to be realised on the substrate. This work developed a sprayable PZT-based piezoelectric ink consisting of a binder (Fabink-Binder-01), PZT powder (80 % at 2 µm and 20 % at 0.8 µm) and acetone as a thinner; the optimised PZT/binder weight ratio is 10:1. The components were mixed using a SpeedMixer DAC 150. The fabrication process is as follows: 1) Screen print a UV-curable polyurethane interface layer on the textile to create a smooth surface. 2) Spray one layer of a conductive silver polymer ink through a pre-designed stencil and dry at 90 °C for 10 minutes to form the bottom electrode. 3) Spray three layers of the PZT ink through a pre-designed stencil, drying at 90 °C for 10 minutes per layer, to form a PZT layer with a total thickness of ~250 µm. 4) Spray one layer of the silver ink through a pre-designed stencil on top of the PZT layer and dry at 90 °C for 10 minutes to form the top electrode.
The domains of the PZT elements were aligned by poling the material at an elevated temperature under a strong electric field. A d33 of 37 pC/N was achieved after poling at 90 °C for 6 minutes under an electric field of 3 MV/m. The application of the piezoelectric textile was demonstrated by fabricating a pressure sensor that switches an LED on and off; other potential e-textile applications include motion sensing, energy harvesting, force sensing and buzzers.
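As a rough illustration of the reported sensitivity (not a calculation from the paper), the charge generated by a d33-mode sensor under a normal force follows Q = d33 · F; the force below is an assumed value, not a figure from this work:

```python
# Back-of-the-envelope sketch of the pressure-sensor behaviour: a poled
# piezoelectric layer with d33 = 37 pC/N generates charge Q = d33 * F
# under a normal force F. The force is a hypothetical press for illustration.

d33 = 37e-12          # piezoelectric coefficient, C/N (37 pC/N after poling)
force = 10.0          # N, e.g. a light press on the sensor (assumed value)

charge = d33 * force  # generated charge in coulombs
print(f"Q = {charge * 1e12:.0f} pC")  # 370 pC
```

A charge of this order is readily detected by a charge amplifier, which is consistent with using the printed element as an on/off pressure switch.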

Keywords: piezoelectric, PZT, spray coating, pressure sensor, e-textile

Procedia PDF Downloads 444
230 Human Factors as the Main Reason of the Accident in Scaffold Use Assessment

Authors: Krzysztof J. Czarnocki, E. Czarnocka, K. Szaniawska

Abstract:

The main goal of the research project is the formulation of the Scaffold Use Risk Assessment Model (SURAM), developed for the assessment of risk levels at various construction process stages and across various work trades. In 2016, the project received financing from the National Centre for Research and Development under research grant PBS3/A2/19/2015. The data, calculations and analyses discussed in this paper were produced during the first and second phases of the PBS3/A2/19/2015 project. Method: One arm of the research project assesses workers' visual concentration on sight zones, as well as inadequate observation of risky visual points. In this part of the research, a mobile eye-tracker was used to monitor the workers' observation zones. SMI Eye Tracking Glasses are a tool that allows real-time analysis of where eyesight is concentrated and, consequently, the construction of a map of a worker's gaze concentration during a shift. While the project is still running, 64 construction sites have been examined so far, and more than 600 workers have taken part in the experiment, including monitoring of typical parameters of the work regimen, workload, microclimate, sound, vibration, etc. The full equipment can also support more advanced analyses. With this technology, we verified not only the main focus of workers' eyes while working on or next to scaffolding, but also which changes in the surrounding environment during a shift influenced their concentration. The study showed that workers' gaze was concentrated on one of the three work-related areas for at most 45.75 % of the shift time; workers appear to be distracted by noisy vehicles or people nearby. Contrary to our initial assumptions and other authors' findings, we observed that the reflective parts of the scaffolds were not better recognised by workers at their direct workplaces.
We noticed that the red curbs were the only well-recognised elements, and only on a very few scaffolds; surprisingly, across a large number of samples we did not record any significant concentration of gaze on those curbs. Conclusion: We found the eye-tracking method useful for constructing the risk perception and worker's behaviour sub-modules of the SURAM model. We also found that a worker's initial stress and the visual conditions of the work appear to be more predictive for assessing a developing risky situation or an accident than other parameters relating to the work environment.
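The kind of gaze-in-area bookkeeping behind the 45.75 % figure can be sketched as follows; the areas of interest and gaze samples are hypothetical illustrations, not the project's SMI data:

```python
# Minimal sketch (assumed data, not the project's SMI pipeline): the fraction
# of gaze samples that fall inside any of three work-related areas of
# interest (AOIs) in the scene-camera frame.

# Each AOI is an axis-aligned rectangle (x_min, y_min, x_max, y_max);
# gaze samples are (x, y) points, both in hypothetical pixel coordinates.
aois = [(0, 0, 100, 100), (150, 0, 250, 100), (0, 150, 100, 250)]
gaze = [(50, 50), (200, 20), (300, 300), (120, 120), (60, 200)]

def in_any_aoi(pt, aois):
    x, y = pt
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in aois)

on_task = sum(in_any_aoi(p, aois) for p in gaze)
print(f"{100 * on_task / len(gaze):.1f} % of samples on work-related areas")
```

Aggregating such per-sample hits over a full shift, and over many workers and sites, yields the shift-level concentration percentages reported above.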

Keywords: accident assessment model, eye tracking, occupational safety, scaffolding

Procedia PDF Downloads 177
229 Guiding Urban Development in a Traditional Neighbourhood: Case Application of Kolkata

Authors: Nabamita Nath, Sanghamitra Sarkar

Abstract:

Urban development in traditional neighbourhoods of cities is undergoing a sea change due to the imposition of irregular development patterns on a predominantly inclusive urban fabric. In recent times, traditional neighbourhoods of Kolkata have experienced irregular urban development, which has transformed their immediate urban character. The goal is to study and analyse the impact of new urban developments within traditional neighbourhoods of Kolkata and to establish development guidelines that balance the old with the new. Various cities, predominantly in third-world countries, are experiencing similar development patterns in their traditional neighbourhoods. Surveys of the existing literature on development patterns in such neighbourhoods have established 9 major parameters, viz. edge, movement, node, landmark, size-density, pattern-grain-texture, open spaces, urban spaces, urban form, and views-vistas of the neighbourhood. To evaluate the impact of urban development in traditional neighbourhoods of Kolkata, 3 different areas were selected chronologically based on their settlement patterns. The parameters established through the literature survey were applied to the selected areas to study and analyse the existing patterns of development. The main sources of this study included extensive on-site surveys, academic archives, census data, organisational records and informational websites. Applying the established parameters, 5 major conclusions were derived. Firstly, the pedestrian-friendly neighbourhoods of the city are becoming more car-centric, which has resulted in the loss of the interactive and social spaces that defined the cultural heritage of Kolkata. Secondly, the urban pattern, once composed of a dense and compact fabric, is gradually losing its character due to the incorporation of new building typologies. Thirdly, the new building typologies include gated communities with private open spaces, a stark departure from the existing built typology.
However, these open spaces have not contributed to the creation of inclusive public places for the community, which are a significant part of such heritage neighbourhood precincts. Fourthly, commercial zones that primarily developed along major access routes have now infiltrated these neighbourhoods; gated communities do not favour the formation of on-street commercial activity, generating haphazard development patterns. Lastly, individual residential buildings that reflected Indo-Saracenic and Neo-Gothic architectural styles are being converted into multi-storeyed residential apartments. As a result, the axes that created a definite visual identity for each neighbourhood are progressively following an irregular pattern, and the uniformity of the old skyline is gradually becoming inconsistent. The major issue currently is the threat that irregular urban development poses to the heritage zones and buildings of traditional neighbourhoods. The streets, lanes, courtyards, open spaces and buildings of old neighbourhoods imparted a unique cultural identity to the city, and this identity is disappearing with the emerging urban development patterns. It is concluded that specific guidelines for urban development should be regulated primarily on the basis of the existing urban form of traditional neighbourhoods. Such neighbourhood development strategies should be formulated for various cities of third-world countries to control irregular developments, thereby balancing heritage and development.

Keywords: heritage, Kolkata, traditional neighbourhood, urban development

Procedia PDF Downloads 153
228 Fast Transient Workflow for External Automotive Aerodynamic Simulations

Authors: Christina Peristeri, Tobias Berg, Domenico Caridi, Paul Hutcheson, Robert Winstanley

Abstract:

In recent years, the demand for rapid innovation in the automotive industry has led to the need for accelerated simulation procedures that retain a detailed representation of the simulated phenomena. The project's aim is to create a fast transient workflow for external aerodynamic CFD simulations of road vehicles. The geometry used was the SAE Notchback Closed Cooling DrivAer model, and the simulation results were compared with data from wind tunnel tests. Two types of mesh were generated for this study: one a mix of polyhedral cells near the surface and hexahedral cells away from it, the other an octree hex mesh with a rapid method of fitting to the surface. Three grid refinement levels were used for each mesh type, with the largest total cell count, for the octree mesh, close to 1 billion. A series of steady-state solutions was obtained on the three grid levels using a pseudo-transient coupled solver and a k-omega-based RANS turbulence model; a mesh-independent solution was found in all cases at the medium refinement level of 200 million cells. Stress-Blended Eddy Simulation (SBES), which uses a shielding function to switch explicitly between RANS and LES modes, was chosen for the transient simulations. A converged pseudo-transient steady-state solution was used to initialize the transient SBES run, which was set up with the SIMPLEC pressure-velocity coupling scheme to reach the fastest solution (on both CPU and GPU solvers). An important part of this project was the use of FLUENT's multi-GPU solver: a Tesla A100 GPU has been shown to be 8x faster than a 48-core Intel Skylake CPU system, leading to significant simulation speed-up compared with the traditional CPU solver. The current study used 4 Tesla A100 GPUs and 192 CPU cores. The combination of rapid octree meshing and GPU computing shows significant promise in reducing time and hardware costs for industrial-strength aerodynamic simulations.
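Under the assumption of near-linear scaling across the 4 GPUs (which the abstract does not benchmark), the reported figures imply roughly the following wall-time comparison against the 192-core CPU run:

```python
# Rough estimate of the hardware trade-off described above. The per-GPU 8x
# figure is the reported one; linear scaling across GPUs and CPU nodes is
# an assumption made here for illustration, not a benchmarked result.

cpu_cores = 192
cores_per_node = 48
gpus = 4
speedup_per_gpu = 8.0   # one A100 vs one 48-core CPU node (reported)

cpu_nodes = cpu_cores / cores_per_node      # 4 equivalent 48-core CPU nodes
gpu_equiv_nodes = gpus * speedup_per_gpu    # 32 CPU-node-equivalents
ratio = gpu_equiv_nodes / cpu_nodes
print(f"Estimated wall-time ratio, GPU run vs 192-core CPU run: 1:{ratio:.0f}")
```

Real scaling is typically sub-linear due to communication overhead, so this is an upper bound on the expected speed-up rather than a prediction.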

Keywords: CFD, DrivAer, LES, Multi-GPU solver, octree mesh, RANS

Procedia PDF Downloads 91
227 Assessment of the Change in Strength Properties of Biocomposites Based on PLA and PHA after 4 Years of Storage in a Highly Cooled Condition

Authors: Karolina Mazur, Stanislaw Kuciel

Abstract:

Polylactides (PLA) and polyhydroxyalkanoates (PHA) are the two groups of biodegradable and biocompatible thermoplastic polymers most commonly utilised in medicine and rehabilitation. The aim of this work is to determine the changes in strength properties and microstructure taking place in biodegradable polymer composites during long-term storage in a highly cooled environment (a freezer at -24 °C), and to make an initial assessment of the durability of such biocomposites when used as single-use elements of rehabilitation or medical equipment. It is difficult to find information on the feasibility of long-term storage of technical products made of PLA or PHA, but when these materials are used for products such as casings of hair dryers, laptops or mobile phones, it is safe to assume that, without storage in optimal conditions, their degradation may take even several years. SEM imaging and strength testing (tensile, bending and impact) were carried out, and the density and water sorption were determined for two polymers, PLA and PHA (NaturePlast PLE 001 and PHE 001), filled with cellulose fibres (corncob grain; Rehofix MK100, Rettenmaier & Söhne) at up to 10 and 20 % by mass. The biocomposites had been stored at -24 °C for 4 years. To establish the changes in strength properties and microstructure after such long storage, the results were compared with those of the same tests carried out 4 years earlier. The results show a significant change in the fracture mode of the PHA composite with corncob grain: from ductile, with a developed fracture surface, when tensile testing was performed directly after injection moulding, to a more brittle state after 4 years of storage. This is confirmed by the strength tests, in which a decrease in deformation at fracture is observed.
The research showed that it is possible to store medical devices made of PLA or PHA for a reasonably long time, as long as the required storage temperature is maintained. The decrease in mechanical properties found for PLA in tensile and bending tests was less than 10 % of the tensile strength, while the modulus of elasticity and the deformation at fracture rose slightly, which may indicate the beginning of degradation processes. The strength properties of PHA were even higher after 4 years of storage, although the decrease in deformation at fracture was significant, reaching as much as 40 %, which suggests a higher degradation rate than for PLA. In both cases, the addition of natural particles only slightly increased the biodegradation.
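The retention comparison above amounts to a simple percent-change calculation between the two test campaigns; the property values below are assumed for illustration, not measurements from this study:

```python
# Sketch of the before/after comparison reported above, with hypothetical
# property values: tensile strength (MPa) for PLA, strain at fracture for PHA,
# measured directly after moulding ("before") and after 4 years at -24 C ("after").

def percent_change(before, after):
    """Relative change in a property, as a signed percentage of the initial value."""
    return 100.0 * (after - before) / before

pla_strength = {"before": 52.0, "after": 48.0}   # assumed: < 10 % drop
pha_strain = {"before": 0.050, "after": 0.030}   # assumed: ~40 % drop

print(f"PLA tensile strength change: {percent_change(**pla_strength):+.1f} %")
print(f"PHA strain at fracture change: {percent_change(**pha_strain):+.1f} %")
```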

Keywords: biocomposites, PLA, PHA, storage

Procedia PDF Downloads 242