Search results for: fast charging
263 Production of Recombinant Human Serum Albumin in Escherichia coli: A Crucial Biomolecule for Biotechnological and Healthcare Applications
Authors: Ashima Sharma, Tapan K. Chaudhuri
Abstract:
Human Serum Albumin (HSA) is one of the most in-demand therapeutic proteins, with immense biotechnological applications. The current source of HSA is human blood plasma. Blood is a limited and unsafe source, as it carries the risk of contamination by various blood-derived pathogens. This issue has led to the exploitation of various hosts with the aim of obtaining an alternative source for the production of recombinant HSA (rHSA). However, no host has yet proven commercially effective for rHSA production because of its respective limitations. Thus, there is an indispensable need to promote non-animal-derived rHSA production. Of all the host systems, Escherichia coli is one of the most convenient, having contributed to the production of more than 30% of FDA-approved recombinant pharmaceuticals. E. coli grows rapidly, and its culture reaches high cell density using inexpensive and simple substrates. The fermentation batch turnaround number for E. coli culture is 300 per year, far greater than that of any other available host system. Therefore, E. coli-derived recombinant products have greater economic potential, as fermentation processes are cheaper than those of other expression hosts. Despite all these advantages, E. coli had not been successfully adopted as a host for rHSA production. The major bottleneck in exploiting E. coli as a host for rHSA production was aggregation: the majority of the expressed recombinant protein formed inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is tedious, time-consuming, laborious, and expensive. Because of this limitation, the E. coli host system was neglected for rHSA production for the last few decades. Considering the advantages of E. coli as a host, the present work targeted E. coli as an alternative host for rHSA production by resolving the major issue of inclusion body formation associated with it. In the present study, we have developed a novel method for enhanced soluble and functional production of rHSA in E. coli (~60% of the total expressed rHSA in the soluble fraction) through modulation of cellular growth, folding, and environmental parameters, leading to significantly improved expression levels as well as a higher functional and soluble proportion of the total expressed rHSA in the cytosolic fraction of the host. We have therefore filled a gap in the literature by exploiting the well-studied, low-cost, fast-growing, scalable, and 'yet neglected' host system Escherichia coli to enhance the functional production of HSA, one of the most crucial biomolecules for clinical and biotechnological applications.
Keywords: enhanced functional production of rHSA in E. coli, recombinant human serum albumin, recombinant protein expression, recombinant protein processing
Procedia PDF Downloads 347
262 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens
Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott
Abstract:
In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from the haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examination of trephine biopsies to diagnose MTB is also subject to delay, owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow-up or remain untreated while results are awaited, thus encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB programme as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate (BMA) specimens, a first since the introduction of the assay in the diagnosis of extrapulmonary TB. BMA received for immunophenotypic analysis, as part of the investigation into disseminated MTB or the evaluation of cytopenias in immunocompromised patients, were used. Processing of BMA on the Xpert® MTB/RIF was optimised to ensure that bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into the clinical bone marrow specimen and into distilled water (as a control). A volume of 500 µl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol. A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis as an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared to histology and culture. Sensitivity of the Xpert® MTB/RIF compared to both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%), and sensitivity compared to histology only was 11.1% (95% CI: 1.38-34.7%). Specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a faster result than histology and TB culture and is less expensive than culture and drug susceptibility testing, its low sensitivity precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative/additional testing to optimise the assay.
Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF
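As an aside on the reported statistics: the sensitivity and specificity values with their confidence intervals can be computed from a standard 2×2 contingency table. The Python sketch below is illustrative only; the cell counts are hypothetical values chosen to match the reported point estimates (the abstract does not give the raw counts), and the exact Clopper-Pearson interval shown is one common way to obtain such confidence bounds.

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a proportion x/n."""
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

# Hypothetical counts against the composite reference standard
# (histology and/or culture), chosen only to reproduce the reported
# point estimates; not the study's actual data.
tp, fn = 2, 21   # 2/23  -> ~8.7% sensitivity
tn, fp = 89, 1   # 89/90 -> ~98.9% specificity

print(f"Sensitivity: {tp / (tp + fn):.1%}, 95% CI {clopper_pearson(tp, tp + fn)}")
print(f"Specificity: {tn / (tn + fp):.1%}, 95% CI {clopper_pearson(tn, tn + fp)}")
```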
Procedia PDF Downloads 172
261 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example
Authors: Wang Yang
Abstract:
Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has been directed by the administration of the Chinese government. The city's planning paradigm is primarily driven by spatial structure and human behavior. Under this top-down paradigm, the urban agglomeration is subjectively divided into several groups and centers, while the intrinsic laws of city development tend to be neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining has been utilized to acquire and clean data such as business data, traffic data, and population data. Before data mining, government data were collected by traditional means and analyzed through city-relationship research, which lagged behind the pace of urban development, especially for the contemporary city, since Internet-based data are updated very quickly. In this study, mined points of interest (POIs) serve as a data source reflecting city function, while satellite remote sensing is used as a reference object; conducting city analysis in both directions breaks the administrative paradigm of government and restores urban research. Therefore, the use of data mining in urban analysis is very important. Satellite remote sensing data for Shenzhen in July 2018, acquired by the MODIS sensor, were used to perform land surface temperature inversion and to analyze the distribution of the Shenzhen heat island. POI data for Shenzhen were acquired and classified using web crawler technology. Data on the Shenzhen heat island and points of interest were simulated and analyzed on a GIS platform to discover the main features of the distribution of functionally equivalent areas. Shenzhen extends along an east-west axis, and its main streets follow the direction of the city's development; the functional areas of the city are therefore also distributed in an east-west direction. The urban heat island thermal map mirrors the functional urban areas, and regional POIs show a clear correspondence. The research results clearly show that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, setting aside the impact of the urban climate. Taking urban POIs as the object of analysis, the distribution of municipal POIs and population aggregation are closely connected, so that the distribution of the population corresponds with the distribution of the urban heat island.
Keywords: POI, satellite remote sensing, population distribution, urban heat island thermal map
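As a minimal illustration of the correlation analysis described above (not the authors' actual GIS workflow), the Python sketch below correlates gridded POI density with land surface temperature; here both variables are synthetic, whereas in the study the LST raster would come from the MODIS inversion and the POI counts from the crawled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-grid-cell values over the city
poi_density = rng.poisson(lam=20, size=500).astype(float)   # POIs per cell
lst = 30 + 0.1 * poi_density + rng.normal(0, 1, size=500)   # LST in °C

# Pearson correlation between POI density and land surface temperature
r = np.corrcoef(poi_density, lst)[0, 1]
print(f"Pearson r between POI density and LST: {r:.2f}")
```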
Procedia PDF Downloads 104
260 Criticality of Adiabatic Length for a Single Branch Pulsating Heat Pipe
Authors: Utsav Bhardwaj, Shyama Prasad Das
Abstract:
To meet the extensive thermal management requirements of circuit card assemblies (CCAs), satellites, PCBs, microprocessors, and other electronic circuitry, pulsating heat pipes (PHPs) have emerged in the recent past as one of the best technical solutions. However, industrial application of PHPs remains largely unexplored due to their poor reliability. Several system and operational parameters not only affect the performance of an operating PHP but also decide whether the PHP can operate sustainably at all; functioning may be completely halted for particular combinations of system and operational parameter values. Among the system parameters, adiabatic length is one of the most important. In the present work, the simplest single-branch PHP system with an adiabatic section has been considered, assumed to contain only one vapour bubble and one liquid plug. First, the system is mathematically modeled using a film evaporation/condensation model, followed by recognition of the equilibrium zone, non-dimensionalization, and linearization. Then, proceeding with a periodic solution of the linearized and reduced differential equations, a stability analysis is performed. Slow and fast variables are identified, and an averaging approach is used for the slow ones. Ultimately, the temporal evolution of the PHP is predicted by numerically solving the averaged equations, to determine whether the oscillations are likely to sustain or decay over time. A stability threshold is also determined in terms of non-dimensional numbers formed by different groupings of system and operational parameters. A combined analytical and numerical approach has been used, and it has been found that for each combination of all other parameters, there exists a maximum length of the adiabatic section beyond which the PHP cannot function at all. This length is called the "Critical Adiabatic Length (L_ac)". For adiabatic lengths greater than L_ac, oscillations are found to always decay sooner or later. The dependence of L_ac on other parameters has also been checked and correlated at certain evaporator and condenser section temperatures. L_ac increases linearly with evaporator section length (L_e), whereas the condenser section length (L_c) has almost no effect on it up to a certain limit. At considerably large condenser section lengths, however, L_ac is expected to decrease with increasing L_c due to increased wall friction. A rise in the static pressure (p_r) exerted by the working fluid reservoir makes L_ac rise exponentially, whereas it increases cubically with the inner diameter (d) of the PHP. The physics behind all these variations is also discussed. Thus, a methodology for quantifying the critical adiabatic length for any possible set of PHP parameters has been established.
Keywords: critical adiabatic length, evaporation/condensation, pulsating heat pipe (PHP), thermal management
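To make the reported trends concrete, the sketch below encodes them as a purely illustrative function: L_ac rising linearly with evaporator length L_e, exponentially with reservoir static pressure p_r, and cubically with inner diameter d. The functional form and all coefficients are hypothetical placeholders, not the paper's fitted correlation.

```python
import numpy as np

# Illustrative only: encodes the qualitative dependencies reported above.
# The constants k0..k3 are hypothetical; the paper's correlations would
# supply the actual values at given evaporator/condenser temperatures.
def critical_adiabatic_length(L_e, p_r, d, k0=0.02, k1=0.5, k2=0.01, k3=2.0e5):
    return k0 + k1 * L_e + k2 * np.exp(p_r) + k3 * d**3   # lengths in metres

# e.g. a PHP with a 5 cm evaporator, unit non-dimensional reservoir
# pressure, and a 2 mm inner diameter
print(critical_adiabatic_length(L_e=0.05, p_r=1.0, d=0.002))
```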
Procedia PDF Downloads 227
259 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model
Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi
Abstract:
Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity, and angles of attack have to be investigated and proved safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. It is ultimately impossible to analyze a model exhaustively because of the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model establishes whether the specifications are satisfied. In order to perform fast, comprehensive, and effective analysis, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models, known as Linear Fractional Representation (LFR) models; these LFR models can describe the aircraft dynamics while taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speed and altitude as varying parameters; the LFR models were built from several flight conditions expressed in terms of speeds and altitudes. This method has gained great interest among aeronautical companies, which see a promising future for it in modeling, and particularly in the design and certification of control laws. In this research paper, we focus on the Cessna Citation X open-loop stability analysis. The data are provided by a Level D Research Aircraft Flight Simulator, corresponding to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., based on the research requirements of the LARCASE laboratory. The acquired data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a friendly graphical user interface developed during this study. The LFR models are then analyzed using the Interval Analysis method based on a Lyapunov function, as well as the 'stability and robustness analysis' toolbox. The results are presented as graphs, offering good readability and easy exploitation. The weakness of this method lies in its relatively long computation time, about four hours for the entire flight envelope.
Keywords: flight control clearance, LFR, stability analysis, robustness analysis
Procedia PDF Downloads 352
258 Using the Micro Computed Tomography to Study the Corrosion Behavior of Magnesium Alloy at Different pH Values
Authors: Chia-Jung Chang, Sheng-Che Chen, Ming-Long Yeh, Chih-Wei Wang, Chih-Han Chang
Abstract:
Introduction and Motivation: In recent years, magnesium alloys have come into use as biodegradable medical materials. Magnesium is an essential element in the body and is efficiently excreted by the kidneys. Furthermore, the mechanical properties of magnesium alloys are the closest to those of human bone. However, in some cases a magnesium alloy corrodes so quickly that it releases hydrogen at the implant surface. The other corrosion product is the hydroxide ion, which can significantly increase the local pH value. Both situations may have adverse effects on local cell functions. Moreover, magnesium alloys currently corrode too fast to maintain the function of an implant until the tissue has healed. Therefore, much recent research on magnesium alloys has focused on controlling the corrosion rate. The in vitro corrosion behavior of magnesium alloys is affected by many factors, of which pH value is one. In this study, we examine the influence of pH value on the corrosion behavior of magnesium alloy using micro-CT (micro computed tomography) and other instruments. Material and methods: In the first step, we made guiding plates for specimens of magnesium alloy AZ91 by rapid prototyping. The guiding plates serve as a reference for specimen degradation, allowing us to fix the position of the specimens in the CT images, and they also simplify the degradation conditions. In the next step, we prepared solutions with different pH values and immersed the specimens to start the corrosion test. CT images, surface photographs, and weight are measured every twelve hours. Results: The preliminary results confirm that CT imaging can be used to quantify the corrosion behavior of magnesium alloy. Moreover, we observe that corrosion always starts from erosion points, possibly due to defects such as dislocations and voids with high strain energy in the material. We will process the raw data into mass loss (ML) and corrosion rate from the CT images, surface photographs, and weight in the near future. As a simple prediction, pH value and degradation rate will be negatively correlated, and we aim to find the equation relating pH value to corrosion rate. We also ran a simple test to simulate the change of pH value in a local region; in this test the pH value rose to 10 in a short time. Conclusion: As a biodegradable implant in areas of the human body with stagnating body fluid flow, magnesium alloy can increase local pH values and release hydrogen, both of which may damage human cells. The purpose of this study is to find the equation relating pH value to corrosion rate; after that, we will look for ways to overcome the limitations of medical magnesium alloys.
Keywords: magnesium alloy, biodegradable materials, corrosion, micro-CT
Procedia PDF Downloads 457
257 An Experimental Study on Greywater Reuse for Irrigating a Green Wall System
Authors: Mishadi Herath, Amin Talei, Andreas Hermawan, Clarina Chua
Abstract:
Green walls are vegetated structures on building walls that are considered part of sustainable urban design. They have been shown to provide many micro-climate benefits, such as reduction in indoor temperature, noise attenuation, and improvement in air quality. Meanwhile, several studies have been conducted on the potential reuse of greywater in urban water management. Greywater is relatively clean compared to blackwater; therefore, this study aimed to assess its potential reuse for irrigating a green wall system. The campus of Monash University Malaysia, located in Selangor state, was chosen as the study site, where a total of 48 greywater samples were collected from 7 toilet hand-wash basins and 5 pantries over a 3-month period. The samples were tested to characterize the quality of greywater at the study site and compare it with the local standard for irrigation water. pH and concentrations of heavy metals, nutrients, Total Suspended Solids (TSS), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), total coliform, and E. coli were measured. Results showed that greywater could be used directly for irrigation with minimal treatment. Since the effluent of the system was to be drained to the stormwater drainage system, it needed to meet certain quality requirements. Therefore, a biofiltration system was proposed to host the green wall plants and also treat the greywater (used as irrigation water) to the required level. To assess the performance of the proposed system, an experimental setup consisting of polyvinyl chloride (PVC) soil columns with sand-based filter media was prepared. Two different local creeper plants were chosen, considering several factors including fast growth, low maintenance requirements, and aesthetics. Three replicates of each plant were used to ensure the validity of the findings. The growth and survivability of the creeping plants were monitored for 6 months, while monthly sampling and testing of the effluent was conducted to evaluate its quality. An analysis was also conducted to estimate the potential costs and benefits of such a system, considering water and energy savings. Results showed that the proposed system can work efficiently over a long period with minimal maintenance. Moreover, the biofiltration-green wall system successfully reused greywater as irrigation water while the effluent met all the requirements for drainage to the stormwater system.
Keywords: biofiltration, green wall, greywater, sustainability
Procedia PDF Downloads 214
256 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil
Authors: Yasser R. Tawfic, Mohamed A. Eid
Abstract:
Foundation differential settlement and the resulting tilting of supported structures is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution of uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet-grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economical, fast, and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to induce a balancing settlement on the less-settled side, which must be done carefully and at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa through soil extraction from the ground surface. In this research, the authors introduce a new solution from a different point of view: the micro-tunneling technique is presented as a means of inducing intended ground deformation. In general, micro-tunneling is expected to induce limited ground deformations; here, the researchers propose applying the technique to form small, unsupported holes in the ground to produce the target deformations. This is done in four phases: •Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure. •For each individual tunnel, the lining is pulled out from both sides (from the jacking and receiving shafts) at a slow rate. •If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side. •Finally, strengthening soil grouting is applied for stabilization after adjustment. A finite element based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparative results also show the importance of position selection and the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.
Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures
Procedia PDF Downloads 208
255 Nuclear Materials and Nuclear Security in India: A Brief Overview
Authors: Debalina Ghoshal
Abstract:
Nuclear security is the 'prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.' Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, not just in India, security of nuclear materials remains a priority. India has therefore made continued efforts to tighten security over its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal: indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor; Light Water Reactors; and indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security. India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to maintain the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology, and international cooperation. However, there is still scope for improvement in strengthening nuclear materials and nuclear security. According to the NTI Report, 'India's improvement reflects its first contribution to the IAEA Nuclear Security Fund'; in the future, 'India's nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India's nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.' This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.
Keywords: India, nuclear security, nuclear materials, non-proliferation
Procedia PDF Downloads 352
254 Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers' Moral Motivation
Authors: Chong Jiang, Liang Zhao, Hua Jian, Xiaoguang Wang
Abstract:
The belief that there is a strong relationship between reading narratives and morality has generally become a basic assumption of scholars, philosophers, critics, and cultural commentators. The virtuality constructed by literary novels invites readers to regard the narrative as a thought experiment, creating distance between readers and events so that they can freely and morally experience the positions of different roles. The virtual narrative, combined with literary characteristics, is therefore often considered a 'moral laboratory.' Well-established findings reveal that people show fewer lying and deceptive behaviors in the morning than in the afternoon, a phenomenon called the morning morality effect. As a limited self-regulation resource, morality is steadily depleted through the day's rhythm under the influence of the morning morality effect. It can also be compensated and restored in various ways, such as eating, sleeping, etc. As a common form of entertainment in modern society, reading literary novels offers people virtual experience and emotional catharsis, much like a relaxing afternoon tea that helps people break away from fast-paced work, restore their strength, and relieve stress during a short period of leisure. In this paper, inspired by compensation control theory, we ask whether reading literary novels in a digital environment can replenish a kind of spiritual energy for self-regulation, compensating for people's moral loss in the afternoon. Based on this assumption, we leverage the social annotation text generated by readers during digital reading to represent their reading attention. We then recognize the semantics, calculate the moral motivation expressed in the annotations, and investigate the fine-grained dynamics of moral motivation across each time slot within the 24 hours of a day. Comprehensively comparing different divisions of time intervals, extensive experiments showed that the moral motivation reflected in afternoon annotations is significantly higher than that in the morning. The results robustly verify the hypothesis that reading compensates for moral motivation, which we call the moral afternoon tea effect. Moreover, we quantitatively identify that such moral compensation can last until 14:00 in the afternoon and 21:00 in the evening. In addition, it is interesting to find that the unit used to divide time intervals impacts the identification of moral rhythms: dividing the day into four-hour slots brings more insight into moral rhythms than three-hour or six-hour slots.
Keywords: digital reading, social annotation, moral motivation, morning morality effect, control compensation
Procedia PDF Downloads 149
253 Assessment of Physical Activity and Sun Exposure of Saudi Patients with Type 2 Diabetes Mellitus in Ramadan and Non-Ramadan Periods
Authors: Abdullah S. Alghamdi, Khaled Alghamdi, Richard O. Jenkins, Parvez I. Haris
Abstract:
Background: Physical activity is an important factor in the treatment and prevention of type 2 diabetes mellitus (T2DM). A reduction in HbA1c level, an important diabetes biomarker, has been reported in patients who increased their daily physical activity. Although high ambient temperature has been reported to negatively impact health and increase the incidence of diabetes, exposure to bright sunlight was recently found to be associated with enhanced insulin sensitivity and improved beta-cell function. How Ramadan alters physical activity, and especially sunlight exposure, has not been adequately investigated. Aim: This study aimed to assess the physical activity and sun exposure of Saudis with T2DM over different periods (before, during, and after Ramadan) and to relate these to HbA1c levels. Methods: This study recruited 82 Saudis with T2DM, who chose to fast during Ramadan, from the Endocrine and Diabetic Centre of Al Iman General Hospital, Riyadh, Saudi Arabia. Ethical approvals were obtained from De Montfort University and the Saudi Ministry of Health. Physical activity and sun exposure were assessed by a self-administered questionnaire. Physical activity was estimated using the International Physical Activity Questionnaire (IPAQ), while sun exposure was assessed by asking patients about their weekly hours of direct exposure to the sun and their daily hours spent outdoors. Blood samples were collected in each period for measuring HbA1c. Results: Low physical activity was observed in more than 60% of the patients, with no significant changes between periods. There were no significant differences between periods in the daily hours spent outdoors or in the total weekly hours of direct sun exposure. The majority of patients reported only a few hours of sun exposure (1 hour or less per week) and time spent outdoors (1 hour or less per day). The mean HbA1c changed significantly between periods (P = 0.001), with the lowest level during Ramadan. Mean HbA1c differed significantly between physical activity groups (P < 0.001), with a significantly lower mean HbA1c in the higher-activity group. There were no significant differences in mean HbA1c between groups for daily hours spent outdoors. The mean HbA1c of patients who reported no weekly sun exposure was significantly lower than that of those who reported 1 hour or less (P = 0.001). Conclusion: Physical inactivity was prevalent among the study population, with very little sun exposure or time spent outdoors. A higher level of physical activity was associated with lower mean HbA1c levels. Encouraging T2DM patients to achieve the recommended levels of physical activity may help them obtain greater benefits from Ramadan fasting, such as reduced HbA1c levels. The impact of low direct sun exposure and time spent outdoors needs to be further investigated in both healthy individuals and diabetic patients.
Keywords: diabetes, fasting, physical activity, sunlight, Ramadan
Procedia PDF Downloads 160
252 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems
Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber
Abstract:
Understanding and modelling real-world complex dynamic systems in biology, engineering, and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate its state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient enough even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm that achieves invertibility with a minimum set of measured states. This greedy algorithm is very fast and is guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems. Since invertibility is a necessary condition for unknown-input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement
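The path-counting step can be cast as a maximum-flow problem via Menger's theorem: splitting each internal node into a unit-capacity in/out pair makes the maximum flow from the input set to the output set equal the number of node-disjoint input-output paths. The Python sketch below (using networkx; a standard reduction, not necessarily the authors' implementation) illustrates this on a toy network.

```python
import networkx as nx

def num_disjoint_paths(g, inputs, outputs):
    """Count node-disjoint paths from the input set to the output set
    by max-flow on the node-split graph (Menger's theorem)."""
    h = nx.DiGraph()
    for v in g.nodes:
        cap = float("inf") if v in inputs or v in outputs else 1
        h.add_edge((v, "in"), (v, "out"), capacity=cap)  # node capacity
    for u, v in g.edges:
        h.add_edge((u, "out"), (v, "in"), capacity=float("inf"))
    for v in inputs:
        h.add_edge("s", (v, "in"), capacity=1)
    for v in outputs:
        h.add_edge((v, "out"), "t", capacity=1)
    return nx.maximum_flow_value(h, "s", "t")

# Toy network: two inputs feed node 3, which fans out to two sensors.
# Node 3 is a shared bottleneck, so only one disjoint path exists and
# two unknown inputs could not be reconstructed from these sensors.
g = nx.DiGraph([(1, 3), (2, 3), (3, 4), (3, 5)])
print(num_disjoint_paths(g, inputs={1, 2}, outputs={4, 5}))  # -> 1
```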
Procedia PDF Downloads 150
251 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program
Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun
Abstract:
A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. In order to prevent large-scale traffic accidents from causing great loss of life and to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of major accident variables on an accident. This study analyzes the contribution of individual accident variables to accident results, based on accurate reconstruction of traffic accidents using the Multi-Body system of PC-Crash, an accident reconstruction program, and simulation of each scenario. The Multi-Body (MB) system of PC-Crash enables multi-body accident reconstruction showing motions in diverse directions that previous approaches could not capture; it designs and reproduces a body form exhibiting realistic motion using several linked bodies. Targeting the 'freight truck cargo drop accident around the Changwon Tunnel' of November 2017, this study simulated the cargo drop accident and analyzed the contribution of the individual major variables. On the basis of driving speed, cargo load, and stacking method, six scenarios were devised. The simulation analysis showed that the freight truck was driven at a speed of 118 km/h (speed limit: 70 km/h) right before the accident, carried 196 oil containers with a weight of 7,880 kg (maximum load: 4,600 kg), and was not fully equipped with anchoring equipment that could prevent a drop of cargo. Vehicle speed, cargo load, and cargo anchoring equipment were thus the major accident variables, and the contribution analysis results for the individual variables are as follows. When the freight truck obeyed only the speed limit, the scattering distance of the oil containers decreased by 15%, and the number of dropped oil containers decreased by 39%. When the freight truck obeyed only the cargo load limit, the scattering distance decreased by 5%, and the number of dropped containers decreased by 34%. When the freight truck obeyed both the speed limit and the cargo load limit, the scattering distance fell by 38%, and the number of dropped containers fell by 64%. The analysis of each scenario revealed that overspeed and excessive cargo load contributed to the spread of accident damage; even a truck whose cargo could not fall would cause a different type of accident when driven too fast with an excessive load, while a truck obeying both the speed limit and the cargo load limit had the lowest probability of causing an accident.
Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system
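For concreteness, the reported percentage reductions can be applied to a baseline outcome as in the arithmetic sketch below; the baseline scatter distance is hypothetical (the abstract gives only relative reductions), while the 196 containers and the reduction factors come from the text.

```python
# Baseline from the reconstructed accident; the scatter distance is a
# hypothetical placeholder, the container count is from the abstract.
baseline_scatter_m = 100.0
baseline_dropped = 196

# (scatter reduction, dropped-container reduction) per scenario
scenarios = {
    "speed limit only":         (0.15, 0.39),
    "cargo load only":          (0.05, 0.34),
    "speed limit + cargo load": (0.38, 0.64),
}

for name, (d_scatter, d_dropped) in scenarios.items():
    print(f"{name}: scatter {baseline_scatter_m * (1 - d_scatter):.0f} m, "
          f"dropped {baseline_dropped * (1 - d_dropped):.0f} containers")
```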
Procedia PDF Downloads 200
250 Synthesis and Characterization of High-Aspect-Ratio Hematite Nanostructures for Solar Water Splitting
Authors: Paula Quiterio, Arlete Apolinario, Celia T. Sousa, Joao Azevedo, Paula Dias, Adelio Mendes, Joao P. Araujo
Abstract:
Nowadays, one of mankind's greatest challenges is the supply of low-cost, environmentally friendly energy sources as alternatives to non-renewable fossil fuels. Hydrogen has been considered a promising solution, representing a clean and low-cost fuel. It can be produced directly from clean and abundant resources, such as sunlight and water, using photoelectrochemical cells (PECs), in a process that mimics nature's photosynthesis. Hematite (α-Fe2O3) has attracted considerable attention as a promising photoanode for solar water splitting, due to its high chemical stability, non-toxicity, availability, and low band gap (2.2 eV), which allows it to reach a high thermodynamic solar-to-hydrogen efficiency of 16.8%. However, the main drawbacks of hematite, such as the short hole diffusion length and poor conductivity, lead to high electron-hole recombination and result in significant PEC efficiency losses. One strategy to overcome these limitations and increase PEC efficiency is to use 1D nanostructures, such as nanotubes (NTs) and nanowires (NWs), which present high aspect ratios and large surface areas, providing direct pathways for electron transport up to the charge collector and minimizing recombination losses. In particular, due to the ultrathin walls of the NTs, holes can reach the surface faster than in other nanostructures, a key factor in the NTs' photoresponse. In this work, we prepared hematite NWs and NTs, by hydrothermal processing and electrochemical anodization respectively. For hematite NW growth, we studied the effects of varying hydrothermal conditions, different annealing temperatures and times, and the use of Ti and Sn dopants on morphology and PEC performance. Crystalline phase characterization by X-ray diffraction was crucial to distinguish the formation of hematite from other iron oxide phases, alongside its effect on the photoanodes' conductivity and the consequent PEC efficiency. The conductivity of the as-prepared NWs is very low, on the order of 10^-5 S cm^-1, but after doping and annealing optimization it increased by a factor of 10^5. A high photocurrent density of 1.02 mA cm^-2 at 1.45 V_RHE was obtained under simulated sunlight, a very promising value for this kind of hematite nanostructure. The stability of the photoelectrodes was also tested, showing good stability after several J-V measurements over time. The NTs, synthesized by fast anodization at potentials ranging from 20 to 100 V, presented linear growth of the pore walls, with very low wall thicknesses of 10-18 nm. These preliminary results are also very promising for the use of hematite photoelectrodes in PEC hydrogen applications.
Keywords: hematite, nanotubes, nanowires, photoelectrochemical cells
Procedia PDF Downloads 229
249 Socio-Political Crisis in the North West and South West Regions of Cameroon and the Emergence of New Cultures
Authors: Doreen Mekunda
Abstract:
This paper is built on the premise that the current socio-political crisis in the two restive regions of Cameroon, though enveloped in destructive and devastating effects on both property and human lives, is not without its strengths and merits. It is incontestable that many cultures will, to a great extent, be destroyed as people are forced to move from war-stricken habitats to non-violent places. Much cultural potential, along with traditional shrines, artifacts, art, and crafts, is knowingly or unknowingly disfigured, and many other ills will, by the end of the crisis, have affected the cultures of these two regions under siege and of the receiving populations. A plethora of other problems will abound, such as the persecution of Internally Displaced Persons (IDPs), who are blamed for being displaced and for increased crime rates, and cultural and ethnic differences that produce inter-tribal and interpersonal conflicts as well as conflicts between communities. However, the rapid emergence of literature and other forms of cultural production, whether written or oral, is visible, precipitating a rich cultural diversity that stems from the coming together of the varied cultures of IDPs and receiving populations, rapid urbanization, improvements in health-related matters, the rebirth of indigenous cultural practices, the development of social and lingua-cultural competences, and reliance on alternative religions, faith, and spirituality. Even the financial and economic dependence of IDPs, though a burden to others, has its merits, as it improves the IDPs' living standards. To obtain plausible results, cultural materialism, a literary theory that hinges on the empirical study of socio-cultural systems within a materialist infrastructure-superstructure framework, is employed together with postcolonial theory. Postcolonial theory is used because the study deals with the postcolonial experiences and tenets of migration, hybridity, ethnicity, indigeneity, language, double consciousness, center/margin binaries, and identity, amongst others. The study reveals that the involuntary movement of persons from their habitual homes brings about movement in cultures and, thus, the emergence of new cultures. The movement of people who hold fast to their cultural heritage can only foster new forms of literature, the development of new communication competences, the rise of alternative religion, faith, and spirituality, the re-emergence of customary and traditional legal systems that might have been abandoned for the new judicial systems, and, above all, the revitalization of traditional health care systems.
Keywords: alternative religion, emergence, socio-political crisis, spirituality, lingua-cultural competences
Procedia PDF Downloads 178
248 Promoting 21st Century Skills through Telecollaborative Learning
Authors: Saliha Ozcan
Abstract:
Technology has become an integral part of our lives, aiding individuals in accessing higher-order competencies such as global awareness, creativity, collaborative problem solving, and self-directed learning. Students need to acquire these competencies, often referred to as 21st century skills, in order to adapt to a fast-changing world. Today, an ever-increasing number of schools are exploring how engagement through telecollaboration can support language learning and promote 21st century skill development in classrooms. However, little is known regarding how telecollaboration may influence the way students acquire 21st century skills. In this paper, we aim to shed light on the potential implications of telecollaborative practices for the acquisition of 21st century skills. In our context, telecollaboration, which might be carried out in a variety of settings either synchronously or asynchronously, is considered the process of communicating and working together with other people or groups from different locations, through online digital tools or offline activities, to co-produce a desired work output. The study presented here describes and analyses the implementation of a telecollaborative project between two high school classes, one in Spain and the other in Sweden. The students in these classes were asked to carry out joint activities, including creating an online platform, aimed at raising awareness of the situation of Syrian refugees. We conduct a qualitative study in order to explore how language, culture, communication, and technology merge into the co-construction of knowledge, and how they support the attainment of the 21st century skills needed for network-mediated communication. To this end, we collected a significant amount of audio-visual data, including video recordings of classroom interaction and external Skype meetings. By analysing this data, we verify whether the initial pedagogical design and intended objectives of the telecollaborative project coincide with what emerges from the actual implementation of the tasks. Our findings indicate that, in addition to planned activities, unplanned classroom interactions may lead to the acquisition of certain 21st century skills, such as collaborative problem solving and self-directed learning. This work is part of a wider project (KONECT, EDU2013-43932-P; Spanish Ministry of Economy and Finance), which aims to explore innovative, cross-competency-based teaching that can address the current gaps between today's educational practices and the needs of informed citizens in tomorrow's interconnected, globalised world.
Keywords: 21st century skills, telecollaboration, language learning, network mediated communication
Procedia PDF Downloads 125
247 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring
Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra
Abstract:
Recent food price hikes may signal the end of an era of predictable global grain crop plenty due to climate change, population expansion, and dietary changes. Food consumption will treble in 20 years, requiring enormous production expenditures. Climate and atmospheric conditions have changed over the past decade, altering rainfall and seasonal cycles. India's tropical agriculture relies on evapotranspiration and monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers' capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing can be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluation of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this, we must digitally map all communities' agricultural boundaries and crop types. With the improvement of satellite remote sensing technologies, a geo-referenced database can be created for rural Indian agricultural fields. Using AI, we can design digital agricultural solutions for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining Artificial Intelligence (AI) with satellite and near-surface data, and then to establish long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm to understand time-lapse vegetation growth from PhenoCam or smartphone images. We developed an Android application through which users can collect images of their fields; these images are sent to our local server, where further AI-based processing is performed. We are creating digital boundaries for individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We are extracting satellite-based information for each farm through Google Earth Engine APIs and merging it, according to farm location, with the crop data collected through our app, to create a database that provides crop quality information for each location.
Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application
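A minimal sketch of the satellite-index extraction step is given below, using the Google Earth Engine Python API mentioned above; the Sentinel-2 collection, band names, dates, and farm polygon are assumptions for illustration, not the project's actual configuration.

```python
import ee

ee.Initialize()  # assumes prior `earthengine authenticate`

# Hypothetical farm boundary polygon (longitude/latitude pairs)
farm = ee.Geometry.Polygon([[[91.750, 26.150], [91.760, 26.150],
                             [91.760, 26.160], [91.750, 26.160]]])

# Median NDVI over one growing season from Sentinel-2 surface reflectance
ndvi = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterBounds(farm)
        .filterDate("2022-06-01", "2022-10-31")
        .map(lambda img: img.normalizedDifference(["B8", "B4"]).rename("NDVI"))
        .median())

# Mean NDVI within the farm polygon at 10 m resolution
stats = ndvi.reduceRegion(ee.Reducer.mean(), farm, scale=10)
print(stats.getInfo())
```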
Procedia PDF Downloads 100
246 First-Trimester Screening of Preeclampsia in a Routine Care
Authors: Tamar Grdzelishvili, Zaza Sinauridze
Abstract:
Introduction: Preeclampsia is a complication of the second trimester of pregnancy characterized by high morbidity and multi-organ damage. Many complex pathogenic mechanisms are now implicated as responsible for this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide. The statistics alone convey the seriousness of this pathology: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial origin, ethnicity, and geographical region), in a mild form in 75% of cases and in a severe form in 25%. In severe preeclampsia-eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Considering that the only way to treat the disease is to end the pregnancy, timely diagnosis and prevention are key. Identifying pregnant women at high risk for PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven effective, with screening performance superior to that of the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-center screening study. The study population consisted of women from the Tbilisi maternity hospital 'Pineo medical ecosystem' who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11 and 13 weeks of pregnancy. The following were evaluated: anamnesis, dopplerography of the uterine artery, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited, and through the study, 51 women were diagnosed with preeclampsia (34.5% of the high-risk pregnant women, 6.5% of the low-risk pregnant women; P < 0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful for predicting PE in a routine care setting. More patient studies are needed for final conclusions. The research is still ongoing.
Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein
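The Bayes-theorem combination that the FMF model relies on can be illustrated in miniature: a prior risk from maternal history is converted to odds and updated with marker likelihood ratios. The Python sketch below is a didactic simplification with hypothetical numbers, not the FMF competing-risks algorithm or the method implemented in the Fast Screen software.

```python
def posterior_risk(prior, likelihood_ratios):
    """Update a prior probability with a set of likelihood ratios."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical example: 2% background risk updated by likelihood ratios
# for MAP, uterine artery PI, and a serum marker (all illustrative values)
print(f"{posterior_risk(0.02, [3.0, 1.8, 2.2]):.1%}")  # ~19.5%
```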
Procedia PDF Downloads 77
245 Industrial Hemp Agronomy and Fibre Value Chain in Pakistan: Current Progress, Challenges, and Prospects
Authors: Saddam Hussain, Ghadeer Mohsen Albadrani
Abstract:
Pakistan is one of the countries most vulnerable to climate change. With 23% of the country's GDP relying on agriculture, this is a serious cause for concern. Introducing industrial hemp in Pakistan can help build climate resilience in the country's agricultural sector, as hemp has recently emerged globally as a sustainable, eco-friendly, resource-efficient, and climate-resilient crop. Hemp can absorb huge amounts of CO₂, nourish the soil, and be used to create various biodegradable and eco-friendly products. Hemp is twice as effective as trees at absorbing and locking up carbon, with 1 hectare (2.5 acres) of hemp reckoned to absorb 8 to 22 tonnes of CO₂ a year, more than any woodland. Along with its high carbon-sequestration ability, it produces high biomass and can be successfully grown as a cover crop. Hemp can grow in almost all soil conditions, does not require pesticides, grows fast, and needs only 120 days to be ready for harvest. Compared with cotton, hemp requires 50% less water and can produce a three-times-higher fiber yield with a lower ecological footprint. Recently, the Government of Pakistan has allowed the cultivation of industrial hemp for industrial and medicinal purposes, making it possible for hemp to be reinserted into the country's economy. Pakistan's agro-climatic and edaphic conditions are well suited to producing industrial hemp, and its cultivation can bring economic benefits to the country. Pakistan can enter global markets as a new exporter of hemp products. Hemp production can be most rewarding for the workforce, especially farmers participating in hemp markets. The low production cost of hemp makes it affordable to smallholder farmers, especially those who need their cropping system to be as sustainable as possible. Dr. Saddam Hussain is leading the first pilot project on industrial hemp in Pakistan. In the past three years, he has secured high-impact research grants on industrial hemp as Principal Investigator. He has screened non-toxic hemp genotypes, tested the adaptability of exotic material under various agroecological conditions, formulated the production agronomy, and successfully developed the complete value chain. He has developed prototypes (fabric, denim, knitwear) using hemp fibre in collaboration with industrial partners and has optimized indigenous fibre processing techniques. In this lecture, Dr. Hussain will speak on hemp agronomy and the complete fibre value chain, discuss current progress, and highlight the major challenges and future directions of hemp research.
Keywords: industrial hemp, agricultural sustainability, agronomic evaluation, hemp value chain
Procedia PDF Downloads 81
244 Fast-Tracking University Education for Youth Employment: Empirical Evidence from University Graduates in Rwanda
Authors: Fred Alinda, Marjorie Negesa, Gerald Karyeija
Abstract:
Like elsewhere in the world, youth unemployment remains a major problem, particularly for the most educated youth and for women. In Rwanda, unemployment is estimated at 13.2% among youth graduates compared to 10.9% and 2.6% among secondary and primary graduates respectively. Though empirical evidence elsewhere associates youth unemployment with education level, relevance of skills, and access to business support opportunities, evidence on the significance of these factors for youth employment remains mixed. As youth employment strategies in countries like Rwanda continue to recognize the potential role university education can play in enhancing employment, there is a need to understand the catalysts and barriers. This paper therefore draws empirical evidence from a survey on the influence of education qualification, skills relevance, and access to business support opportunities on employment of youth university graduates in Masaka sector, Rwanda. The analysis tested four hypotheses: access to university education significantly affects youth employment; relevance of university education significantly contributes to youth employment; access to business support opportunities significantly contributes to youth employment; and significant gender differences exist in the employment of youth university graduates. A cross-sectional survey was used in view of the need to explore the prevailing status of youth employment and its contributing factors across the sector. A questionnaire was used to collect data on a large sample of 269 youth to allow statistical analysis, complemented by the qualitative views of leaders and technical officials in the sector. The youth university graduates were selected using simple random sampling, while the leaders and technical officials were selected purposively. Percentages were used to describe respondents in line with the variables under study, while a regression model for youth employment was fitted to determine the significant factors; a sketch of such a model follows below. The model results indicated a significant influence (p<0.05) of gender, education level, and access to business support opportunities on employment of youth university graduates. This finding was also affirmed by the qualitative views of key informants, which pointed to the fact that university education generally equipped the youth with skills that enabled their transition into employment, mainly for a salary or wage. The skills were, however, deficient in technical and practical aspects. In addition, the youth generally had limited access to business support opportunities, particularly guarantees for loans, business advisory services, grants for business, and training in business skills that would help them gain salaried employment or transition into self-employment. The study findings bear an implication for the strategy of catalyzing youth employment through university education. They imply that university education should be embraced but with greater emphasis on, or supplementation with, specialized training in practical and technical skills, as well as extending business support opportunities to the youth. This will accelerate the contribution of university education to youth employment. Keywords: education, employment, self-employment, youth
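The abstract reports a regression model with significant effects (p<0.05) of gender, education level, and access to business support. A minimal sketch of such a model is shown below, assuming a binary employment outcome; the data and column names are synthetic and hypothetical, and the study's actual dataset and model specification are not reproduced here.

```python
# A sketch of a logistic regression of graduate employment on gender,
# education level, and access to business support. Data are synthetic and
# column names are hypothetical, chosen only to mirror the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "employed":  [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "female":    [0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1],
    "edu_level": [3, 2, 3, 2, 1, 3, 2, 3, 1, 2, 3, 2],  # ordinal education score
    "support":   [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1],  # access to business support
})

model = smf.logit("employed ~ female + edu_level + support", data=df).fit(disp=False)
print(model.summary())  # coefficients with p-values; p < 0.05 flags significance
```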
Procedia PDF Downloads 256
243 Graphic Narratives: Representations of Refugeehood in the Form of Illustration
Authors: Pauline Blanchet
Abstract:
In a world where images are a prominent part of our daily lives and a way of absorbing information, analysis of how migration narratives are represented is vital. This thesis raises questions about the power of illustrations, drawings, and visual culture to represent migration narratives in the age of Instagram. The rise of graphic novels and comics has come about in the last fifteen years, specifically with contemporary authors engaging with complex social issues such as migration and refugeehood. As a result, refugee subjects often appear in these narratives, whether as autobiographical stories or with the subject included in the creative process. Growing discourse around migration has been present in other art forms: in 2018 there were dedicated exhibitions around migration, such as Tania Bruguera at the TATE (2018-2019) and ‘Journeys Drawn’ at the House of Illustration (2018-2019), as well as dedicated film festivals (2018; the Migration Film Festival), which show the recent interest in using the arts as a medium of expression for themes of refugeehood and migration. Graphic visuals are fast becoming a key instrument for representing migration, and the central aim of this paper is to show the strengths and limitations of this form, as well as the methodology used by the actors in the production process. Recent works released in the last ten years have not been analysed in the same context as earlier graphic novels such as Palestine and Persepolis. While much research has been done on mass-media portrayals of refugees in photography and journalism, there is a lack of literature on representation through illustration. There is little research on the accessibility of graphic novels, such as where they can be found and what the intentions are in writing them. It is interesting to see why these authors, NGOs, and curators have decided to highlight these migrant narratives at a time when the mainstream media has done extensive coverage of the ‘refugee crisis’. Using primary data from one-on-one interviews with artists, curators, and NGOs, this paper investigates the efficacy of graphic novels for depicting refugee stories as a viable alternative to other mass-media forms. The paper is divided into two distinct sections. The first is concerned with the form of the comic itself and how it either limits or strengthens the representation of migrant narratives. This involves analysing the layered and complex forms that comics allow, such as multimedia pieces, use of photography, and forms of symbolism. It also shows how illustration allows for the anonymity of refugees, the empathetic aspect of the form, and how the history of the graphic novel has made space for positive representations of women in the last decade. The second section analyses the creative and methodological process undertaken by the actors and their involvement in the production of the works. Keywords: graphic novel, refugee, communication, media, migration
Procedia PDF Downloads 117
242 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the behavior of the bridge during each step of the launching process proves tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by other load cases such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution for modeling bridge construction stages effectively. This paper presents a novel Finite Element (FE) model focused on the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers. Keywords: incremental launching, bridge construction, finite element model, optimization
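Although the paper's FE formulation is not reproduced in the abstract, the static core of such a model can be sketched with standard 2-node Euler-Bernoulli elements: mesh the deck, support it at the piers reached at a given launching stage, and solve for deflections. Everything numeric below (geometry, EI, load, support positions) is a hypothetical placeholder, not the study's normalized model.

```python
# Static Euler-Bernoulli beam FE sketch of one launching stage: a deck of
# 40 one-metre elements resting on two temporary supports, with the portion
# beyond the forward pier cantilevering. All properties are placeholders.
import numpy as np

def beam_k(EI, L):
    """4x4 stiffness matrix of a 2-node Euler-Bernoulli beam element."""
    c = EI / L**3
    return c * np.array([[ 12,     6*L, -12,     6*L],
                         [6*L, 4*L**2, -6*L, 2*L**2],
                         [-12,    -6*L,  12,    -6*L],
                         [6*L, 2*L**2, -6*L, 4*L**2]])

n_el, L_el, EI = 40, 1.0, 5.0e11     # elements, element length (m), EI (N*m^2)
n_dof = 2 * (n_el + 1)               # [deflection, rotation] per node
K = np.zeros((n_dof, n_dof))
F = np.zeros(n_dof)

w = -50e3                            # deck self-weight as a line load, N/m
for e in range(n_el):
    dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
    K[np.ix_(dofs, dofs)] += beam_k(EI, L_el)
    # consistent nodal loads of a uniform load on one element
    F[dofs] += w * L_el * np.array([0.5, L_el / 12, 0.5, -L_el / 12])

# launching stage: vertical support at the rear abutment (node 0) and at a
# pier (node 25); the last 15 m of deck cantilever towards the next pier
fixed = [0, 50]                      # vertical DOFs held at zero
free = [i for i in range(n_dof) if i not in fixed]
u = np.zeros(n_dof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print(f"cantilever tip deflection at this stage: {u[-2] * 1000:.2f} mm")
```

Re-running such a solve for each advance of the deck, with the support set updated, is the stage-by-stage simulation the abstract refers to.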
Procedia PDF Downloads 103
241 Oxalate Method for Assessing the Electrochemical Surface Area for Ni-Based Nanoelectrodes Used in Formaldehyde Sensing Applications
Authors: S. Trafela, X. Xua, K. Zuzek Rozmana
Abstract:
In this study, we used an accurate and precise method to measure the electrochemically active surface area (Aecsa) of nickel electrodes. The calculated Aecsa is important for evaluating an electrocatalyst's activity in the electrochemical reactions of different organic compounds. The method involves the electrochemical formation of Ni(OH)₂ and NiOOH in the presence of adsorbed oxalate in alkaline media. The studies were carried out using cyclic voltammetry with polycrystalline nickel as a reference material and with electrodeposited nickel nanowires and homogeneous and heterogeneous nickel films. From the cyclic voltammograms, the charge (Q) values for the formation of the Ni(OH)₂ and NiOOH surface oxides were calculated under various conditions. At sufficiently fast potential scan rates (200 mV s⁻¹), the adsorbed oxalate limits the growth of the surface hydroxides to a monolayer. Although the Ni(OH)₂/NiOOH oxidation peak overlaps with the oxygen evolution reaction, in the reverse scan the NiOOH/Ni(OH)₂ reduction peak is well separated from other electrochemical processes and can easily be integrated. The values of these integrals were used to correlate the experimentally measured charge density with the electrochemically active surface layer. The Aecsa values of the nickel nanowires and the homogeneous and heterogeneous nickel films were calculated to be Aecsa-NiNWs = 4.2066 ± 0.0472 cm², Aecsa-homNi = 1.7175 ± 0.0503 cm², and Aecsa-hetNi = 2.1862 ± 0.0154 cm². These results were then used in electrochemical studies of formaldehyde oxidation. The nickel nanowires and the heterogeneous and homogeneous nickel films were used as simple and efficient sensors for formaldehyde detection. For this purpose, the electrodeposited nickel electrodes were modified in a 0.1 mol L⁻¹ KOH solution in order to activate them electrochemically towards formaldehyde. The electrochemical behavior of formaldehyde oxidation in 0.1 mol L⁻¹ NaOH solution at the surface of the modified nickel nanowires and homogeneous and heterogeneous nickel films was investigated by means of electrochemical techniques such as cyclic voltammetry and chronoamperometry. From investigations of the effect of different formaldehyde concentrations (from 0.001 to 0.1 mol L⁻¹) on the electrochemical signal (current), we derived the catalytic mechanism of formaldehyde oxidation and determined the detection limit and sensitivity of the nickel electrodes. The results indicated that nickel electrodes participate directly in the electrocatalytic oxidation of formaldehyde. In the overall reaction, formaldehyde in alkaline aqueous solution exists predominantly in the form of CH₂(OH)O⁻, which is oxidized to CH₂(O)O⁻. Taking into account the determined Aecsa values, we calculated the sensitivities: 7 mA mol L⁻¹ cm⁻² for the nickel nanowires, 3.5 mA mol L⁻¹ cm⁻² for the heterogeneous nickel film, and 2 mA mol L⁻¹ cm⁻² for the homogeneous nickel film. The detection limit was 0.2 mM for the nickel nanowires, 0.5 mM for the porous Ni film, and 0.8 mM for the homogeneous Ni film. All of these results make nickel electrodes suitable for further applications. Keywords: electrochemically active surface areas, nickel electrodes, formaldehyde, electrocatalytic oxidation
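The Aecsa computation described above reduces, in essence, to integrating the baseline-corrected NiOOH/Ni(OH)₂ reduction peak and dividing the resulting charge by a monolayer reference charge density. The sketch below illustrates that arithmetic on a synthetic peak; the reference value q_ref and the peak shape are placeholders, not the study's calibrated numbers.

```python
# Charge integration behind an Aecsa estimate: integrate the reduction peak
# of a CV (current vs potential), divide by the scan rate to get charge, and
# divide by a monolayer reference charge density. Data and q_ref are placeholders.
import numpy as np

def ecsa_from_cv(E, I, scan_rate, q_ref=257e-6):
    """E in V, I in A (baseline-corrected reduction peak), scan_rate in V/s,
    q_ref in C/cm^2 (assumed monolayer value). Returns Aecsa in cm^2."""
    Q = abs(np.trapz(I, E)) / scan_rate   # charge under the peak: (1/v) * integral of I dE
    return Q / q_ref

# synthetic Gaussian "reduction peak" centred at 0.35 V, swept at 200 mV/s
E = np.linspace(0.2, 0.5, 300)
I = -1.2e-3 * np.exp(-((E - 0.35) / 0.03) ** 2)   # A
print(f"Aecsa ~ {ecsa_from_cv(E, I, 0.2):.2f} cm^2")
```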
Procedia PDF Downloads 161
240 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice, a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between the brine pockets and the pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until they reach a local equilibrium state. In this process, heat passing through the medium does not only change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. The properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium into a set of ordinary differential equations. The boundary conditions are chosen to match an applicable case for this type of ice: one side is treated as a thermally insulated surface, and the other side is assumed to be suddenly exposed to a constant-temperature boundary. All cases are evaluated at temperatures between -20 C and the freezing point of brine-spongy ice, for salinities from 5 to 60 ppt. Time steps and space intervals are chosen to keep the solution stable and fast. The variation of temperature, volume fraction of brine, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than to the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the beginning of the process; refining the intervals remedies this. The analytical model with its numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures. Keywords: method of lines, brine-spongy ice, heat conduction, salt water
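A minimal Method of Lines sketch of the boundary setup described, i.e. a 1-D slab insulated on one face and suddenly held at a cold temperature on the other, is given below. It uses constant placeholder properties; the paper's model additionally couples the latent heat of the brine pockets into the energy balance, which this sketch omits.

```python
# Method of Lines for 1-D transient heat conduction: discretize in space,
# integrate the resulting ODE system in time. Properties are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

L, n = 0.05, 50                  # 5 cm slab, 50 nodes
dx = L / (n - 1)
alpha = 1.2e-6                   # effective thermal diffusivity, m^2/s (assumed)
T_cold, T_init = -20.0, -2.0     # boundary and initial temperatures, deg C

def rhs(t, T):
    dT = np.empty_like(T)
    dT[0] = alpha * 2 * (T[1] - T[0]) / dx**2              # insulated face (zero flux)
    dT[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dT[-1] = 0.0                                           # face held at T_cold
    return dT

T0 = np.full(n, T_init)
T0[-1] = T_cold
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[600, 1800, 3600])
for tk, Tk in zip(sol.t, sol.y.T):
    print(f"t = {tk:5.0f} s   T(insulated face) = {Tk[0]:6.2f} C")
```

The stiff "BDF" integrator is chosen here because the sharp initial temperature gradient at the cold face is exactly the kind of feature the abstract notes as a source of early instability.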
Procedia PDF Downloads 217
239 Evaluating the Teaching and Learning Value of Tablets
Authors: Willem J. A. Louw
Abstract:
The wave of new advanced computing technology developed in the recent past has significantly changed the way we communicate, collaborate, and collect information. It has created a new technology environment and paradigm in which our children and students grow up, and this impacts their learning. Research has confirmed that Generation Y students prefer learning in this new technology environment. The challenge, or question, is: how do we adjust our teaching and learning to make the most of these changes? The complexity of effective and efficient teaching and learning must not be underestimated, and changes must be preceded by proper objective research to prevent haphazard developments that could do more harm than good. A blended learning approach has been used in the Forestry department for a number of years, including the use of electronic peer-assisted learning (e-pal) in a fixed-computer set-up within a learning management system environment. It was decided to extend the investigation and do some exploratory research using a range of different tablet devices. For this purpose, learning activities or assignments were designed to cover aspects of communication, collaboration, and collection of information. The Moodle learning management system was used to present normal module information, to communicate with students, and for feedback and data collection. Student feedback was collected using an online questionnaire and informal discussions. The research project was implemented in 2013, 2014, and 2015 amongst first- and third-year students doing a three-year technical tertiary forestry qualification in commercial plantation management. In general, more than 80% of the students indicated that the device was very useful in their learning environment, while the rest indicated that the devices were not very useful. More than ninety percent of the students acknowledged that they would like to continue using the devices for all of their modules, whilst the rest reported functioning efficiently without them. Results indicated that information collection (access to resources) was rated the most advantageous factor, followed by communication and collaboration. The main general advantages of using tablets listed by the students were mobility (portability); 24/7 access to learning material and information of any kind on a user-friendly device in a Wi-Fi environment; fast computing speeds; saving time, effort, and airtime through Skype and e-mail; and the use of various applications. Ownership of the device is a critical factor, while risk was identified as a major potential constraint. Significant differences were reported between the different types and qualities of tablets; the preferred types are those with a bigger screen and overall better functionality and quality features. Tablets significantly increase the collaboration, communication, and information-collection capabilities of the students. They do not, however, replace the need for a computer/laptop because of limited storage and computation capacity, small screen size, and inefficient typing. Keywords: tablets, teaching, blended learning, tablet quality
Procedia PDF Downloads 248
238 Reagentless Detection of Urea Based on ZnO-CuO Composite Thin Film
Authors: Neha Batra Bali, Monika Tomar, Vinay Gupta
Abstract:
A reagentless biosensor for the detection of urea based on a ZnO-CuO composite thin film is presented in the following work. Biosensors have immense potential for varied applications ranging from environmental and clinical testing to health care and cell analysis. The immense growth in the field of biosensors is driven by today's huge requirement for techniques that are both cost-effective and accurate for the prevention of disease manifestation. The human body comprises numerous biomolecules which, at their optimum levels, are essential for its functioning; mismanaged levels of these biomolecules, however, result in major health issues. Urea is one of the key biomolecules of interest, and its estimation is of paramount significance not only for the healthcare sector but also from an environmental perspective. If the level of urea in human blood/serum is abnormal, i.e., above or below the physiological range (15-40 mg/dl), it may lead to conditions like renal failure, hepatic failure, nephritic syndrome, cachexia, urinary tract obstruction, dehydration, shock, burns, gastrointestinal disorders, etc. Various metal nanoparticles, conducting polymers, metal oxide thin films, etc. have been exploited as matrices to immobilize urease in fabricating urea biosensors. Amongst them, zinc oxide (ZnO), a semiconductor metal oxide with a wide band gap, is of immense interest as an efficient matrix in biosensors by virtue of its natural abundance, biocompatibility, good electron communication features, and high isoelectric point (9.5). In spite of being such an attractive candidate, ZnO does not possess a redox couple of its own, which necessitates the use of electroactive mediators for electron transfer between the enzyme and the electrode, thereby hindering the realization of an integrated and implantable biosensor. In the present work, an effort has been made to fabricate a matrix based on a ZnO-CuO composite prepared by the pulsed laser deposition (PLD) technique in order to incorporate redox properties into the ZnO matrix and to utilize it for reagentless biosensing applications. The prepared bioelectrode Urs/(ZnO-CuO)/ITO/glass exhibits high sensitivity (70 µA mM⁻¹ cm⁻²) for the detection of urea (5-200 mg/dl) with high stability (shelf life > 10 weeks) and good selectivity (interference < 4%). The enhanced sensing response obtained for the composite matrix is attributed to the efficient electron exchange between the ZnO-CuO matrix and the immobilized enzymes, and the subsequent fast transfer of the generated electrons to the electrode via the matrix. The response is encouraging for fabricating a reagentless urea biosensor based on a ZnO-CuO matrix. Keywords: biosensor, reagentless, urea, ZnO-CuO composite
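A sensitivity such as 70 µA mM⁻¹ cm⁻² is, in practice, the slope of the linear region of the calibration curve (steady-state current versus urea concentration) normalized by the electrode area. The sketch below shows that calculation on synthetic data; the currents and the assumed area are illustrative, not the study's measurements.

```python
# Extracting a biosensor sensitivity from a calibration curve: fit the linear
# region of current vs concentration and normalize by electrode area.
# All numbers are synthetic placeholders.
import numpy as np

conc_mM = np.array([1, 2, 4, 6, 8, 10], dtype=float)         # urea concentration
current_uA = np.array([7.2, 14.1, 27.8, 42.3, 55.9, 70.4])   # response, microamps
area_cm2 = 0.1                                               # electrode area (assumed)

slope, intercept = np.polyfit(conc_mM, current_uA, 1)        # microamps per mM
sensitivity = slope / area_cm2
print(f"sensitivity ~ {sensitivity:.0f} uA mM^-1 cm^-2")
```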
Procedia PDF Downloads 290
237 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms; they can collect, process, and store data on their own, and can apply complicated algorithms like localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform OpenBCI, is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine-learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbors (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, each EEG signal received in real time is translated from the time to the frequency domain using the Fast Fourier Transform (FFT), and the frequency bands in each signal are observed. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the selected features to predict emotion in the EEG data with the KNN technique, with arousal and valence datasets used to train the parameters of the KNN classifier. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. On the cutting edge of AI, EEG-based emotion identification can be employed in applications that can rapidly expand its research and industrial use. Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
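The processing chain the abstract outlines (FFT per epoch, band-wise power features, then a KNN classifier) can be condensed into a few lines, as sketched below. The sampling rate, band limits, and the synthetic signals and labels are stand-ins for the cEEGrid/OpenBCI data, not the system's actual configuration.

```python
# FFT band-power features followed by K-Nearest Neighbors classification,
# mirroring the pipeline described. Signals and labels are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250                                   # sampling rate, Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Mean spectral power in each EEG band for one 1-D epoch."""
    freqs = np.fft.rfftfreq(epoch.size, 1 / FS)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 2 * FS))   # forty 2-second epochs
labels = rng.integers(0, 2, 40)              # 0 = low, 1 = high arousal (synthetic)

X = np.array([band_powers(e) for e in epochs])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:30], labels[:30])
print("held-out accuracy:", knn.score(X[30:], labels[30:]))
```

On a Jetson-class device the same code runs locally, which is the point the abstract makes about training and classifying on the edge rather than in the cloud.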
Procedia PDF Downloads 105
236 Modeling and Simulating Productivity Loss Due to Project Changes
Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier
Abstract:
The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications, which are potential causes of claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise, and project activities are often executed in fast-tracking in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail significant costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, despite several research projects on this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity that occurs when a project change takes place. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing productivity. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities: productivity declines about 25 percent faster in 30-job projects than in 120-job projects. The moment at which a change occurs also has a significant impact on productivity; the sooner the change occurs, the lower the productivity of the labor force. The availability of resources likewise impacts the productivity of a project when a change is implemented: the loss of productivity is higher when the amount of resources is restricted. Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation
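To make the mechanism concrete, the toy sketch below mimics the reactive-overtime logic the abstract describes: once a change order arrives, remaining activities run on overtime, and each additional overtime period erodes crew productivity. The decay rate, overtime factor, and productivity floor are invented for illustration and are not the paper's calibrated simulation model.

```python
# Toy reactive-overtime simulation: after a change order, activities run at an
# overtime rate whose benefit is eroded by cumulative productivity loss.
# All rates are illustrative placeholders.
def simulate(n_activities, change_at_week, base_weeks=1.0, decay=0.05):
    """Return (total duration in weeks, final productivity factor)."""
    productivity, elapsed = 1.0, 0.0
    for _ in range(n_activities):
        if elapsed >= change_at_week:                      # change order in effect
            productivity = max(0.6, productivity - decay)  # bounded cumulative loss
            rate = 1.5 * productivity                      # overtime, degraded output
        else:
            rate = 1.0                                     # normal working hours
        elapsed += base_weeks / rate
    return elapsed, productivity

for jobs in (30, 120):
    dur, prod = simulate(jobs, change_at_week=5)
    print(f"{jobs:4d} activities: {dur:6.1f} weeks, final productivity {prod:.2f}")
```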
Procedia PDF Downloads 238
235 An Exploration of Policy-related Documents on District Heating and Cooling in Flanders: A Slow and Bottom-up Process
Authors: Isaura Bonneux
Abstract:
District heating and cooling (DHC) is increasingly recognized as a viable path towards sustainable heating and cooling. While some countries like Sweden and Denmark have a longstanding tradition of DHC, Belgium is lagging behind. The northern part of Belgium, Flanders, had a total of only 95 heating networks in July 2023. Nevertheless, it is increasingly exploring its possibilities to enhance the scope of DHC. DHC is a complex energy system, requiring extensive collaboration between various stakeholders at various levels. It is therefore of interest to look more closely at policy-related documents at the Flemish (regional) level, as these policies set the scene for DHC development in the Flemish region. This kind of analysis has not been undertaken so far. This paper asks the following research question: “Who talks about DHC, and in which way and context is DHC discussed in Flemish policy-related documents?” To answer this question, the Overton policy database was used to search and retrieve relevant policy-related documents. Overton retrieves data from governments, think tanks, NGOs, and IGOs. In total, out of the 244 original results, 117 documents between 2009 and 2023 were analyzed. Every selected document included theme keywords, policymaking department(s), date, and document type. These elements were used for quantitative data description and visualization. Further, qualitative content analysis revealed patterns and main themes regarding DHC in Flanders. Four main conclusions can be drawn. First, it is obvious from the timeframe that DHC is a new topic in Flanders that still receives limited attention: 2014, 2016, and 2017 were the years with the most documents, yet even then the count was only 12 documents. In addition, many documents mentioned DHC without much depth and painted it as a future scenario with a lot of uncertainty around it. Most of the issuing government departments had a link to either energy or climate (e.g. the Flemish Environmental Agency) or policy (e.g. the Socio-Economic Council of Flanders). Second, DHC is mentioned most within an ‘Environment and Sustainability’ context, followed by ‘General Policy and Regulation’. This is intuitive, as DHC is perceived as a sustainable heating and cooling technique and this analysis comprises policy-related documents. Third, Flanders seems mostly interested in using waste or residual heat as a heating source for DHC; the harbors and waste incineration plants are identified as potential and promising supply sources. This approach tries to reconcile environmental and economic incentives. Last, local councils are assigned a central role, and the initiative is mostly taken by them. The policy documents and policy advice demonstrate that Flanders opts for a bottom-up organization. As DHC is very dependent on local conditions, this seems a logical step. Nevertheless, it can prevent smaller councils from creating DHC networks and slow down the systematic and fast implementation of DHC throughout Flanders. Keywords: district heating and cooling, Flanders, Overton database, policy analysis
Procedia PDF Downloads 44
234 Peculiarities of Snow Cover in Belarus
Authors: Aleh Meshyk, Anastasiya Vouchak
Abstract:
On average, snow covers Belarus for 75 days in the south-west and 125 days in the north-east. During the cold season the snowpack is often destroyed by thaws, especially at the beginning and end of winter. Over 50% of thawing days have a positive mean daily temperature, which results in complete snow melting; for instance, in December 10% of thaws occur at a mean daily temperature of 4 C. A stable snowpack lying for over a month forms in the north-east in the first decade of December but in the south-west only in the third decade of December. The cover disappears in March: in the north-east in the last decade, in the south-west in the first decade. This research takes into account that precipitation falling during the cold season can be not only liquid or solid but also of mixed type (about 10-15% a year). Another important feature of snow cover is its density. In Belarus, the density of freshly fallen snow ranges from 0.08-0.12 g/cm³ in the north-east to 0.12-0.17 g/cm³ in the south-west. Over time, snow settles under its own weight and after melting and refreezing. The average annual density of snow at the end of January is 0.23-0.28 g/cm³, in February 0.25-0.30 g/cm³, and in March 0.29-0.36 g/cm³. It can exceed 0.50 g/cm³ if the snow melts quickly, and melting snow saturated with water can reach a density of 0.80 g/cm³. The average maximum snow depth is 15-33 cm: the minimum is in Brest, the maximum in Lyntupy; the maximum registered snow depth ranges within 40-72 cm. The water content in the snowpack, as well as its depth and density, reaches its maximum in the second half of February to the beginning of March. The spatial distribution of the amount of liquid in snow follows the trend described above, i.e., it increases from south-west to north-east and on the highlands. The average annual maximum water content in snow ranges from 35 mm in the south-west to 80-100 mm in the north-east, and exceeds 80 mm on the central Belarusian highland. In certain years it exceeds the average annual values by a factor of 2-3. Moderate water content in snow (80-95 mm) is characteristic of the western highlands. The maximum water content in snow varies over the country from 107 mm (Brest) to 207 mm (Novogrudok). The maximum water content in snow also varies significantly between years, which is confirmed by a high variation coefficient (Cv): maxima (0.62-0.69) occur in the south and south-west of Belarus, minima (0.42-0.46) in central and north-eastern Belarus, where snow cover is more stable. Since 1987, most gauge stations in Belarus have observed a decreasing trend in the water content of snow, which this research confirms. The deepest snow cover forms on the highlands in central and north-eastern Belarus. The Novogrudok, Minsk, Volkovysk, and Sventayny highlands form a natural orographic barrier which prevents snow-bringing air masses from penetrating further into the country. The research is based on data from gauge stations in Belarus registered from 1944 to 2014. Keywords: density, depth, snow, water content in snow
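Two quantities recur throughout the survey: the water content of the snowpack (snow water equivalent, obtained from depth and density) and the variation coefficient Cv of its annual maxima. The sketch below shows both computations; the sample values are illustrative, not station records.

```python
# Snow water equivalent from depth and density, and the variation
# coefficient (Cv) of annual maxima. Sample values are illustrative.
import numpy as np

def swe_mm(depth_cm, density_g_cm3):
    """Water content of the snowpack in mm: 1 cm of snow at 0.1 g/cm^3 = 1 mm."""
    return depth_cm * density_g_cm3 * 10.0

print(swe_mm(33, 0.30))   # ~99 mm, in line with the north-eastern maxima cited

annual_max_swe = np.array([52, 88, 35, 120, 74, 61, 143, 40])  # mm, synthetic
cv = annual_max_swe.std(ddof=1) / annual_max_swe.mean()
print(f"Cv = {cv:.2f}")   # ~0.5, within the 0.42-0.69 range reported
```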
Procedia PDF Downloads 161