Search results for: database forensics
331 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars
Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic
Abstract:
Electric cars, with their fast innovation cycles and disruptive character, offer a high degree of freedom for innovative design for remanufacturing. Remanufacturing increases not only resource efficiency but also economic efficiency through a prolonged product lifetime. The reduced power-train wear of electric cars, combined with the high manufacturing costs of batteries, allows new business models and even second-life applications. Modular, intermountable battery packs enable the replacement of defective or outdated battery cells, allowing additional cost savings and a further prolongation of lifetime. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components and how to address their potential with elaborate designs. Based on a brief overview of established remanufacturing structures in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, the cost-benefit analysis indicates that a remanufactured battery can be produced more cost-efficiently than a new one. The ecological benefits were calculated on a broad database from different research projects focusing on the recycling, second use, and assembly of lithium ion batteries. The results of these calculations show a significant improvement from remanufacturing in all relevant factors, especially in resource consumption and global warming potential.
Suitable design guidelines for future remanufacturable lithium ion batteries, which consider modularity, interfaces, and disassembly, are used to illustrate the findings. For one guideline, potential cost improvements were calculated and upcoming challenges are pointed out.
Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing
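The cost-benefit comparison the abstract describes can be sketched as a simple cost model. This is an illustrative sketch only: the cost components and all figures below are hypothetical assumptions, not values from the paper.

```python
# Hypothetical cost-benefit sketch: remanufactured vs. newly produced battery
# pack. All cost figures are illustrative assumptions, not data from the study.

def pack_cost(material, labor, capital_share):
    """Total cost of one battery pack as the sum of its cost components."""
    return material + labor + capital_share

def remanufacturing_saving(new_costs, reman_costs):
    """Relative cost saving of remanufacturing compared to new production."""
    new_total = pack_cost(**new_costs)
    reman_total = pack_cost(**reman_costs)
    return (new_total - reman_total) / new_total

# Remanufacturing trades lower material cost for higher labor and capital cost.
new = {"material": 5000.0, "labor": 800.0, "capital_share": 700.0}
reman = {"material": 1500.0, "labor": 1200.0, "capital_share": 900.0}

saving = remanufacturing_saving(new, reman)
print(f"cost saving: {saving:.1%}")  # → cost saving: 44.6%
```

Under these assumed figures, remanufacturing is cheaper whenever the material savings outweigh the added labor and capital share, which mirrors the trade-off the paper's models evaluate.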
Procedia PDF Downloads 361
330 In Silico Screening, Identification and Validation of Cryptosporidium hominis Hypothetical Protein and Virtual Screening of Inhibitors as Therapeutics
Authors: Arpit Kumar Shrivastava, Subrat Kumar, Rajani Kanta Mohapatra, Priyadarshi Soumyaranjan Sahu
Abstract:
Computational approaches to predict the structure, function, and other biological characteristics of proteins are becoming more common alternatives to traditional methods in drug discovery. Cryptosporidiosis is a major zoonotic diarrheal disease, particularly in children, caused primarily by Cryptosporidium hominis and Cryptosporidium parvum. Currently, there are no vaccines for cryptosporidiosis, and the recommended drugs are not effective. With the availability of the complete genome sequence of C. hominis, new targets have been recognized for the development of better and more effective drugs and/or vaccines. We identified a unique hypothetical epitopic protein in the C. hominis genome through BLASTP analysis. A 3D model of the hypothetical protein was generated using the I-TASSER server through a threading methodology. The quality of the model was validated through a Ramachandran plot by the PROCHECK server. Functional annotation of the hypothetical protein through the DALI server revealed structural similarity with human transportin 3. Phylogenetic analysis also showed that the C. hominis hypothetical protein (CUV04613) was most closely related to the human transportin 3 protein. The 3D protein model was further subjected to a virtual screening study with inhibitors from the Zinc Database using DOCK Blaster software. The docking study reported N-(3-chlorobenzyl) ethane-1,2-diamine as the best inhibitor in terms of docking score. Docking analysis elucidated that Leu 525, Ile 526, Glu 528, and Glu 529 are critical residues for ligand-receptor interactions. A molecular dynamics simulation was performed using GROMACS software over 10 ns to assess the reliability of the binding pose of the inhibitor-protein complex. Trajectories were analyzed at 2.5 ns intervals, in which H-bonds with Leu 525 and Gly 530 were significantly present. Furthermore, the antigenic determinants of the protein were determined with the help of DNAStar software.
Our findings show great potential to provide insights into the development of new drugs or vaccines for the control and prevention of cryptosporidiosis in humans and animals.
Keywords: cryptosporidium hominis, hypothetical protein, molecular docking, molecular dynamics simulation
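The hit-selection step of a virtual screen (choosing the best inhibitor by docking score) reduces to ranking candidates by score, with more negative scores indicating stronger predicted binding. The sketch below is illustrative only: the ligand names other than the reported top hit, and all scores, are hypothetical.

```python
# Hypothetical post-processing of virtual-screening output: rank ligands by
# docking score (more negative = stronger predicted binding). The scores and
# the two filler ligand names are illustrative, not results from the paper.

def rank_hits(scores):
    """Return ligand names sorted from best (lowest) to worst docking score."""
    return sorted(scores, key=scores.get)

docking_scores = {
    "N-(3-chlorobenzyl)ethane-1,2-diamine": -42.1,  # reported best hit
    "ligand_B": -37.5,                              # hypothetical
    "ligand_C": -30.2,                              # hypothetical
}
best = rank_hits(docking_scores)[0]
```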
Procedia PDF Downloads 365
329 Probabilistic Life Cycle Assessment of the Nano Membrane Toilet
Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel
Abstract:
Developing countries are confronted with great challenges related to domestic sanitation services in view of imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper provides a solution for sustainable sanitation with the development of an innovative toilet system, the Nano Membrane Toilet (NMT), which has been developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. This technology converts human faeces into energy through gasification and provides treated wastewater from urine through membrane filtration. In order to evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) was conducted in the SimaPro software employing the Ecoinvent v3.3 database. This study determined the factors contributing most to the environmental footprint of the NMT system. However, as sensitivity analysis identified certain critical operating parameters for the robustness of the LCA results, adopting a stochastic approach to the Life Cycle Inventory (LCI) will comprehensively capture the input data uncertainty and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, in combination with an artificial neural network (ANN) model, were conducted for the input parameters of raw material, produced electricity, NOx emissions, amount of ash, and transportation of fertilizer. This analysis provided the distributions and confidence intervals of the selected impact categories so that more credible conclusions can be drawn on the LCIA (Life Cycle Impact Assessment) profile of the NMT system.
Finally, this study also yields essential insights into the methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to a high level of input data uncertainty.
Keywords: sanitation systems, nano-membrane toilet, LCA, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network
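The stochastic LCI idea above can be sketched in a few lines: uncertain inventory inputs are sampled repeatedly, each sample is propagated through an impact model, and the resulting distribution yields a mean and an empirical confidence interval. This is a minimal sketch under assumed distributions: the characterization factors and distribution parameters below are illustrative, not Ecoinvent data, and a trivially linear model stands in for the paper's ANN surrogate.

```python
# Minimal Monte Carlo sketch of stochastic LCI uncertainty propagation.
# Characterization factors and input distributions are illustrative assumptions.
import random
import statistics

random.seed(42)

# Illustrative characterization factors (impact per unit of each input).
FACTORS = {"electricity_kwh": 0.5, "nox_kg": 2.7, "ash_kg": 0.1}

def sample_inventory():
    """Draw one random realization of the uncertain inventory inputs."""
    return {
        "electricity_kwh": random.gauss(10.0, 1.0),
        "nox_kg": random.gauss(0.2, 0.05),
        "ash_kg": random.gauss(1.5, 0.2),
    }

def impact(inventory):
    """Linear impact model standing in for the ANN surrogate."""
    return sum(FACTORS[k] * v for k, v in inventory.items())

runs = sorted(impact(sample_inventory()) for _ in range(10_000))
mean = statistics.fmean(runs)
lo, hi = runs[250], runs[9_750]  # empirical 95% interval
print(f"impact score: {mean:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The same loop structure applies however complex the impact model is; only `impact()` changes.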
Procedia PDF Downloads 226
328 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over recent decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as the database to capture the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model performed better at estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
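The two-stage pipeline above (convolutional feature extraction followed by a regression head) can be illustrated at toy scale. This is a hedged sketch: a 1D convolution over a synthetic "radiance profile" stands in for the CNN over GOCI imagery, a single linear layer stands in for the ANN, and the kernel, weights, and data are all invented for illustration.

```python
# Toy sketch of the CNN-then-ANN idea: a convolution extracts local features
# from per-pixel radiance values, and a linear "ANN head" maps features to an
# algae concentration. All data and weights are synthetic assumptions.

def convolve(signal, kernel):
    """Valid-mode 1D convolution used as a stand-in feature extractor."""
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def predict(features, weights, bias):
    """Linear readout standing in for the trained ANN regression head."""
    return bias + sum(w * f for w, f in zip(weights, features))

# Synthetic "radiance profile" and a gradient-detecting kernel.
radiance = [0.1, 0.1, 0.4, 0.9, 0.9, 0.8]
features = convolve(radiance, [-1.0, 0.0, 1.0])   # ≈ [0.3, 0.8, 0.5, -0.1]

concentration = predict(features, weights=[2.0, 1.0, 0.5, 0.25], bias=0.3)
```

In the real system, both the kernel and the readout weights are learned jointly by backpropagation rather than fixed by hand.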
Procedia PDF Downloads 186
327 The Impact of the Application of Blockchain Technology in Accounting and Auditing
Authors: Yusuf Adebayo Oduwole
Abstract:
The main objective of this paper is to evaluate blockchain technology's potential effects on the accounting and auditing fields. It also adds to the existing body of work by examining how these practices alter technological concerns, including cryptocurrency accounting, regulation, governance, accounting practices, and technical challenges. Examples of this advancement include the growth of the blockchain concept and its application in accounting. This technology is considered one of the digital revolutions that could disrupt the world and civilization, as it can transfer large volumes of virtual currencies, such as cryptocurrencies, without the need for a trusted third party. The basis for this research is a systematic review of articles using VOSviewer to display and reflect on the bibliometric information of the articles accessible in the Scopus database. Also, as the practice of using blockchain technology in accounting and auditing is still in its infancy, it may be useful to carry out a more thorough analysis of any implications for accounting and auditing regarding aspects of governance, regulation, and cryptocurrency that have not yet been addressed to any significant extent. The main findings on the relationship between blockchain and accounting show that the application of smart contracts, and of concepts such as triple-entry accounting, has increased the quality of accounting records as well as the reliability of the information available. This results in fewer cyclical assignments, less need for reconciliation, and real-time accounting, among other benefits. To integrate blockchain through a computer system, one must therefore continuously learn and stay current when using blockchain-integrated accounting software, including learning how cryptocurrencies are accounted for and regulated. This study presents three original contributions.
First, to offer a transparent view of the state of previous relevant studies and research works in accounting and auditing that focus on blockchain, it uses bibliometric visibility analysis and a Scopus narrative analysis. Second, it highlights legislative, governance, and ethical concerns, including education, around the use of blockchain in accounting and auditing. Lastly, it examines the impact of blockchain technologies on the accounting recognition of cryptocurrencies. Users of the technology should, therefore, take their time to learn how it works and keep abreast of new developments. In addition, the accounting industry should integrate blockchain certification and practice, most likely offline or as part of university education for those intending to become auditors or accountants.
Keywords: blockchain, crypto assets, governance, regulation, smart contracts
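The "triple-entry" idea the abstract mentions can be made concrete with a small model: alongside the conventional debit and credit entries, each transaction carries a shared, hash-linked receipt, so any later alteration of a recorded entry is detectable. This is an illustrative model only, not a production blockchain or any specific accounting product.

```python
# Hedged sketch of a hash-linked "triple-entry" ledger: each record chains the
# hash of the previous record, making past entries tamper-evident.
import hashlib
import json

def receipt_hash(prev_hash, entry):
    """Hash of an entry chained to the previous receipt."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def append_entry(ledger, entry):
    """Append a double-entry record plus its chained receipt."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    ledger.append({"entry": entry, "hash": receipt_hash(prev, entry)})

def verify(ledger):
    """True iff no recorded entry has been altered after the fact."""
    prev = "genesis"
    for row in ledger:
        if row["hash"] != receipt_hash(prev, row["entry"]):
            return False
        prev = row["hash"]
    return True

ledger = []
append_entry(ledger, {"debit": "cash", "credit": "sales", "amount": 100})
append_entry(ledger, {"debit": "inventory", "credit": "cash", "amount": 40})
assert verify(ledger)
ledger[0]["entry"]["amount"] = 999   # tampering breaks the chain
assert not verify(ledger)
```

This is why such records need "less reconciliation": both parties hold the same chained receipts, and integrity can be checked mechanically rather than by matching books.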
Procedia PDF Downloads 30
326 Web Map Service for Fragmentary Rockfall Inventory
Authors: M. Amparo Nunez-Andres, Nieves Lantada
Abstract:
Rockfalls are one of the most harmful geological risks. They cause both economic losses, through damage to buildings and infrastructure, and personal injuries. Therefore, in order to estimate the risk to exposed elements, it is necessary to understand the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Iberian Peninsula and the island of Mallorca. These inventories contain general information about the events and, importantly, detailed information about fragmentation. Specifically, the IBSD (In-situ Block Size Distribution) is obtained by photogrammetry from drone or TLS (Terrestrial Laser Scanner) surveys, and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit, measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Throughout the process, open-source software has been used: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information. We used the INSPIRE data specifications for natural risks, adding specific and detailed data about the fragment size distribution. The next step was to develop a WMS with GeoServer. A preliminary phase was the creation of several views in PostGIS to show the information at different visualization scales and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided.
At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. The queries at this level offer greater detail about the movement. Finally, the third level shows all elements (deposit, source, and blocks) at their real size, where possible, and in their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by levels, using JavaScript libraries such as OpenLayers.
Keywords: geological risk, web mapping, WMS, rockfalls
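A WMS client such as the OpenLayers viewer described above ultimately issues standard OGC GetMap requests against the GeoServer endpoint. The sketch below builds such a request URL; the host, layer name, and bounding box are hypothetical placeholders, not the project's actual service details.

```python
# Illustrative construction of an OGC WMS 1.3.0 GetMap request, as a map
# client would send to a GeoServer instance. Host/layer/bbox are hypothetical.
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, size=(256, 256), crs="EPSG:4326"):
    """Build a GetMap URL for one map tile of the given layer."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),  # min/max coords in CRS order
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = getmap_url("https://example.org/geoserver/wms",
                 "rockdb:rockfall_events",
                 (40.0, 0.0, 43.0, 3.5))
```

Serving tiles through this standard interface is what makes the inventory consumable by any WMS-aware client, not just the project's own viewer.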
Procedia PDF Downloads 160
325 A Semi-Markov Chain-Based Model for the Prediction of Deterioration of Concrete Bridges in Quebec
Authors: Eslam Mohammed Abdelkader, Mohamed Marzouk, Tarek Zayed
Abstract:
Infrastructure systems are crucial to every aspect of life on Earth. Existing infrastructure is subject to degradation while demands grow for a better infrastructure system in response to high standards of safety, health, population growth, and environmental protection. Bridges play a crucial role in urban transportation networks. Moreover, they are subject to a high level of deterioration because of variable traffic loading, extreme weather conditions, cycles of freeze and thaw, etc. The development of Bridge Management Systems (BMSs) has become a fundamental imperative, especially in large transportation networks, due to the huge gap between the need for maintenance actions and the funds available to perform them. Deterioration models are a very important aspect of the effective use of BMSs. This paper presents a probabilistic time-based model that is capable of predicting the condition ratings of concrete bridge decks along their service life. The deterioration process of the concrete bridge decks is modeled as a semi-Markov process. One of the main challenges of Markov chain models is the construction of the transition probability matrix; the proposed model overcomes this issue by modeling the sojourn times with probability density functions. The sojourn times of each condition state are fitted to probability density functions based on goodness-of-fit tests such as the Kolmogorov-Smirnov, Anderson-Darling, and chi-squared tests. The parameters of the probability density functions are obtained using maximum likelihood estimation (MLE). The condition ratings obtained from the Ministry of Transportation in Quebec (MTQ) are utilized as the database from which the deterioration model is constructed.
Finally, a comparison is conducted between the Markov chain and semi-Markov chain models to select the most suitable prediction model.
Keywords: bridge management system, bridge decks, deterioration model, semi-Markov chain, sojourn times, maximum likelihood estimation
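The semi-Markov mechanism above can be sketched in a few lines: condition states degrade sequentially, and the time spent in each state (the sojourn time) is drawn from a fitted distribution rather than implied by a one-step transition matrix. The sketch below assumes Weibull sojourn times with illustrative parameters, not ones estimated from the MTQ database.

```python
# Minimal semi-Markov deterioration sketch: sequential condition states with
# Weibull-distributed sojourn times. All parameters are illustrative.
import random

random.seed(1)

# Assumed (shape, scale) of the Weibull sojourn-time distribution per state;
# state 1 is the best condition, state 4 is the worst (absorbing).
SOJOURN = {1: (2.0, 12.0), 2: (2.0, 9.0), 3: (1.5, 6.0)}

def condition_at(age):
    """Condition state of one simulated bridge deck at a given age in years."""
    elapsed, state = 0.0, 1
    while state in SOJOURN:
        shape, scale = SOJOURN[state]
        # random.weibullvariate(alpha, beta) takes scale first, then shape.
        elapsed += random.weibullvariate(scale, shape)
        if elapsed > age:
            return state
        state += 1
    return state

# Empirical distribution of condition states across many decks at age 20.
states = [condition_at(20.0) for _ in range(5_000)]
shares = {s: states.count(s) / len(states) for s in sorted(set(states))}
```

In the paper's setting, the `SOJOURN` parameters would come from MLE fits validated by the goodness-of-fit tests listed above.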
Procedia PDF Downloads 216
324 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size.
Findings show that model performance is affected by (i) the quality of the filtered data, (ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and (iii) a species 'quality profile' resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
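The three evaluation metrics named above (AUC, sensitivity, specificity) can be computed directly from predicted suitability scores. The sketch below uses synthetic labels and scores; in the study itself these would come from Maxent predictions on presence/absence data.

```python
# Stand-alone sketch of the evaluation metrics: rank-based AUC plus
# sensitivity/specificity at a threshold. Labels and scores are synthetic.

def auc(labels, scores):
    """AUC as the probability a random presence outscores a random absence."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity at a fixed decision threshold."""
    tp = sum(l == 1 and s >= threshold for l, s in zip(labels, scores))
    tn = sum(l == 0 and s < threshold for l, s in zip(labels, scores))
    return tp / labels.count(1), tn / labels.count(0)

labels = [1, 1, 1, 0, 0, 0]              # 1 = presence, 0 = absence
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]  # model-predicted suitability
model_auc = auc(labels, scores)          # 8/9: one pair is mis-ranked
sensitivity, specificity = sens_spec(labels, scores, 0.5)
```

Comparing these metrics with and without a data quality filter, at matched sample sizes, is exactly the experiment the abstract describes.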
Procedia PDF Downloads 205
323 Optimization of Fermentation Conditions for Extracellular Production of the Oncolytic Enzyme, L-Asparaginase, by New Subsp. Streptomyces Rochei Subsp. Chromatogenes NEAE-K Using Response Surface Methodology under Solid State Fermentation
Authors: Noura El-Ahmady El-Naggar
Abstract:
L-asparaginase is an important therapeutic enzyme used in combination with other drugs in the treatment of acute lymphoblastic leukemia in children. An L-asparaginase-producing actinomycete strain, NEAE-K, was isolated from a soil sample and identified, on the basis of morphological, cultural, physiological, and biochemical properties together with its 16S rDNA sequence, as the new subspecies Streptomyces rochei subsp. chromatogenes NEAE-K; the sequencing product (1532 bp) was deposited in the GenBank database under accession number KJ200343. The study was conducted to screen the parameters affecting the production of L-asparaginase by Streptomyces rochei subsp. chromatogenes NEAE-K under solid state fermentation using a Plackett-Burman experimental design. Sixteen independent variables (incubation time, moisture content, inoculum size, temperature, pH, soybean meal + wheat bran, dextrose, fructose, L-asparagine, yeast extract, KNO3, K2HPO4, MgSO4.7H2O, NaCl, FeSO4.7H2O, and CaCl2) and three dummy variables were screened in a Plackett-Burman design of 20 trials. The most significant independent variables affecting enzyme production (dextrose, L-asparagine, and K2HPO4) were further optimized by a central composite design. As a result, the following medium is optimal for producing extracellular L-asparaginase by Streptomyces rochei subsp. chromatogenes NEAE-K under solid state fermentation: (g/L) soybean meal + wheat bran 15, dextrose 3, fructose 4, L-asparagine 8, yeast extract 2, KNO3 1, K2HPO4 2, MgSO4.7H2O 0.5, NaCl 0.1, FeSO4.7H2O 0.02, CaCl2 0.01; incubation time 7 days, moisture content 50%, inoculum size 3 mL, temperature 30°C, pH 8.5.
Keywords: Streptomyces rochei subsp. chromatogenes NEAE-K, 16S rRNA, identification, solid state fermentation, L-asparaginase production, Plackett-Burman design, central composite design
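A Plackett-Burman screening design like the one above is built mechanically: a standard generator row is cyclically shifted, and an all-low run is appended. The sketch below constructs the classic 12-run design and checks its defining properties; the paper's actual design has 20 trials for 16 factors plus 3 dummies, and 12 runs are used here only to keep the example small.

```python
# Construction of the standard 12-run Plackett-Burman screening design by
# cyclic shifts of a published generator row plus one all-low run.

GENERATOR = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]  # standard PB12 first row

def plackett_burman(generator):
    """Build a Plackett-Burman design matrix (+1 = high level, -1 = low)."""
    n = len(generator)
    rows = [[generator[(j - i) % n] for j in range(n)] for i in range(n)]
    rows.append([-1] * n)  # final run: every factor at its low level
    return rows

design = plackett_burman(GENERATOR)

# Screening designs are balanced and orthogonal: each factor appears at each
# level equally often, and any two factor columns are uncorrelated.
cols = list(zip(*design))
for a in range(len(cols)):
    assert cols[a].count(1) == cols[a].count(-1)
    for b in range(a + 1, len(cols)):
        assert sum(x * y for x, y in zip(cols[a], cols[b])) == 0
```

Orthogonality is what lets 11 factors (or, in the paper, 16 factors plus dummies) be screened for main effects in so few runs.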
Procedia PDF Downloads 408
322 Use of Telehealth for Facilitating the Diagnostic Assessment of Autism Spectrum Disorder: A Scoping Review
Authors: Manahil Alfuraydan, Jodie Croxall, Lisa Hurt, Mike Kerr, Sinead Brophy
Abstract:
Autism Spectrum Disorder (ASD) is a developmental condition characterised by impairment in social communication and social interaction and by a repetitive or restricted pattern of interests, behaviour, and activity. There is a significant delay between seeking help and a confirmed diagnosis of ASD. This may delay receipt of early intervention services, which are critical for positive outcomes, and the long wait times also cause stress for individuals and their families. Telehealth potentially offers a way of improving the diagnostic pathway for ASD. This review of the literature aims to examine which telehealth approaches have been used in the diagnosis and assessment of autism in children and adults, whether they are feasible and acceptable, and how they compare with face-to-face diagnosis and assessment methods. A comprehensive search was conducted of the following databases: MEDLINE, CINAHL Plus with Full Text, Business Source Complete, Web of Science, Scopus, and PsycINFO, and of trial and systematic review databases including the Cochrane Library, Health Technology Assessment, Database of Abstracts of Reviews of Effects, and NHS Economic Evaluation Database, combining the terms autism and telehealth, from 2000 to 2018. A total of 10 studies were identified for inclusion in the review. The review found two methods of using telehealth: (a) video conferencing, enabling teams in different areas to consult with families and assess the child or adult in real time, and (b) video upload to a web portal, enabling clinical assessment of behaviours in the family home. The findings were positive, with high agreement between remote and face-to-face methods in terms of diagnosis, and high levels of satisfaction among families and clinicians.
This field is in its very early stages, and so only studies with small sample sizes were identified, but the findings suggest that telehealth methods, used in conjunction with existing methods, have the potential to improve the assessment and diagnosis of autism, especially for those with clear autism traits and for adults with autism. Larger randomised controlled trials of this technology are warranted.
Keywords: assessment, autism spectrum disorder, diagnosis, telehealth
Procedia PDF Downloads 130
321 The Diurnal and Seasonal Relationships of Pedestrian Injuries Secondary to Motor Vehicles in Young People
Authors: Amina Akhtar, Rory O'Connor
Abstract:
Introduction: Significant morbidity and mortality remain among young pedestrians hit by motor vehicles, even in the era of pedestrian crossings and speed limits. The aim of this study was to compare the incidence and injury severity of motor vehicle-related pedestrian trauma according to time of day and season in a young population, based on the supposition that injuries would be more prevalent during dusk and dawn and during autumn and winter. Methods: Data were retrieved from the National Trauma Audit and Research Network (TARN) database for patients between 10 and 25 years old who had been involved as pedestrians in motor vehicle accidents between 2015 and 2020. The incidence of injuries, their severity (using the Injury Severity Score [ISS]), hospital transfer time, and mortality were analysed according to the hours of daylight and darkness and the season. Results: The study identified a seasonal pattern: autumn was the predominant season, accounting for 34.9% of injuries, with a further 25.4% in winter, compared with 21.4% in spring and 18.3% in summer. However, visibility alone was not a sufficient explanatory factor, as 49.5% of injuries occurred during darkness and 50.5% during daylight. Importantly, the greatest injury rate (number of injuries per hour) occurred between 15:00 and 16:30, corresponding to school pick-up times. A further significant relationship between Injury Severity Score and daylight was demonstrated (p = 0.0124), with moderate injuries (ISS 9-14) occurring most commonly during the day (72.7%) and more severe injuries (ISS > 15) occurring most commonly at night (55.8%). Conclusion: We have identified a relationship between time of day and the frequency and severity of pedestrian trauma in young people. In addition, particular time groupings correspond to the greatest injury rate, suggesting that reduced visibility coupled with school pick-up times may play a significant role.
This could be addressed through a targeted public health approach to implementing change. We recommend targeted public health measures to improve road safety that focus on these times and that increase the visibility of children, combined with education for drivers.
Keywords: major trauma, paediatric trauma, road traffic accidents, diurnal pattern
Procedia PDF Downloads 102
320 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa
Abstract:
Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation-Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, whose underlying parameters are then estimated in the EM step, also improved the convergence rate and system performance. In addition, a relative index is used as a confidence measure in case of contradiction between the GMM and VQ identification results.
Simulation results on the voxforge.org speech database, carried out using MATLAB, highlight the efficacy of the proposed method compared to earlier work.
Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)
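The LBG algorithm used above to initialize the GMM can be sketched at toy scale: the codebook is grown by splitting each codeword into a perturbed pair and then refined with k-means-style updates. Scalar "features" stand in for multi-dimensional MFCC frame vectors here, and all data are synthetic.

```python
# Toy sketch of LBG vector quantization on scalar features. Real speaker-ID
# systems cluster multi-dimensional MFCC frames; scalars keep the sketch short.
import random

random.seed(0)

def lbg(samples, codebook_size, epsilon=0.01, iterations=20):
    """Grow a codebook by splitting, then refine with nearest-neighbour updates."""
    codebook = [sum(samples) / len(samples)]  # start from the global centroid
    while len(codebook) < codebook_size:
        # Split every codeword into a slightly perturbed pair.
        codebook = [c * (1 + epsilon) for c in codebook] + \
                   [c * (1 - epsilon) for c in codebook]
        for _ in range(iterations):
            # Assign each sample to its nearest codeword ...
            cells = [[] for _ in codebook]
            for s in samples:
                nearest = min(range(len(codebook)),
                              key=lambda i: abs(s - codebook[i]))
                cells[nearest].append(s)
            # ... and move each codeword to the centroid of its cell.
            codebook = [sum(cell) / len(cell) if cell else codebook[i]
                        for i, cell in enumerate(cells)]
    return sorted(codebook)

# Two well-separated clusters of "speaker features" around 0 and 10.
samples = [random.gauss(0.0, 0.5) for _ in range(200)] + \
          [random.gauss(10.0, 0.5) for _ in range(200)]
codebook = lbg(samples, 2)  # codewords land near the two cluster centres
```

Using the resulting codewords as initial GMM means is what improved the EM convergence rate reported in the abstract.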
Procedia PDF Downloads 311
319 Exploratory Case Study: Judicial Discretion and Political Statements Transforming the Actions of the Commissioner for the South African Revenue Service
Authors: Werner Roux Uys
Abstract:
The Commissioner for the South African Revenue Service (SARS) holds a high position of trust in South African society, and a lack of trust by taxpayers in the Commissioner’s actions or conduct could compromise SARS’ management of public finances. Tax morality, which is implicit in the social contract between taxpayers and the state, includes distinct phenomena that can cause a breakdown if there is a perceived lack of action on the part of the Commissioner to ensure public finances are kept safe. To promote tax morality, the Commissioner must support the judiciary in the exercise of its discretion to punish fraudulent tax activities and corrupt tax practices. For several years, political meddling in the Commissioner’s actions and conduct has caused perceived abuse of power at SARS, and taxpayers believed their hard-earned income paid over to SARS would become fruitless and wasteful expenditure. The purpose of this article is to identify and analyse previous decisions of the South African judiciary regarding the Commissioner’s actions and conduct in tax matters, and to consider important political statements and newspaper bulletins relevant to this research. The study applies a qualitative research approach and an exploratory case study technique. Keywords were selected and entered into the LexisNexis electronic database to systematically identify applicable case law where the ratio decidendi of the court referred to the actions and/or conduct of the Commissioner. Specific real-life statements, including political statements and newspaper bulletins, were selected to support the topic at hand. The purpose of the study is to educate the public about the perceptions that have transformed taxpayers’ behaviour towards the Commissioner for SARS since South Africa’s fledgling constitutional democracy was inaugurated in 1994.
The study adds to the literature by identifying key characteristics or distinct phenomena regarding the actions and conduct of the Commissioner affecting taxpayers’ behaviour, including discretionary decision-making. From the findings, it emerged that SARS must abide by its (own) laws and that there is a need to educate not only South African taxpayers about tax morality, but also the public in general. Keywords: commissioner, SARS, action and conduct, judiciary, discretionary decision-making
Procedia PDF Downloads 69
318 Orthopedic Trauma in Newborn Babies
Authors: Joanna Maj, Awais Hussain, Lyndsey Vu, Catherine Roxas
Abstract:
Background: Bone injuries in babies are common conditions that arise during delivery. Fractures of the clavicle, humerus, femur, and skull are the most common neonatal bone injuries sustained during labor and delivery. During operative deliveries, overly zealous traction, ineffective delivery techniques, improper uterine incision, and inadequate relaxation of the uterus can lead to bone fractures in the newborn. Neonatal anatomy is unique. Just as children are not mini-adults, newborns are not mini-children. A newborn’s anatomy and physiology are significantly different from a pediatric patient's. In this paper, we describe common orthopedic trauma in newborn babies. We provide a comprehensive overview of the different types of bone injuries in newborns. We hypothesize that the rate of bone fractures sustained at birth is higher in cases of operative deliveries. Methods: Relevant literature was selected using the PubMed database. Search terms included orthopedic conditions in newborns, neonatal anatomy, and bone fractures in neonates during operative deliveries. Inclusion criteria included age, gender, race, type of bone injury, and progression of bone injury. Exclusion criteria were limited medical history in the cases reviewed and comorbidities. Results: This review finds that a clavicle fracture is the most common type of neonatal orthopedic injury sustained at birth in both operative and non-operative deliveries. We confirm the hypothesis that infants born via operative deliveries have a significantly higher rate of bone fractures than those born via non-cesarean section deliveries. Conclusion: Newborn babies born via operative deliveries have a higher rate of fractures of the clavicle, humerus, and femur. A clavicle fracture in newborns is most common during emergency operative deliveries in new mothers. 
We conclude that infants born via an operative delivery sustained more bone injuries than infants born via non-cesarean section deliveries. Keywords: clavicle fracture, humerus fracture, neonates, newborn orthopedics, orthopedic surgery, pediatrics, orthopedic trauma, orthopedic trauma during delivery, cesarean section, obstetrics, neonatal anatomy, neonatal fractures, operative deliveries, labor and delivery, bone injuries in neonates
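The rate comparison at the heart of this abstract's hypothesis (operative vs. non-operative deliveries) can be checked with a standard two-proportion z-test. The counts below are invented for illustration; the abstract does not report raw numbers:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-proportion z-test statistic using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: fractures among operative vs. non-operative deliveries
z = two_prop_ztest(30, 500, 18, 900)
print(f"z = {z:.2f}")  # |z| > 1.96 indicates significance at the 5% level
```

With these invented counts the fracture rates are 6% vs. 2%, giving a clearly significant z statistic; the study's actual significance would depend on its real counts.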
Procedia PDF Downloads 104
317 Transient Level in the Surge Chamber at the Robert-Bourassa Generating Station
Authors: Maryam Kamali Nezhad
Abstract:
The Robert-Bourassa development (LG-2), the first to be built on the Grande Rivière, comprises two sets of eight turbine-generator units each, in the East and West powerhouses. Each powerhouse has two tailrace tunnels with an average length of about 1178 m. The LG-2A powerhouse houses 6 turbine-generator units. The water is discharged through two tailrace tunnels with a length of about 1330 m. The objectives of this work, at RB (LG-2), are: 1) to establish a new maximum transient level in the surge chamber, 2) to define the new maximum equipment flow rate for the future turbine-generator units, and 3) to ensure safe access to various intervention locations in the surge chamber. The transient levels under normal operating conditions at the RB plant were determined in 2001 by the Hydraulics Unit of HQE using the "Chamber" software. This one-dimensional mass-oscillation calculation software is used to determine the variation of the water level in the equilibrium chamber located downstream of a power plant during load shedding of the power plant units; it can also be used in the case of an equilibrium chamber upstream of a power plant. The RB (LG-2) plant study is based on the theoretical nominal geometry of the chamber and the tailrace tunnels and on the flow-level relationship at the outlet of the galleries established during design. The software is used in such a way that the results have an acceptable margin of safety, especially with respect to the maximum transient level (e.g., resumption of flow at an inopportune time), to take into account the turbulent and three-dimensional aspects of the actual flow in the chamber. Note that the transient levels depend on the water levels in the river and in the equilibrium chambers at steady state. These data are established in the HQP CRP database and updated from time to time. 
The maximum transient levels in the RB-East and RB-West powerhouses’ surge chambers were revised based on the latest update (set 4) of in-river rating curves and steady-state surge chamber water levels. The results of the revision were also used to update the technical advice on the operating conditions for access to the aforementioned surge chambers, while considering revisions to the calculated water levels. Keywords: generating station, surge chamber, maximum transient level, hydroelectric power station, turbine-generator, reservoir
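The one-dimensional mass-oscillation calculation the abstract attributes to the "Chamber" software can be sketched numerically for the case of full load rejection. All geometry, loss, and flow parameters below are illustrative assumptions, not the actual RB (LG-2) data:

```python
# Explicit Euler integration of the 1-D mass-oscillation equations for a
# surge chamber after full load rejection (turbine flow drops to zero).
# All parameters are illustrative assumptions, not the actual LG-2 data.
g = 9.81      # gravity [m/s^2]
L = 1300.0    # tailrace tunnel length [m]
A_t = 80.0    # tunnel cross-section [m^2]
A_s = 500.0   # surge chamber cross-section [m^2]
c = 0.0008    # lumped head-loss coefficient [s^2/m]
Q0 = 400.0    # pre-rejection turbine flow [m^3/s]

v = Q0 / A_t                  # initial tunnel velocity [m/s]
z = -c * v * abs(v)           # steady-state chamber level vs. reservoir [m]
z_max, t, dt = z, 0.0, 0.05
while t < 600.0:              # simulate 10 minutes of oscillation
    dvdt = (g / L) * (-z - c * v * abs(v))  # tunnel momentum balance
    dzdt = (A_t / A_s) * v                  # chamber continuity (turbine flow = 0)
    v += dvdt * dt
    z += dzdt * dt
    z_max = max(z_max, z)
    t += dt
print(f"maximum transient level above reservoir: {z_max:.1f} m")
```

Explicit Euler is used only for brevity; a production calculation would also use the measured flow-level relationship at the gallery outlet rather than a single lumped loss coefficient.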
Procedia PDF Downloads 86
316 Application of Social Media for Promoting Library and Information Services: A Case Study of Library Science Professionals of India
Authors: Payel Saha
Abstract:
Social media play an important role in the dissemination of information in society. In the 21st century, most people have a smartphone and use different social media tools like Facebook, Twitter, Instagram, WhatsApp, Skype, etc. in day-to-day life. These are rapidly growing web-based tools that let anyone share thoughts, ideas, and knowledge globally using the internet. The study highlights the current use of social media tools for promoting library and information services by library and information professionals of India who are working in libraries. The study was conducted during November 2017. A structured questionnaire was prepared using Google Docs and shared via different mailing lists, sent to individual email IDs, and circulated through other social media tools. Only 90 responses were received from different states of India and were analyzed in MS-Excel. The data came from 17 states and 3 union territories of India; most of the respondents came from the states of Odisha (23), Himachal Pradesh (14), and Assam (10). The results revealed that of the 90 respondents, 37 were female and 53 male, and the majority of respondents (71) came from academic libraries, followed by special libraries (15), public libraries (3), and corporate libraries (1). The study indicates that, out of the 90 respondents, a majority (53) said that their library has a social media account, while 39 said their library does not. The study also found that Facebook, YouTube, Google+, LinkedIn, Twitter, and Instagram are used by the LIS professionals of India, and Facebook (86) was the most popular among these social media tools. Furthermore, respondents reported that they use social media tools for sharing photos of library events and programs (72), followed by tips for using different services (64), posting of new arrivals (56), database tutorials (35), sending brief updates to patrons (32), and announcing library holidays (22). 
It was also reported by respondents that they share information about scholarships and training programs and market library events. The study furthermore identified lack of time as the major problem in using social media (53 respondents), followed by low internet speed (35) and too many social media tools to learn (17); 3 respondents reported no problem in using social media tools. The results also revealed that the majority of respondents use social media tools on a daily basis (71), followed by weekly (16), monthly (1), and other (2). In summary, this study is expected to be useful in further promoting social media for the dissemination of library and information services to the general public. Keywords: application of social media, India, promoting library services, library professionals
Procedia PDF Downloads 165
315 Research on Quality Assurance in African Higher Education: A Bibliometric Mapping from 1999 to 2019
Authors: Luís M. João, Patrício Langa
Abstract:
The article reviews the literature on quality assurance (QA) in African higher education studies (HES) conducted through a bibliometric mapping of published papers between 1999 and 2019. Specifically, the article highlights the nuances of knowledge production in four scientific databases: Scopus, Web of Science (WoS), African Journal Online (AJOL), and Google Scholar. The analysis included 531 papers, of which 127 are from Scopus, 30 from Web of Science, 85 from African Journal Online, and 259 from Google Scholar. In essence, these papers were written by 284 authors from 231 institutions and 69 different countries (i.e., 54 in Africa and 15 outside Africa). The results map the existing knowledge production in the field. This analysis allows readers to understand the growth and development of the field during the two-decade period, identify key contributors, and observe potential trends or gaps in the research. The paper employs bibliometric mapping as its primary analytical lens. By utilizing this method, the study quantitatively assesses the publications related to QA in African HES, helping to identify patterns, collaboration networks, and disparities in research output. The bibliometric approach allows for a systematic and objective analysis of large datasets, offering a comprehensive view of knowledge production in the field. Furthermore, the study highlights the lack of shared resources available to enhance quality in higher education institutions (HEIs) in Africa. This finding underscores the importance of promoting collaborative research efforts, knowledge exchange, and capacity building within the region to improve the overall quality of higher education. The paper argues that despite the growing quantity of QA research in African higher education, there are challenges related to citation impact and access to high-impact publication avenues for African researchers. 
It emphasises the need to promote collaborative research and resource-sharing to enhance the quality of HEIs in Africa. The analytical lenses of bibliometric mapping and the examination of publication players' scenarios contribute to a comprehensive understanding of the field and its implications for African higher education. Keywords: Africa, bibliometric research, higher education studies, quality assurance, scientific database, systematic review
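Merging records from several databases, as this bibliometric mapping does, typically requires de-duplicating papers indexed in more than one source before counting. A minimal sketch with invented records:

```python
from collections import Counter

# Hypothetical records: (title, database, year) — invented for illustration
records = [
    ("Quality assurance in Ghanaian universities", "Scopus", 2015),
    ("Quality Assurance in Ghanaian Universities", "Google Scholar", 2015),  # duplicate
    ("QA frameworks in East Africa", "WoS", 2018),
    ("QA frameworks in East Africa", "AJOL", 2018),  # indexed in two databases
    ("Accreditation and QA in Nigeria", "Google Scholar", 2012),
]

def norm(title: str) -> str:
    """Normalize a title (case, whitespace) for cross-database de-duplication."""
    return " ".join(title.lower().split())

per_db = Counter(db for _, db, _ in records)       # raw retrieval counts per database
unique = {norm(t) for t, _, _ in records}          # unique papers after de-duplication
print(per_db)
print(f"{len(unique)} unique papers")
```

Real bibliometric pipelines also match on DOI and author lists, since titles alone are an unreliable key.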
Procedia PDF Downloads 44
314 Re-Victimization of Sex Trafficking Victims in Canada: Literature Review
Authors: Adrianna D. Hendricks
Abstract:
This paper examines the factors that contribute to the re-traumatization of victims of sex trafficking within the Canadian context. Sex trafficking occurring domestically in Canada is severely under-researched, stigmatized, and under-prosecuted, leading to the re-traumatization of victims by various levels of government. This is in part due to the Canadian criminal justice system unethically utilizing prostitution laws in cases of sex trafficking and partially due to the unaddressed stigmatization victims face within the justice system itself. Utilizing evidence from a current literature review, personal correspondence, and personal life experiences, this paper will demonstrate the need for victim involvement in policy reform. The current literature review was done through an academic database search using the terms: “Sex Trafficking, Exploitation, Canada”, with the limitation of articles written within the last 5 years and written within the Canadian context. Overall, from the results, only eight articles precisely matched the criteria. The current literature argues strongly and unanimously for more research and education of professionals who have close contact with high-risk populations (doctors, police officers, social workers, etc.) to protect both minors and adults from being sexually trafficked. Additionally, for women and girls who do not have Canadian citizenship, the fear of deportation becomes a barrier to disclosing exploitation experiences to professionals. There is a desperate need for more research done in tandem with survivors and victims to inform policymaking in a meaningful way. The researcher is a survivor of sex trafficking both as a youth and as an adult, giving the researcher a unique insight into the realities of the criminal justice system for victims of sex trafficking. There is a clear need for professionals in positions of power to be re-educated about the realities of sex-trafficking, and what it means for the victims. 
Congruent with the current research, the author calls for standardized professional training for people in healthcare, police officers, court officials, and victim services, with the additional layers of victim involvement in the creation of professional education training and victim involvement in research. Justice for victims/survivors can only be obtained if they have been consulted and believed. Without meaningful consultation with survivors, victims who are both minors and adults will continue to fall through the cracks in policy. Keywords: Canadian policy, re-traumatization, sex-trafficking, stigmatization
Procedia PDF Downloads 71
313 Maternal and Neonatal Outcome: Comparison between Adolescents and Adult Pregnancy at Selected Hospital, Hetauda, Nepal
Authors: Laxmi Paudyal, Indira Adhikari Poudel, Muna Bhattarai
Abstract:
Introduction: Numerous factors can affect how pregnancies and births turn out. One of these is adolescent pregnancy, a worldwide issue with known causes, harmful impacts on both the mother's and the child's health, and various negative social and economic consequences. Objective: The study was carried out to compare maternal and neonatal outcomes between adolescent and adult pregnancies. Methods: This retrospective hospital-based cohort study was conducted at Madan Bhandari Academy of Health Sciences, Hetauda Hospital, Makwanpur. The study population was pregnant women who delivered at the selected hospital within a 1-year study period from 2079 Shrawan (July 2022) to 2080 Ashad (June 2023). A total of 479 mothers aged 20-30 years and 53 mothers aged 15-19 years were study participants, selected using a simple random sampling (lottery) method. Data were collected from the hospital’s electronic database and the registers maintained at the maternity ward and neonatal ward. Result: The findings indicate that 6.51% of the 3273 mothers who gave birth in a single year were in the adolescent age range. When comparing the two groups of mothers, more adult mothers than teenage mothers skipped the complete antenatal checkup. Compared to adult mothers, adolescent mothers were more often underweight and had lower iron and folic acid supplement intake. Anaemia, UTI, and placental abnormalities during pregnancy were reported by a greater percentage of teenage mothers than adult mothers (p=0.032, p=0.025, and p=0.041, respectively). Compared to adult pregnancies, vaginal delivery and complicated delivery were both more common in teenage pregnancies (p=0.001 and p=0.012, respectively). Adolescent pregnancies were associated with higher rates of NICU admission (p=0.037), low birth weight (p=0.034), premature birth (p=0.001), and fetal deaths (p=0.024) than adult pregnancies. 
Conclusion: According to this study, there are some notable variations in obstetric and neonatal outcomes by maternal age. A considerable number of adverse effects on adolescent mothers were found both during pregnancy and after giving birth. Strategic planning to prevent pregnancy among adolescent females is recommended. Keywords: adolescent, antenatal, natal, postnatal, neonatal, outcome
Procedia PDF Downloads 4
312 Using Life Cycle Assessment in Potable Water Treatment Plant: A Colombian Case Study
Authors: Oscar Orlando Ortiz Rodriguez, Raquel A. Villamizar-G, Alexander Araque
Abstract:
There is a total of 1027 municipal development plans in Colombia; 70% of municipalities have Potable Water Treatment Plants (PWTPs) in urban areas and 20% in rural areas. These PWTPs are typically supplied by surface waters (mainly rivers) and resort to gravity, pumping, and/or mixed systems to get the water from the catchment point, where the first stage of the potable water process takes place. Subsequently, a series of conventional methods are applied, consisting of a more or less standardized sequence of physicochemical and, sometimes, biological treatment processes, which vary depending on the quality of the water that enters the plant. These processes require energy and chemical supplies in order to guarantee a product adequate for human consumption. Therefore, in this paper, we applied the environmental methodology of Life Cycle Assessment (LCA) to evaluate the environmental loads of a potable water treatment plant (PWTP) located in northeastern Colombia, following the international guidelines of ISO 14040. The different stages of the potable water process, from the catchment point through pumping to the distribution network, were thoroughly assessed. The functional unit was defined as 1 m³ of water treated. The data were analyzed with the database Ecoinvent v.3.01 and modeled and processed in the software LCA-Data Manager. The results show that in the plant, the largest impact was caused by Clarifloc (82%), followed by chlorine gas (13%) and power consumption (4%). In this context, the company involved in the sustainability of the potable water service should ideally reduce these environmental loads during the potable water process. One strategy could be to reduce the use of Clarifloc by applying coadjuvants or other coagulant agents. The preservation of the hydric source that supplies the treatment plant also constitutes an important factor, since its deterioration confers unfavorable features on the water to be treated. 
In conclusion, treatment processes and techniques, bioclimatic conditions, and culturally driven consumption behavior vary from region to region. Furthermore, changes in treatment processes and techniques are likely to affect the environment during all stages of a plant’s operation cycle. Keywords: climate change, environmental impact, life cycle assessment, treated water
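A contribution analysis per functional unit (1 m³ of treated water) can be sketched as follows. The inventory amounts and characterization factors are invented, chosen only so that the shares land near the reported 82/13/4 split:

```python
# Contribution analysis per functional unit (1 m^3 of treated water).
# Inventory amounts and GWP characterization factors are hypothetical.
inventory = {              # input per m^3 of treated water
    "clarifloc_kg": 0.040,
    "chlorine_kg": 0.005,
    "electricity_kWh": 0.012,
}
gwp = {                    # kg CO2-eq per unit of input (assumed)
    "clarifloc_kg": 2.05,
    "chlorine_kg": 2.60,
    "electricity_kWh": 0.33,
}

impact = {k: inventory[k] * gwp[k] for k in inventory}   # kg CO2-eq per m^3
total = sum(impact.values())
share = {k: 100 * v / total for k, v in impact.items()}  # % contribution
for k, s in sorted(share.items(), key=lambda kv: -kv[1]):
    print(f"{k}: {s:.0f}% of {total:.3f} kg CO2-eq/m^3")
```

In a real LCA the characterization factors would come from the Ecoinvent database via the LCA software rather than being entered by hand.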
Procedia PDF Downloads 227
311 Identification and Molecular Profiling of A Family I Cystatin Homologue from Sebastes schlegeli Deciphering Its Putative Role in Host Immunity
Authors: Don Anushka Sandaruwan Elvitigala, P. D. S. U. Wickramasinghe, Jehee Lee
Abstract:
Cystatins are a large superfamily of proteins which act as reversible inhibitors of cysteine proteases. Papain proteases and cysteine cathepsins are the predominant substrates of cystatins. The cystatin superfamily can be further clustered into three groups: stefins, cystatins, and kininogens. Among them, stefins are also known as family 1 cystatins, which harbor cystatin Bs and cystatin As. In this study, a homologue of family 1 cystatins more closely related to cystatin Bs was identified from Korean black rockfish (Sebastes schlegeli) using a previously constructed cDNA (complementary deoxyribonucleic acid) database and designated RfCyt1. The full-length cDNA of RfCyt1 consisted of 573 bp, with a coding region of 294 bp. It comprised a 5´-untranslated region (UTR) of 55 bp and a 3´-UTR of 263 bp. The coding sequence encodes a polypeptide of 97 amino acids with a predicted molecular weight of 11 kDa and a theoretical isoelectric point of 6.3. RfCyt1 shared homology with other teleost and vertebrate species and contained the conserved features of the cystatin family signature, including a single cystatin-like domain, the cysteine protease inhibitory pentapeptide (QXVXG) consensus sequence, and two conserved neighboring N-terminal glycine (⁸GG⁹) residues. As expected, a phylogenetic reconstruction developed using the neighbor-joining method showed that RfCyt1 clusters with cystatin family 1 members, most closely with its teleostan orthologues. A SYBR Green qPCR (quantitative polymerase chain reaction) assay was performed to quantify RfCyt1 transcripts in different tissues of healthy and immune-stimulated fish. RfCyt1 was ubiquitously expressed in all tissue types of healthy animals, with gill and spleen being the highest. Temporal expression of RfCyt1 displayed significant up-regulation upon infection with Aeromonas salmonicida. Recombinantly expressed RfCyt1 showed concentration-dependent papain inhibitory activity. 
Collectively, these findings provide evidence for the protease-inhibitory and immune-relevant roles of RfCyt1 in Sebastes schlegeli. Keywords: Sebastes schlegeli, family 1 cystatin, immune stimulation, expressional modulation
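Relative transcript levels from a SYBR Green qPCR assay of this kind are commonly computed with the Livak 2^-ΔΔCt method, normalizing the target gene against a reference gene in treated vs. control samples. The Ct values below are invented for illustration; the abstract does not report them:

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Livak 2^-ddCt relative expression from cycle-threshold (Ct) values."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # dCt in treated sample
    d_ct_control = ct_target_control - ct_ref_control    # dCt in control sample
    return 2 ** -(d_ct_treated - d_ct_control)           # 2^-ddCt

# Hypothetical Ct values: target gene vs. a reference gene,
# post-infection (treated) vs. unstimulated control
fc = fold_change(22.1, 18.0, 24.6, 18.2)
print(f"fold change after immune stimulation: {fc:.2f}")
```

The method assumes near-100% amplification efficiency for both genes; efficiency-corrected models are used when that assumption fails.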
Procedia PDF Downloads 137
310 Demand-Side Financing for Thai Higher Education: A Reform Towards Sustainable Development
Authors: Daral Maesincee, Jompol Thongpaen
Abstract:
Thus far, most of the decisions made within the walls of Thai higher education (HE) institutions have primarily been supply-oriented. With the current supply-driven, itemized HE financing systems, the nation is struggling to systemically produce high-quality manpower that serves the market’s needs, often resulting in education mismatches and unemployment – particularly in science, technology, and innovation (STI)-related fields. With the COVID-19 pandemic challenges widening the education inequality (accessibility and quality) gap, HE becomes even more unobtainable for underprivileged students, permanently leaving some out of the system. Therefore, Thai HE needs a new financing system that produces the “right people” for the “right occupations” through the “right ways,” regardless of their socioeconomic backgrounds, and encourages the creation of non-degree courses to tackle these ongoing challenges. The “Demand-Side Financing for Thai Higher Education” policy aims to do so by offering a new paradigm of HE resource allocation via two main mechanisms: i) standardized formula-based unit-cost subsidizations that are specific to each study field and ii) student loan programs that respond to the “demand signals” from the labor market and students, in line with the country’s priorities. Through in-depth reviews, extensive studies, and consultations with various experts, education committees, and related agencies, i) the method of demand signal analysis is identified, ii) the unit-cost of each student in the sample study fields is approximated, iii) the method of budget analysis is formulated, iv) the interagency workflows are established, and v) a supporting information database is created to suggest the number of graduates each HE institution can potentially produce, the study fields and skillsets that are needed by the labor market, the employers’ satisfaction with the graduates, and each study field’s employment rates. 
By responding to the needs of all stakeholders, this policy is expected to steer Thai HE toward producing more STI-related manpower in order to uplift Thai people’s quality of life and enhance the nation’s global competitiveness. This policy is currently in the process of being considered by the National Education Transformation Committee and the Higher Education Commission. Keywords: demand-side financing, higher education resource, human capital, higher education
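The standardized formula-based unit-cost subsidization in mechanism (i) amounts to multiplying a field-specific unit cost by a demand-driven intake target and summing over fields. A minimal sketch with hypothetical field names, costs, and targets (none of these figures appear in the abstract):

```python
# Formula-based unit-cost subsidization: budget = unit cost x target intake,
# summed over study fields. All field names and numbers are hypothetical.
unit_cost = {      # subsidy per student per year (THB, assumed)
    "engineering": 120_000,
    "nursing": 95_000,
    "data_science": 110_000,
}
target_intake = {  # intake targets derived from labor-market demand signals
    "engineering": 400,
    "nursing": 250,
    "data_science": 300,
}

budget = {f: unit_cost[f] * target_intake[f] for f in unit_cost}
total_budget = sum(budget.values())
print(budget)
print(f"total subsidy: {total_budget:,} THB")
```

The policy's actual unit costs would come from the field-specific cost approximations described in step ii) of the abstract.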
Procedia PDF Downloads 203
309 Influence of Nanomaterials on the Properties of Shape Memory Polymeric Materials
Authors: Katielly Vianna Polkowski, Rodrigo Denizarte de Oliveira Polkowski, Cristiano Grings Herbert
Abstract:
The use of nanomaterials in the formulation of polymeric materials modifies their molecular structure, offering an infinite range of possibilities for the development of smart products, and is of great importance for science and contemporary industry. Shape memory polymers are generally lightweight, have high shape recovery capabilities, are easy to process, and have properties that can be adapted for a variety of applications. Shape memory materials are active materials that have attracted attention due to their superior damping properties when compared to conventional structural materials. The development of methodologies capable of preparing new materials which use graphene in their structure represents a technological innovation that transforms low-cost products into advanced materials with high added value. To improve the shape memory effect (SME) of polymeric materials, it is possible to use graphene in compositions containing a low concentration by mass of graphene nanoplatelets (GNP), graphene oxide (GO), or other functionalized graphene, via different mixing processes. As a result, there was an improvement in the SME regarding the increase in the values of maximum strain. In addition, the use of graphene contributes to obtaining nanocomposites with superior electrical properties and greater crystallinity, as well as resistance to material degradation. The methodology used in this research is a systematic review: a scientific investigation gathering relevant studies on the influence of nanomaterials on the properties of shape memory polymers, using literature databases as sources and study methods. In the present study, a systematic review was performed of all papers published from 2014 to 2022 regarding graphene and shape memory polymers through a search of three databases. 
This study allows for easy identification of the most relevant fields of study with respect to graphene and shape memory polymers, as well as the main gaps to be explored in the literature. The addition of graphene showed improvements in obtaining higher values of maximum deformation of the material, attributed to a possible slip between stacked or agglomerated nanostructures, as well as an increase in stiffness due to the increase in the degree of phase separation, which results in a greater number of physical cross-links, referring to the formation of short-range rigid domains. Keywords: graphene, shape memory, smart materials, polymers, nanomaterials
Procedia PDF Downloads 86
308 From Waste Recycling to Waste Prevention by Households: Could Eco-Feedback Strategies Fill the Gap?
Authors: I. Dangeard, S. Meineri, M. Dupré
Abstract:
A large body of research on energy consumption reveals that regular information on energy consumption produces a positive effect on behavior. The present research aims to test this feedback paradigm on waste management. A small-scale experiment on residual household waste was performed in a large French urban area, in partnership with local authorities, as part of the development of a larger-scale project. A two-step door-to-door recruitment scheme led to 85 households answering a questionnaire. Among them, 54 accepted to participate in a study on waste (second step). Participants were then randomly assigned to one of 3 experimental conditions: self-reported feedback on curbside waste, external feedback on waste weight based on information technologies, and no feedback for the control group. An additional control group was added, including households who were not requested to answer the questionnaire. Household residual waste was collected every week, and tags on curbside bins fed a database with the waste weight of each household. The feedback period lasted 14 weeks (February-May 2014). Quantitative data on waste weight were analysed, covering these 14 weeks and the 7 previous weeks. Households were then contacted by phone in order to confirm the quantitative results. Regarding the recruitment questionnaire, results revealed a high pro-environmental attitude on the NEP scale, a high level of recycling behavior, and a moderate level of source reduction behavior on the adapted 3R scale, but no statistical difference between the 3 experimental groups. Regarding the feedback manipulation paradigm, waste weight reveals important differences between households but does not show any statistical difference between the experimental conditions. Qualitative phone interviews confirm that recycling is a current practice among participants, whereas source reduction of waste is not and mainly appears as a producer problem of packaging limitation. 
We conclude that triggering waste prevention behaviors among recycling households involves long-term feedback and should promote benchmarking, in order to clearly set waste reduction as an objective to be managed through feedback figures. Keywords: eco-feedback, household waste, waste reduction, experimental research
Procedia PDF Downloads 395
307 Comparison of Hydrogen and Electrification Perspectives in Decarbonizing the Transport Sector
Authors: Matteo Nicoli, Gianvito Colucci, Valeria Di Cosmo, Daniele Lerede, Laura Savoldi
Abstract:
The transport sector is currently responsible for approximately 1/3 of greenhouse gas emissions in Europe. In the wider context of achieving carbon neutrality of the global energy system, different alternatives are available to decarbonize the transport sector. In particular, while electricity is already the most consumed energy commodity in rail transport, battery electric vehicles are one of the zero-emissions options on the market for road transportation. On the other hand, hydrogen-based fuel cell vehicles are available for road and non-road vehicles. The European Commission is strongly pushing toward the integration of hydrogen in the energy systems of European countries and its widespread adoption as an energy vector to achieve the Green Deal targets. Furthermore, the Italian government is defining hydrogen-related objectives with the publication of a dedicated Hydrogen Strategy. The adoption of energy system optimization models to study the possible penetration of alternative zero-emission transport technologies gives the opportunity to perform an overall analysis of the effects that the development of innovative technologies has on the entire energy system and on the supply side, devoted to the production of energy carriers such as hydrogen and electricity. Using an open-source modeling framework such as TEMOA, this work aims to compare the role of hydrogen and electric vehicles in the decarbonization of the transport sector. The analysis investigates the advantages and disadvantages of adopting the two options from the economic point of view (the costs associated with the two options) and the environmental one (looking at the emissions reduction perspectives). Moreover, an analysis of the profitability of the investments in hydrogen and electric vehicles will be performed. 
The study investigates the evolution of energy consumption and greenhouse gas emissions in different transportation modes (road, rail, navigation, and aviation) by detailed analysis of the full range of vehicles included in the techno-economic database used in the TEMOA model instance adopted for this work. The transparency of the analysis is guaranteed by the accessibility of the TEMOA models, based on an open-access source code and databases. Keywords: battery electric vehicles, decarbonization, energy system optimization models, fuel cell vehicles, hydrogen, open-source modeling, TEMOA, transport
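At its core, an energy system optimization model such as TEMOA makes a least-cost choice among technologies subject to service-demand and emission constraints. A toy version of that choice for a BEV/FCEV mix, with all costs, emission factors, and the cap invented for illustration:

```python
# Toy least-cost technology choice: satisfy 100 units of travel demand with a
# mix of BEV and FCEV vehicle-km under an emissions cap. Numbers are invented.
techs = {
    "BEV":  {"cost": 0.30, "co2": 0.02},   # cost and upstream CO2 per demand unit
    "FCEV": {"cost": 0.45, "co2": 0.01},
}
demand, co2_cap = 100.0, 1.8

best = None
for bev_share in range(0, 101):            # brute-force search over the mix, 1% steps
    x_bev = demand * bev_share / 100
    x_fcev = demand - x_bev
    co2 = x_bev * techs["BEV"]["co2"] + x_fcev * techs["FCEV"]["co2"]
    if co2 > co2_cap + 1e-9:               # small tolerance guards against round-off
        continue
    cost = x_bev * techs["BEV"]["cost"] + x_fcev * techs["FCEV"]["cost"]
    if best is None or cost < best[0]:
        best = (cost, bev_share)

cost, share = best
print(f"least cost {cost:.2f} with {share}% BEV / {100 - share}% FCEV")
```

With these numbers the cheaper BEV is used up to the point where the emissions cap binds; a real model like TEMOA solves the same trade-off as a linear program across all sectors, vintages, and years simultaneously.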
Procedia PDF Downloads 114
306 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand
Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott
Abstract:
Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important to quantify valuable carbon stocks and changes to these stocks in case of land use change. However, presently, Thailand lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation and smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: approximately 50-year-old secondary forest, 4-year-old fallow, and 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with a DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Densities database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. 
This study provides insights for reforestation management and can be used to prepare baseline data on Thailand’s carbon stocks for REDD+ and other carbon trading schemes, which may provide monetary incentives to stop illegal logging and deforestation for monoculture.
Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest
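The abstract above reports that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). A minimal sketch of how such an allometric model is typically fitted follows, using the common log-log form AGB = exp(a) · (WD · DBH² · H)^b; the harvest data are purely illustrative, and the exact model form and coefficients of the study are not reproduced here.

```python
import numpy as np

# Hypothetical destructive-harvest data (illustrative values only):
# DBH (cm), height (m), wood density (g/cm^3), oven-dry AGB (kg).
dbh = np.array([1.5, 3.2, 5.8, 9.4, 14.0, 21.5, 30.0])
h   = np.array([2.1, 4.0, 7.5, 11.2, 15.8, 19.5, 24.0])
wd  = np.array([0.45, 0.52, 0.58, 0.61, 0.55, 0.63, 0.66])
agb = np.array([0.4, 2.1, 11.0, 45.0, 160.0, 520.0, 1400.0])

# Fit ln(AGB) = a + b * ln(WD * DBH^2 * H), a form widely used in
# tropical-forest allometry; back-transforming gives the power model.
x = np.log(wd * dbh**2 * h)
y = np.log(agb)
b, a = np.polyfit(x, y, 1)

# Goodness of fit on the log scale.
resid = y - (a + b * x)
r2 = 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)
print(f"a = {a:.3f}, b = {b:.3f}, R^2 (log scale) = {r2:.3f}")

# Predicted biomass for a new tree (hypothetical inputs).
agb_pred = np.exp(a) * (0.6 * 12.0**2 * 14.0) ** b
```

Including WD in the predictor, as the study found, generally tightens the fit because biomass scales with stem volume times density; dropping WD from `x` and refitting shows the difference directly.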
Procedia PDF Downloads 285
305 Experiments to Study the Vapor Bubble Dynamics in Nucleate Pool Boiling
Authors: Parul Goel, Jyeshtharaj B. Joshi, Arun K. Nayak
Abstract:
Nucleate boiling is characterized by the nucleation, growth, and departure of the tiny individual vapor bubbles that originate in cavities or imperfections in the heating surface. It finds a wide range of applications, e.g., in heat exchangers and steam generators, core cooling in power reactors and rockets, and cooling of electronic circuits, owing to its highly efficient transfer of large heat fluxes over small temperature differences. Hence, it is important to be able to predict the rate of heat transfer and the safety limit heat flux (the critical heat flux; heat fluxes higher than this can damage the heating surface) applicable for any given system. A large number of experimental and analytical works exist in the literature, based on the idea that knowledge of the bubble dynamics on the microscopic scale can lead to an understanding of the full picture of boiling heat transfer. However, the existing data in the literature are scattered over various sets of conditions and are often in disagreement with each other. The correlations obtained from such data are also limited to the range of conditions they were established for, and no single correlation is applicable over a wide range of parameters. More recently, a number of researchers have been trying to remove empiricism from heat transfer models to arrive at more phenomenological models using extensive numerical simulations; these models require state-of-the-art experimental data for a wide range of conditions, first as input and later for validation. With this idea in mind, experiments with sub-cooled and saturated demineralized water have been carried out under atmospheric pressure to study the bubble dynamics (growth rate, departure size, and frequency) for nucleate pool boiling.
A number of heating elements have been used to study the dependence of vapor bubble dynamics on the heater surface finish and heater geometry, along with experimental conditions such as the degree of sub-cooling, superheat, and heat flux. An attempt has been made to compare the data obtained with the existing data and correlations in the literature, to generate an exhaustive database for pool boiling conditions.
Keywords: experiment, boiling, bubbles, bubble dynamics, pool boiling
Procedia PDF Downloads 303
304 Structure, Bioinformatics Analysis and Substrate Specificity of a 6-Phospho-β-Glucosidase Glycoside Hydrolase 1 Enzyme from Bacillus licheniformis
Authors: Wayde Veldman, Ozlem T. Bishop, Igor Polikarpov
Abstract:
In bacteria, mono- and disaccharides are phosphorylated during uptake into the cell via the widely used phosphoenolpyruvate (PEP)-dependent phosphotransferase transport system. As an initial step in the phosphorylated disaccharide metabolism pathway, certain glycoside hydrolase family 1 (GH1) enzymes play a crucial role in releasing phosphorylated and non-phosphorylated monosaccharides. However, the structural determinants of the specificity of these enzymes still need to be clarified. GH1 enzymes are known to have a wide array of functions; according to the CAZy database, there are twenty-one different enzymatic activities in the GH1 family. Here, the structure and substrate specificity of a GH1 enzyme from Bacillus licheniformis, hereafter known as BlBglH, were investigated. The sequence of BlBglH was compared to the sequences of other characterized GH1 enzymes using sequence alignment, sequence identity calculations, phylogenetic analysis, and motif discovery. Through these analyses, BlBglH was found to have sequence features characteristic of enzymes with 6-phospho-β-glucosidase activity. Additionally, motif and structure comparisons of the three most commonly studied GH1 enzyme activities revealed a loop shared among the different structures that consists of different sequence motifs; this loop is thought to guide specific substrates (depending on activity) towards the active site. To further affirm BlBglH enzyme activity, molecular docking and molecular dynamics simulations were performed. Docking was carried out using 6-phospho-β-glucosidase activity-positive (p-Nitrophenyl-beta-D-glucoside-6-phosphate) and activity-negative (p-Nitrophenyl-beta-D-galactoside-6-phosphate) control ligands, followed by 400 ns molecular dynamics simulations. The positive-control ligand maintained favourable interactions within the active site until the end of the simulation, whereas the negative-control ligand was observed exiting the enzyme at 287 ns.
Binding free energy calculations showed that the positive-control complex had a substantially more favourable binding energy than the negative-control complex. Taken together, the findings of this study suggest that the BlBglH enzyme possesses 6-phospho-β-glucosidase activity.
Keywords: 6-P-β-glucosidase, glycoside hydrolase 1, molecular dynamics, sequence analysis, substrate specificity
Procedia PDF Downloads 132
303 Evaluation of the Surveillance System for Rift Valley Fever in Ruminants in Mauritania, 2019
Authors: Mohamed El Kory Yacoub, Ahmed Bezeid El Mamy Beyatt, Djibril Barry, Yanogo Pauline, Nicolas Meda
Abstract:
Introduction: Rift Valley Fever is a zoonotic arbovirosis that severely affects ruminants, as well as humans. It causes abortions in pregnant females and deaths in young animals. The disease occurs during heavy rains that are followed by large numbers of mosquito vectors. The objective of this work is to evaluate the surveillance system for Rift Valley Fever. Methods: We conducted an evaluation of the Rift Valley Fever surveillance system. Data were collected through analysis of the national database of the Mauritanian Network of Animal Disease Epidemiological Surveillance at the Ministry of Rural Development, covering RVF cases notified from the whole national territory, and through questionnaires and interviews with all persons involved in RVF surveillance at the central level. The quality of the system was assessed by analyzing the quantitative attributes defined by the Centers for Disease Control and Prevention. Results: In 2019, 443 cases of RVF were notified by the surveillance system, of which 36 were positive. Among the notified cases of Rift Valley Fever, the 0 to 3-year-old age group of small ruminants was the most represented, with 49.21% of cases, followed by the 0 to 7-year-old age group of large ruminants with 33.33%; 11.11% of cases were older than seven years. The completeness of the data varied between 14.2% (age) and 100% (species). Most positive cases were recorded between October and November 2019 in seven different regions. Attribute analysis showed that 87% of the respondents were able to use the case definition well, and 78.8% said they were familiar with the reporting and feedback loop of the Rift Valley Fever data. 90.3% of the respondents found the system easy to use, and 95% said it was easy to transmit their data to the next level. Conclusions: The epidemiological surveillance system for Rift Valley Fever in Mauritania is simple and representative.
However, data quality, stability, and responsiveness are average: the diagnosis of the disease requires laboratory confirmation, and the average delay for this confirmation is long (13 days). Consequently, the incompleteness of the recorded data, the limited description of cases in terms of time, place, and animal, and the delays between the stages of the surveillance system can make prevention, early detection of epidemics, and the initiation of an adequate response difficult.
Keywords: evaluation, epidemiological surveillance system, rift valley fever, mauritania, ruminants
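The completeness figures quoted above (14.2% for age, 100% for species) are the share of notified cases with a non-missing value for each field. A minimal sketch of that metric follows; the case records are hypothetical, not the Mauritanian RVF data.

```python
# Hypothetical notified-case records; None marks a missing field.
cases = [
    {"species": "ovine",   "age": 2,    "region": "Trarza"},
    {"species": "bovine",  "age": None, "region": "Brakna"},
    {"species": "caprine", "age": 1,    "region": None},
    {"species": "bovine",  "age": None, "region": "Gorgol"},
]

def completeness(records, field):
    """Percentage of records where `field` is present and non-null."""
    filled = sum(1 for r in records if r.get(field) is not None)
    return 100.0 * filled / len(records)

for field in ("species", "age", "region"):
    print(f"{field}: {completeness(cases, field):.1f}%")
# species: 100.0%, age: 50.0%, region: 75.0%
```

Running this metric per field over the full case database is what exposes the gap between a well-reported field (species) and a poorly reported one (age).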
Procedia PDF Downloads 151
302 Identification of Lipo-Alkaloids and Fatty Acids in Aconitum carmichaelii Using Liquid Chromatography–Mass Spectrometry and Gas Chromatography–Mass Spectrometry
Authors: Ying Liang, Na Li
Abstract:
Lipo-alkaloids are a class of C19-norditerpenoid alkaloids found in Aconitum species, usually containing an aconitane skeleton and one or two fatty acid residues. Their structures are very similar to those of diester-type alkaloids, which are considered the main bioactive components in Aconitum carmichaelii. They have anti-inflammatory, anti-nociceptive, and anti-proliferative activities. So far, more than 200 lipo-alkaloids have been reported from plants, semisynthesis, and biotransformations. In our research, by combining ultra-high performance liquid chromatography-quadrupole-time of flight mass spectrometry (UHPLC-Q-TOF-MS) with an in-house database, 148 lipo-alkaloids were identified from A. carmichaelii, including 93 potential new compounds and 38 compounds with oxygenated fatty acid moieties. To our knowledge, this is the first report of oxygenated fatty acids as side chains in naturally occurring lipo-alkaloids. Considering that the fatty acid residues in lipo-alkaloids should come from the free acids in the plant, the fatty acids and their relationship with lipo-alkaloids were further investigated by GC-MS and LC-MS. Among the 17 fatty acids identified by GC-MS, 12 were detected as side chains of lipo-alkaloids; these accounted for about 1/3 of the total lipo-alkaloids, while the corresponding fatty acid residues made up less than 1/4 of the total fatty acid residues. In total, 37 fatty acids were determined by UHPLC-Q-TOF-MS, including 18 oxidized fatty acids identified for the first time from A. carmichaelii. These fatty acids were observed as side chains of lipo-alkaloids.
In addition, although over 140 lipo-alkaloids were identified, six lipo-alkaloids, 8-O-linoleoyl-14-benzoylmesaconine (1), 8-O-linoleoyl-14-benzoylaconine (2), 8-O-palmitoyl-14-benzoylmesaconine (3), 8-O-oleoyl-14-benzoylmesaconine (4), 8-O-pal-benzoylaconine (5), and 8-O-ole-benzoylaconine (6), were found to be the main components, accounting for over 90% of the total lipo-alkaloid content. Therefore, using these six components as standards, a UHPLC-Triple Quadrupole-MS (UHPLC-QQQ-MS) approach was established to investigate the influence of processing on the contents of lipo-alkaloids. Although it is commonly supposed that the contents of lipo-alkaloids increase after processing, our research showed no significant change before and after processing. Using the same methods, the lipo-alkaloids in the lateral roots of A. carmichaelii and the roots of A. kusnezoffii were determined and quantified. The contents of lipo-alkaloids in A. kusnezoffii were close to those of the parent roots of A. carmichaelii, while the lateral roots had fewer lipo-alkaloids than the parent roots. This work was supported by the Macao Science and Technology Development Fund (086/2013/A3 and 003/2016/A1).
Keywords: Aconitum carmichaelii, fatty acids, GC-MS, LC-MS, lipo-alkaloids
Procedia PDF Downloads 301