Search results for: Simon Ayorinde Okanlawon
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 205


55 Establishment of Diagnostic Reference Levels for Computed Tomography Examination at the University of Ghana Medical Centre

Authors: Shirazu Issahaku, Isaac Kwesi Acquah, Simon Mensah Amoh, George Nunoo

Abstract:

Introduction: Diagnostic Reference Levels are important indicators for monitoring and optimizing protocols and procedures in medical imaging across facilities and equipment. They help to evaluate whether, under routine clinical conditions, the median value obtained for a representative group of patients within an agreed range from a specified procedure is unusually high or low for that procedure. This study aimed to propose Diagnostic Reference Levels for the most common routine Computed Tomography examinations of the head, chest and abdominopelvic regions at the University of Ghana Medical Centre. Methods: The Diagnostic Reference Levels were determined from the most common routine examinations: head Computed Tomography with and without contrast, abdominopelvic Computed Tomography with and without contrast, and chest Computed Tomography without contrast. The study was based on two dose indicators: the volumetric Computed Tomography Dose Index and the Dose-Length Product. Results: The estimated median values for head Computed Tomography with contrast were 38.33 mGy (volumetric Computed Tomography Dose Index) and 829.35 mGy.cm (Dose-Length Product); without contrast, they were 38.90 mGy and 860.90 mGy.cm, respectively. For abdominopelvic Computed Tomography with contrast, the estimated values were 40.19 mGy and 2096.60 mGy.cm; without contrast, they were 14.65 mGy and 800.40 mGy.cm, respectively. Additionally, for the chest Computed Tomography examination, the estimated values were 12.75 mGy and 423.95 mGy.cm for the volumetric Computed Tomography Dose Index and Dose-Length Product, respectively. These median values represent the proposed Diagnostic Reference Levels for the head, chest, and abdominopelvic regions. 
Conclusions: The proposed Diagnostic Reference Levels are comparable to the values recommended by the International Atomic Energy Agency and the International Commission on Radiological Protection Publication 135, as well as other regional published data from the European Commission and regional national Diagnostic Reference Levels in Africa. These reference levels will serve as benchmarks to guide clinicians in optimizing radiation dose levels while ensuring accurate diagnostic image quality at the facility.
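The median-based derivation described above can be sketched in a few lines. The protocol names and per-patient dose values below are invented for illustration and are not the study's data:

```python
from statistics import median

# Hypothetical per-patient dose records for two protocols; a real survey
# uses a representative sample of standard-sized patients per protocol.
records = {
    "head_with_contrast": {
        "ctdi_vol_mGy": [36.1, 38.3, 40.2, 37.9, 39.0],
        "dlp_mGy_cm": [810.0, 829.4, 845.2, 820.1, 851.3],
    },
    "chest_without_contrast": {
        "ctdi_vol_mGy": [11.9, 12.8, 13.1, 12.4, 12.7],
        "dlp_mGy_cm": [401.2, 423.9, 440.5, 415.8, 430.0],
    },
}

def propose_drl(dose_values):
    """Facility-level DRLs are typically set at the median of the
    observed distribution of a dose indicator (CTDIvol or DLP)."""
    return round(median(dose_values), 2)

for protocol, indicators in records.items():
    for indicator, values in indicators.items():
        print(f"{protocol} {indicator}: {propose_drl(values)}")
```

Comparing each facility median against national or regional reference values then flags protocols whose typical doses are unusually high or low, which is the optimization use the abstract describes.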

Keywords: diagnostic reference levels, computed tomography dose index, computed tomography, radiation exposure, dose-length product, radiation protection

Procedia PDF Downloads 13
54 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)

Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin

Abstract:

The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, their high oxygen content makes bio-oils unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active/selective processes for biomass transformations to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked considerable interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which was then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂ and CHx fragments (x = 1, 2) in the TPD. The assignments for the experimental IR peaks were made using visualization of the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration. 
The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acetic acid adsorption energy was predicted for Ni(111), whereas a significant difference was found for acetate adsorption.

Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature-programmed desorption, density functional theory

Procedia PDF Downloads 91
53 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving on necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this data availability. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of a data science project. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase of whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. 
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and classification for inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
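The distinction between the two framings can be illustrated with a toy sketch: a regression output (predicted leakage volume flow) only becomes an inspection decision after thresholding, whereas a classifier predicts the pass/fail label directly. The threshold and all flow values below are invented for illustration and are not Bosch Rexroth data:

```python
# Hypothetical leakage volume-flow values (arbitrary units) and a pass
# threshold; in the regression framing the model predicts the flow and
# the inspection decision is derived by thresholding afterwards.
PASS_THRESHOLD = 5.0

actual_flow = [1.2, 4.8, 5.3, 0.9, 7.1, 4.9]
predicted_flow = [1.5, 4.5, 5.1, 1.1, 6.4, 5.2]  # from a regression model

def regression_to_decision(flows, threshold=PASS_THRESHOLD):
    """Thresholding turns a continuous regression output into the binary
    inspection decision that a classification model predicts directly."""
    return ["fail" if f > threshold else "pass" for f in flows]

true_labels = regression_to_decision(actual_flow)
pred_labels = regression_to_decision(predicted_flow)

# Decision-level accuracy of the thresholded regression predictions.
accuracy = sum(t == p for t, p in zip(true_labels, pred_labels)) / len(true_labels)
print(true_labels, pred_labels, accuracy)
```

Evaluating both framings on the same decision-level metric, as sketched here, is one way to make the business-understanding comparison concrete.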

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 133
52 Public Procurement and Innovation: A Municipal Approach

Authors: M. Moso-Diez, J. L. Moragues-Oregi, K. Simon-Elorz

Abstract:

Innovation procurement is designed to steer the development of solutions towards concrete public sector needs and, as a driver of innovation from the demand side (in public services as well as in market opportunities for companies), is horizontally emerging as a new policy instrument. In 2014, the new EU public procurement directives 2014/24/EC and 2014/25/EC reinforced the support for Public Procurement for Innovation, dedicating funding instruments that can be used across all areas supported by Horizon 2020 and targeting potential buyers of innovative solutions: groups of public procurers with similar needs. Under this programme, new policy adopters and networks emerge, aiming to embed innovation criteria into new procurement processes. As these initiatives are in progress, related research is scarce. We argue that Innovation Public Procurement can arise as an innovative policy instrument for public procurement in different policy domains, in spite of existing institutional and cultural barriers (legal guarantee versus innovation). The presentation combines insights from public procurement and supply chain management in a sustainability and innovation policy arena, as a means of providing an understanding of: (1) the circumstances that emerge; (2) the relationship between public and private actors; and (3) the emerging capacities in the definition of the agenda. The policy adopters are the contracting authorities, mainly at the municipal level, where they interact with the supply chain, interconnecting sustainability and climate measures with other policy priorities such as innovation and urban planning, and through the Competitive Dialogue procedure. We found that geography and territory affect both the level of the municipal budget (due to municipal income per capita) and its institutional competencies (due to demographic reasons). 
In spite of the relevance of institutional determinants for public procurement, other factors also play an important role, such as human factors as well as public policy and private intervention. The experience is a 'city project' (Bilbao) in the field of brownfield decontamination. Brownfield sites typically refer to abandoned or underused industrial and commercial properties (such as old process plants, mining sites, and landfills) that are available but contain low levels of environmental contaminants that may complicate the reuse or redevelopment of the land. This article concludes that Innovation Public Procurement in sustainability and climate issues should be further developed both as a policy instrument and as a policy research line that could enable further relevant changes in public procurement as well as in climate innovation.

Keywords: innovation, city projects, public policy, public procurement

Procedia PDF Downloads 296
51 Reducing Flood Risk through Value Capture and Risk Communication: A Case Study in Cocody-Abidjan

Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama

Abstract:

Abidjan city (Republic of Ivory Coast) is an emerging megacity and an urban coastal area where the number of reported floods is increasing rapidly due to climate change and unplanned urbanization. However, comprehensive disaster mitigation plans, policies, and financial resources are still lacking, as the population is unaware of the extent and location of the flood zones, leaving them unprepared to mitigate the damages. Considering the existing conditions, this paper aims to discuss an approach for flood risk reduction in Cocody Commune through a value capture strategy and flood risk communication. Using geospatial techniques and hydrological simulation, we start our study by delineating flood zones and depths under several return periods in the study area. Then, a field survey is conducted using a questionnaire in order to validate the flood maps, to estimate the flood risk, and to collect a sample of residents' opinions on how flood risk information disclosure could affect the values of property located inside and outside the flood zones. The results indicate that the study area is highly vulnerable to 5-year floods and greater, which can cause serious harm to human lives and to property, as demonstrated by the extent of the 5-year flood of 2014. Also, it is revealed that there is a high probability that the values of property located within flood zones could decline, and the values of surrounding property in the safe area could increase, when risk information disclosure commences. However, in order to raise public awareness of flood disasters and to prevent future housing promotion in high-risk prospective areas, flood risk information should be disseminated through the establishment of an early warning system. 
In order to reduce the effect of risk information disclosure and to protect the values of property within the high-risk zone, we propose that property tax increments in flood-free zones should be captured and utilized for infrastructure development and to maintain the early warning system that will benefit people living in flood-prone areas. Through this case study, it is shown that the combination of a value capture strategy and risk communication could be an effective tool to educate citizens and to invest in flood risk reduction in emerging countries.

Keywords: Cocody-Abidjan, flood, geospatial techniques, risk communication, value capture

Procedia PDF Downloads 259
50 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with proper treatment. Drug-resistant TB occurs when bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data to build the ML models. Samples from different countries were included in order to generalize across the large group of TB isolates from different regions of the world; this allows the model to learn different behaviors of the TB bacteria and makes the model robust. Three pieces of information extracted from the WGS data were considered for model training: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and the resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as two independent datasets, and the third piece of information was used as the class label for both. Five machine learning algorithms were considered for training the model: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. 
The models were trained on the datasets F1, F2, and F1F2 (the F1 and F2 datasets merged). Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the Gradient Boosting algorithm, and the outputs were combined into a single dataset, called the F1F2 ensemble dataset, on which models were then trained with the five algorithms. As the experiments show, the ensemble model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, to predict the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
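A minimal sketch of the ensemble (stacking) construction described above, assuming scikit-learn and substituting small synthetic binary variant matrices for the real F1/F2 WGS-derived datasets; the dimensions and the rule tying the phenotype to a few F2 columns are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two feature sets: F1 (all variants within
# candidate genes) and F2 (predetermined resistance-associated variants).
n = 300
F1 = rng.integers(0, 2, size=(n, 40))
F2 = rng.integers(0, 2, size=(n, 10))
# Resistance phenotype loosely tied to a few F2 columns (illustrative only).
y = (F2[:, :3].sum(axis=1) > 1).astype(int)

idx_train, idx_test = train_test_split(np.arange(n), random_state=0)

def base_model_scores(F):
    """Fit gradient boosting on one feature set and return its predicted
    resistance probabilities, used here as a single meta-feature."""
    gb = GradientBoostingClassifier(random_state=0).fit(F[idx_train], y[idx_train])
    return gb.predict_proba(F)[:, 1]

# The "F1F2 ensemble dataset": outputs of the two base models, stacked.
meta = np.column_stack([base_model_scores(F1), base_model_scores(F2)])

final = GradientBoostingClassifier(random_state=0).fit(meta[idx_train], y[idx_train])
acc = final.score(meta[idx_test], y[idx_test])
print("held-out accuracy:", acc)
```

In a fuller pipeline, the base-model outputs would be produced with out-of-fold predictions to avoid leaking training labels into the meta-features.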

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 36
49 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics about economic growth have little in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth, and what the arguments for and against growth-enhancing or degrowth policies are for each side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance to growth-enthusiasts and growth-skeptics alike. The shorter third part reviews the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance, such as markets, the rule of law and federalism, are an important part of economic growth. 
Part four delves into the more nuanced question of how a stagnant steady-state economy or even an economy that de-grows will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach by requiring direct governmental control, a contrasting bottom-up approach is advanced. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and an institutionalized discovery process for new solutions to the problem of ecological collective action – no matter whether you belong to the Simons or Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 170
48 Developing an Intervention Program to Promote Healthy Eating in a Catering System Based on Qualitative Research Results

Authors: O. Katz-Shufan, T. Simon-Tuval, L. Sabag, L. Granek, D. R. Shahar

Abstract:

Meals provided by catering systems are a common source of workers' nutrition and were found to contribute high amounts of calories and fat. Thus, eating catering food daily can lead to overweight and chronic diseases. On the other hand, the institutional dining room may be an ideal environment for the implementation of intervention programs that promote healthy eating. This may improve diners' lifestyles and reduce the prevalence of overweight, obesity and chronic diseases. The significance of this study is in developing an intervention program based on the diners' dietary habits, preferences and attitudes towards various intervention programs. In addition, a successful catering-based intervention program may have a significant effect simultaneously on a large group of diners, leading to improved nutrition, a healthier lifestyle, and disease prevention on a large scale. In order to develop the intervention program, we conducted a qualitative study. We interviewed 13 diners who eat regularly at catering systems, using a semi-structured interview. The interviews were recorded, transcribed and then analyzed by the thematic method, which identifies, analyzes and reports themes within the data. The interviews revealed several major themes, including the expectation of diners to be provided with healthy food choices; their request for nutrition-expert involvement in planning the meals; and the diners' feeling that there is a conflict between the sensory attractiveness of the food and its nutritional quality. In the context of catering-based intervention programs, the diners prefer scientific and clear messages focusing on labeling healthy dishes only, as opposed to labeling unhealthy dishes; they were also interested in a nutritional education program to accompany the intervention program. 
Based on these findings, we have developed an intervention program that includes changes in the food served, such as replacing several menu items and nutritionally improving some of the recipes, as well as environmental changes, such as changing the location of some food items presented on the buffet, placing positive nutritional labels on healthy dishes, and running an ongoing healthy nutrition campaign, all accompanied by a nutrition education program. The intervention program is currently being tested for its impact on health outcomes and its cost-effectiveness.

Keywords: catering system, food services, intervention, nutrition policy, public health, qualitative research

Procedia PDF Downloads 183
47 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques, including deep neural networks (DNNs), have been used to model language and dialogue for conversational agents to perform tasks such as giving technical support, and also for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby the human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some information on, e.g., the genre of the movie. It then learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis 'A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.', the original, true title is 'The Driver' and the one generated by the model is 'The Masquerade'. A human evaluation was conducted where the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: 'creativity', 'naturalness' and 'suitability'. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means of m=3.11 and m=3.12, respectively. 
There is room for improvement in these models, as they were rated significantly less 'natural' and 'suitable' when compared to the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audience's attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
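A minimal sketch of the seq2seq setup described above, assuming PyTorch; the toy vocabulary, dimensions, and the single (synopsis, title) training pair are invented for illustration and do not reflect the paper's actual architecture or data:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and one (description, title) pair; a real system trains
# on many movie synopses plus metadata such as genre.
vocab = {"<pad>": 0, "<sos>": 1, "<eos>": 2, "hitman": 3, "final": 4,
         "job": 5, "betrayal": 6, "the": 7, "masquerade": 8}
inv = {i: w for w, i in vocab.items()}

class Seq2Seq(nn.Module):
    """Minimal GRU encoder-decoder: the encoder reads the description and
    its final hidden state conditions the decoder that emits the title."""
    def __init__(self, v, d=32):
        super().__init__()
        self.emb = nn.Embedding(v, d)
        self.enc = nn.GRU(d, d, batch_first=True)
        self.dec = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, v)

    def forward(self, src, tgt):
        _, h = self.enc(self.emb(src))        # summarize the description
        dec_out, _ = self.dec(self.emb(tgt), h)
        return self.out(dec_out)              # per-step vocabulary logits

src = torch.tensor([[3, 4, 5, 6]])       # "hitman final job betrayal"
tgt_in = torch.tensor([[1, 7, 8]])       # <sos> the masquerade
tgt_out = torch.tensor([[7, 8, 2]])      # the masquerade <eos>

model = Seq2Seq(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # overfit the single pair to show the mechanics
    opt.zero_grad()
    loss = loss_fn(model(src, tgt_in).view(-1, len(vocab)), tgt_out.view(-1))
    loss.backward()
    opt.step()

pred = model(src, tgt_in).argmax(-1)[0].tolist()
print(" ".join(inv[i] for i in pred))
```

At inference time, a real system would decode step by step, feeding back its own predictions, rather than teacher-forcing on the gold title as done here for brevity.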

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 315
46 Evaluation of Dry Matter Yield of Panicum maximum Intercropped with Pigeonpea and Sesbania Sesban

Authors: Misheck Musokwa, Paramu Mafongoya, Simon Lorentz

Abstract:

Seasonal shortage of fodder during the dry season is a major constraint for smallholder livestock farmers in South Africa. To mitigate the shortage of fodder, legume trees can be intercropped with pastures, which diversifies the sources of feed and increases the amount of protein for grazing animals. The objective was to evaluate the dry matter yield of Panicum maximum and land productivity under different fodder production systems during the 2016/17-2017/18 seasons at Empangeni (28.6391° S, 31.9400° E). A randomized complete block design, replicated three times, was used; the treatments were sole Panicum maximum, Panicum maximum + Sesbania sesban, Panicum maximum + pigeonpea, sole Sesbania sesban, and sole pigeonpea. Three-month-old S. sesban seedlings were transplanted, whilst pigeonpea was direct-seeded at a spacing of 1 m x 1 m. P. maximum seeds were drilled at a rate of 7.5 kg/ha with an inter-row spacing of 0.25 m; in between the rows of trees, P. maximum seeds were drilled. The dry matter yield harvests were separated by six-month intervals. A 0.25 m² quadrat, randomly placed at 3 points in each plot, was used as the sampling area when harvesting P. maximum. There was a significant difference (P < 0.05) across the 3 harvests and in total dry matter. P. maximum had a higher dry matter yield than both intercrops at the first harvest and in total; the second and third harvests showed no significant difference from the pigeonpea intercrop. The results for the 3 harvests were in this order: P. maximum (541.2c, 1209.3b and 1557b) kg ha⁻¹ ≥ P. maximum + pigeonpea (157.2b, 926.7b and 1129b) kg ha⁻¹ > P. maximum + S. sesban (36.3a, 282a and 555a) kg ha⁻¹. Total accumulated dry matter yield: P. maximum (3307c kg ha⁻¹) > P. maximum + pigeonpea (2212 kg ha⁻¹) ≥ P. maximum + S. sesban (874 kg ha⁻¹). There was a significant difference (P < 0.05) in seed yield for the trees: pigeonpea (1240.3 kg ha⁻¹) ≥ pigeonpea + P. maximum (862.7 kg ha⁻¹) > S. sesban (391.9 kg ha⁻¹) ≥ S. sesban + P. maximum. 
The Land Equivalent Ratio (LER) was in the following order: P. maximum + pigeonpea (1.37) > P. maximum + S. sesban (0.84) > pigeonpea (0.59) ≥ S. sesban (0.57) > P. maximum (0.26). The results indicate that it is beneficial to intercrop P. maximum with pigeonpea because of the higher land productivity. Planting grass with pigeonpea was more beneficial than S. sesban with grass or sole cropping in terms of easing the shortage of arable land. P. maximum + pigeonpea saves a substantial share (37%) of land, which can subsequently be used for other crop production. Pigeonpea is recommended as an intercrop with P. maximum due to its higher LER and the combined production of livestock feed, human food, and firewood. Since Panicum grass is low in crude protein though high in carbohydrates, there is a need to intercrop it with legume trees. A farmer who buys concentrates can reduce costs by combining P. maximum with pigeonpea; this will provide a balanced diet at low cost.
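The reported LER of 1.37 for the P. maximum + pigeonpea system can be reproduced (to rounding) as the sum of the partial land equivalent ratios, using the grass dry matter and pigeonpea seed yields reported above; pairing these two particular yield components is our reading of the abstract, not an explicit statement in it:

```python
def land_equivalent_ratio(intercrop_yields, sole_yields):
    """LER = sum over component crops of (intercrop yield / sole yield).
    LER > 1 means the intercrop uses land more productively than growing
    each component as a sole crop on separate land."""
    return sum(i / s for i, s in zip(intercrop_yields, sole_yields))

# Yields (kg/ha) from the abstract: P. maximum total dry matter and
# pigeonpea seed, in the intercrop versus as sole crops.
ler = land_equivalent_ratio(
    intercrop_yields=[2212, 862.7],
    sole_yields=[3307, 1240.3],
)
print(round(ler, 2))  # ≈ 1.36, in line with the reported LER of 1.37
```

An LER of about 1.37 means monocultures would need roughly 37% more land to match the intercrop's combined output, which is the land saving the abstract cites.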

Keywords: fodder, livestock, productivity, smallholder farmers

Procedia PDF Downloads 138
45 Velma-ARC’s Rehabilitation of Repentant Cybercriminals in Nigeria

Authors: Umukoro Omonigho Simon, Ashaolu David ‘Diya, Aroyewun-Olaleye Temitope Folashade

Abstract:

The VELMA Action to Reduce Cybercrime (ARC) is an initiative, the first of its kind in Nigeria, designed to identify, rehabilitate and empower repentant cybercrime offenders, popularly known as 'yahoo boys' in Nigerian parlance. Velma ARC provides social inclusion boot camps with the goal of rehabilitating cybercriminals via psychotherapeutic interventions, improving their IT skills, and empowering them to make constructive contributions to society. This report highlights the psychological interventions provided for participants of the maiden edition of the Velma ARC boot camp and presents the outcomes of these interventions. The boot camp was set up on hotel premises booked solely for the one-month event. The participants were selected and invited via the Velma online recruitment portal, based on an objective double-blind selection process, from a pool of potential participants who signified interest via the registration portal. The participants were first taken through psychological profiling (personality, symptomology and psychopathology) before the individual and group sessions began. They were profiled using the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF), the latest version of its series. Individual psychotherapy sessions were conducted for all participants based on what was interpreted from their profiles. A focus group discussion was held later to discuss the movie 'Catch Me If You Can', directed by Steven Spielberg and featuring Leonardo DiCaprio and Tom Hanks. The movie was based on the true life story of Frank Abagnale, who was a notorious scammer and con artist in his youthful years. Emergent themes from the movie were discussed as psycho-educative parameters for the participants. 
The overall evaluation of outcomes from the VELMA ARC rehabilitation boot camp stemmed from a disaggregated assessment of observed changes, which is summarized in the final report of the clinical psychologist and was detailed enough to infer genuine repentance and a positive change in attitude towards cybercrime among the participants. Follow-up services were incorporated to validate the initial observations. This gives credence to the potency of the psycho-educative intervention provided during the Velma ARC boot camp. It was recommended that support and collaboration from the government and other agencies/individuals would assist the VELMA foundation in expanding the scope and quality of the Velma ARC initiative as an additional requirement for cybercrime offenders following incarceration.

Keywords: Velma-ARC, cybercrime offenders, rehabilitation, Nigeria

Procedia PDF Downloads 138
44 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The study involved measuring the cardiothoracic ratio (CTR) from coronal CT images and determining the actual cardiac weight by weighing the heart during the autopsy. The inclusion criteria were cases of sudden death suspected to be caused by cardiac pathology, while the exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and to evaluate the accuracy of using the CTR to detect an enlarged heart, the study generated receiver operating characteristic (ROC) curves. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used criterion for the CTR has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR greater than 0.50 and up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was calculated as 0.52 ± 0.06. 
Among the 54 cases evaluated, the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. Taking autopsy heart weight as the reference, the sensitivity of the test was 55.56% (95% CI: 26.66, 81.12), the specificity was 84.44% (95% CI: 71.22, 92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1, 88.23). A limitation of the study was the small sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are needed to validate these findings.
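The reported figures can be cross-checked from a 2×2 confusion table. The cell counts below are reconstructed from the stated totals (54 cases, 12 PMCT-positive, 9 autopsy-positive) and are an inference, not data given in the abstract:

```python
# 2x2 table for PMCT (CTR > 0.57) vs autopsy heart weight (> 450 g) as
# the reference standard. TP/FP/FN/TN reconstructed from the reported
# totals: 54 cases, 12 PMCT-positive, 9 autopsy-positive.
tp, fp, fn, tn = 5, 7, 4, 38

sensitivity = tp / (tp + fn)               # true positives / all diseased
specificity = tn / (tn + fp)               # true negatives / all healthy
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"sensitivity = {sensitivity:.2%}")  # 55.56%, as reported
print(f"specificity = {specificity:.2%}")  # 84.44%
print(f"accuracy    = {accuracy:.2%}")     # 79.63%
```

These reconstructed cells reproduce all three reported statistics exactly, which supports the internal consistency of the abstract's numbers.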

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 72
43 An Effective Approach to Knowledge Capture in Whole Life Costing in Constructions Project

Authors: Ndibarafinia Young Tobin, Simon Burnett

Abstract:

In spite of the benefits of whole life costing as a valuable approach for comparing alternative building designs, allowing operational cost benefits to be evaluated against any initial cost increases, and as part of procurement in the construction industry, its adoption has been relatively slow. This is due to the lack of tangible evidence and of 'know-how' skills and knowledge of the practice, i.e., the shortage of professionals in many establishments with knowledge of and training in the use of the whole life costing technique. The situation is compounded by the absence of available data on whole life costing from relevant projects, the lack of data collection mechanisms, and so on. This has proved very challenging to those who have shown willingness to employ the technique in a construction project. The knowledge generated from a project can be considered as best practices learned on how to carry out tasks in a more efficient way, or as negative lessons which have led to losses and slowed down the progress and performance of the project. Knowledge management in whole life costing practice can enhance whole life costing analysis execution in a construction project, as lessons learned from one project can be carried over to future projects, resulting in continuous improvement and providing knowledge that can be used in the operation and maintenance phases of an asset's life span. Purpose: The purpose of this paper is to report an effective approach which can be utilised in capturing knowledge in whole life costing practice in a construction project. Design/methodology/approach: An extensive literature review was first conducted on the concepts of knowledge management and whole life costing. This was followed by semi-structured interviews to explore existing and good-practice knowledge management in whole life costing practice in a construction project.
The data gathered from the semi-structured interviews were analyzed using content analysis and used to structure an effective knowledge-capturing approach. Findings: The results show that project review is the most common method used for capturing knowledge; it should be undertaken in an organized and accurate manner, and its results should be presented as instructions or in a checklist format, forming short and precise insights. The approach developed cautions that, irrespective of how effective the approach to knowledge capture is, the absence of an environment for sharing knowledge would render it ineffective. An open culture and adequate resources are critical for providing a knowledge-sharing setting, and leadership has to sustain whole life costing knowledge capture, giving full support for its implementation. The knowledge-capturing approach has been evaluated by practitioners who are experts in the area of whole life costing practice. The results indicate that the approach is suitable and efficient.

Keywords: whole life costing, knowledge capture, project review, construction industry, knowledge management

Procedia PDF Downloads 254
42 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg

Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma

Abstract:

The recent surge in residential infrastructure development around the metropolitan areas of South Africa has necessitated thorough geotechnical assessments prior to site development to ensure human and infrastructure safety. This paper appraises the success of multi-method geophysical techniques in delineating sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at Venterspost town, west of Johannesburg. The combination and integration of results from different geophysical techniques proved useful in delineating the lithologic succession around the sinkhole locality and in determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence and cavities in the vicinity of the site. The study results also assisted in determining the possible depth extent of the existing sinkhole and the locations where other similar karstic features and sinkholes could form. Results of the ERT, VES and MASW surveys uncovered dolomitic bedrock at varying depths around the sites, exhibiting high resistivity values in the range 2500-8000 ohm.m and correspondingly high velocities in the range 1000-2400 m/s. The dolomite layer was found to be overlain by a weathered chert-poor dolomite layer, with resistivities in the range 250-2400 ohm.m and velocities ranging from 500-600 m/s, within which the large sinkhole has collapsed. A compiled 2.5D high-resolution shear wave velocity (Vs) map of the study area was created using 2D profiles of MASW data, offering insights into the prevailing lithological setup conducive to the formation of various types of karstic features around the site.
3D magnetic models of the site highlighted regions of possible subsurface interconnection between the existing large sinkhole and the other subsidence feature at the site. A number of depth slices were used to detail the conditions near the sinkhole as depth increases. Gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of different geophysical techniques proved useful in delineating the site's geotechnical characteristics and mapping the possible depth extent of the existing sinkhole.

Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES

Procedia PDF Downloads 58
41 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

The topic of oscillation in physics is one of the key ideas, usually taught through the concept of harmonic oscillation. Dealing with a frictional oscillator can be an interesting activity in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the model applied in this study is interesting and useful for undergraduate students. The study presents a well-known physical system, which is mostly discussed theoretically in high school and at university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution is placed on two identical rotating cylinders mounted at the same height so that they are horizontally aligned; the cylinders rotate at the same angular velocity but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation, since the magnitudes of the normal forces in the system are functions of the rod's position, and the frictional forces, with a constant coefficient of friction, are proportional to them. Therefore, the whole description of the model relies on simple Newtonian mechanics, which is accessible to students even in high school. On the other hand, the behavior of the frictional oscillator is not so straightforward after all; experiments show that simple harmonic oscillation cannot be observed in all cases, and the system performs a much more complex movement, whereby the rod settles into a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect.
The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to a more complex model which can describe the motion of the rod more accurately. The main difference from the original equation of motion is that the frictional coefficient varies with the relative velocity. This velocity dependence has been investigated in many research articles as well; this specific problem, however, demonstrates the key concept of the varying friction coefficient and its importance in an interesting and illustrative way. The position function of the rod is described by a more complicated and non-trivial, yet more precise, equation than the usual harmonic description of the movement. The study discusses the structure of the measurements on the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical function with the measured position function in time. The project provides useful material and knowledge for undergraduate students and a new perspective in university physics education.
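The model described above (normal forces varying with the rod's position, friction coefficient falling off with relative sliding speed) can be integrated numerically. The following sketch uses illustrative assumed parameters, not the authors' measured values: cylinder half-spacing d, surface speed v_c, and an exponential Stribeck-like mu(u) are all our assumptions. It reproduces the qualitative behavior described in the abstract: growth from a small displacement toward a nonzero stable amplitude.

```python
import math

# Rod on two counter-rotating cylinders, with speed-dependent friction
# mu(u) = mu_k + (mu_s - mu_k) * exp(-|u|/v0). Illustrative parameters.
g, d, v_c = 9.81, 0.10, 0.20        # gravity, half-spacing (m), surface speed (m/s)
mu_s, mu_k, v0 = 0.5, 0.3, 0.5      # static/kinetic coefficients, decay speed (m/s)

def mu(u):
    """Friction coefficient falling off with relative sliding speed |u|."""
    return mu_k + (mu_s - mu_k) * math.exp(-abs(u) / v0)

def accel(x, v):
    # Normal forces (per unit mass) from torque balance about the contacts.
    n1 = g * (d - x) / (2 * d)      # left cylinder
    n2 = g * (d + x) / (2 * d)      # right cylinder
    # Kinetic friction opposes the relative sliding at each contact;
    # the left surface moves at +v_c, the right at -v_c.
    f1 = -math.copysign(mu(v - v_c), v - v_c) * n1
    f2 = -math.copysign(mu(v + v_c), v + v_c) * n2
    return f1 + f2

# RK4 integration from a small initial displacement.
x, v, dt = 0.005, 0.0, 2e-4
xs = []
for _ in range(100_000):            # 20 s of simulated time
    k1x, k1v = v, accel(x, v)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v, accel(x + dt*k3x, v + dt*k3v)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    xs.append(x)

late = xs[-30_000:]                 # last 6 s: the settled motion
amplitude = max(max(late), -min(late))
print(f"stable amplitude ~ {amplitude:.3f} m")
```

Because mu decreases with relative speed, the contact slipping more slowly exerts the larger friction force, which feeds energy into the oscillation (negative damping) until the rod's speed approaches the surface speed; the motion then saturates at a finite amplitude instead of damping out or growing without bound.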

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education

Procedia PDF Downloads 185
40 Encouraging the Uptake of Entrepreneurship by Graduates of Higher Education Institutions in South Africa

Authors: Chux Gervase Iwu, Simon Nsengimane

Abstract:

Entrepreneurship stimulates socio-economic development in many countries, if not all. It creates jobs and decreases unemployment and inequality. Other benefits accrue from entrepreneurship, namely the empowerment of women and the promotion of better livelihoods. Innovation has become a weapon for business competition, growth, and sustainability. Paradoxically, it is a threat to businesses because products can be duplicated; new products may decrease the market share of existing ones or drive them from the market. This creates a constantly competitive environment that calls for updates, innovation, and the invention of new products and services. Thus, the importance of higher education in instilling a good entrepreneurial mindset in students has become even more critical. It can be argued that the business environment is under enormous pressure from several factors, including the fourth industrial revolution, which calls for the adoption and use of information and communication technology, the catalyst for many innovations and organisational changes. Therefore, it is crucial that higher education students are equipped with relevant knowledge and skills to respond effectively to the needs of the business environment and create a vibrant entrepreneurship ecosystem. In South Africa, entrepreneurship education, or some form of it, has been a privilege of the economic and management fields of study, leaving other fields behind. Entrepreneurship should not be limited to business faculties but rather extended to other fields of study. This is perhaps the reason for the low level of entrepreneurship uptake among South African graduates compared with graduates in other countries. There may be other reasons for the low entrepreneurship uptake.
Some of these have been documented in the extant literature: (1) not enough time is spent teaching entrepreneurship in the business faculties, (2) the skills components in the curricula are insufficient, and (3) the overall attitudes/mindsets necessary to establish and run sustainable enterprises seem absent. Therefore, four areas are recognised as crucial for the effective implementation of entrepreneurship education: policy, private sector engagement, curriculum development, and teacher development. The purpose of this research is to better comprehend the views, aspirations, and expectations of students and faculty members in order to design an entrepreneurial teaching model for higher education institutions. A qualitative method will be used, conducting purposive interviews with undergraduate and graduate students in selected higher institutions. Members of faculty will also be included in the sample, as well as, where possible, two or more government personnel responsible for higher education policy development. At present, interpretative analysis is proposed for the analysis of the interviews, with the support of ATLAS.ti. It is hoped that an entrepreneurship education model for the South African context will be realised through this study.

Keywords: entrepreneurship education, higher education institution, graduate unemployment, curriculum development

Procedia PDF Downloads 64
39 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing

Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel

Abstract:

There is strong evidence that the climate has changed, significantly affecting various sectors, including public health. The recommended feasible solution is adopting development trajectories that combine both mitigation and adaptation measures for improving resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors, such as agriculture and water, have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS) linked with Remote Sensing (RS) technology and meteorological information can be used as tools to inform climate-change-resilient development and evaluation for the health sector. A methodological review was conducted whereby a number of studies were content-analyzed to find appropriate indicators and indices of household climate resilience and approaches to their integration. These indicators were critically reviewed, listed and filtered, and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by selected national stakeholders from meetings/conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed and critically analyzed for possible climate change indicators. Other sources of indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of selected indicators was discussed to produce the methodological approach to be used in a resilience comparative analysis study. It was found that the household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS) and Household Exposure Status (HES).
It was also found that DHS data alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing imagery of the Normalized Difference Vegetation Index (NDVI), and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS datasets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and temporal scales to enhance targeted interventions for climate-resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in resilience comparative analysis among selected regions.
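As a sketch of how the three sub-indices might be combined into a single score, the snippet below uses a weighted average in which sensitivity and exposure are inverted. The equal weights, the min-max normalization, and the example values are illustrative assumptions; the study itself derives its weights from stakeholder pairwise ranking.

```python
def normalize(value, lo, hi):
    """Min-max scale a raw indicator to [0, 1]."""
    return (value - lo) / (hi - lo)

def resilience_index(hc, hhs, hes, weights=(1/3, 1/3, 1/3)):
    """Combine the sub-indices: adaptive capacity raises resilience,
    while sensitivity and exposure lower it (hence the 1 - x inversion)."""
    w_hc, w_hhs, w_hes = weights
    return w_hc * hc + w_hhs * (1 - hhs) + w_hes * (1 - hes)

# Hypothetical household: strong adaptive capacity (HC), moderate health
# sensitivity (HHS), exposure (HES) scaled from 21 flood days out of 30.
score = resilience_index(hc=0.8, hhs=0.4, hes=normalize(21, 0, 30))
print(f"resilience = {score:.3f}")
```

Keeping every sub-index on a common [0, 1] scale before weighting is what makes the combined score comparable across households and mappable at different spatial scales.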

Keywords: climate change, resilience, remote sensing, demographic and health surveys

Procedia PDF Downloads 153
38 Study on Metabolic and Mineral Balance, Oxidative Stress and Cardiovascular Risk Factors in Type 2 Diabetic Patients on Different Therapy

Authors: E. Nemes-Nagy, E. Fogarasi, M. Croitoru, A. Nyárádi, K. Komlódi, S. Pál, A. Kovács, O. Kopácsy, R. Tripon, Z. Fazakas, C. Uzun, Z. Simon-Szabó, V. Balogh-Sămărghițan, E. Ernő Nagy, M. Szabó, M. Tilinca

Abstract:

Intense oxidative stress, increased glycated hemoglobin and mineral imbalance represent risk factors for complications in diabetic patients. Cardiovascular complications, including nephropathy, are the most common in these patients. This study was conducted in 2015 at the Procardia Laboratory in Tîrgu Mureș, Romania, on 40 type 2 diabetic adults. Routine biochemical tests were performed on the Konelab 20XTi analyzer (serum glucose, total cholesterol, LDL and HDL cholesterol, triglycerides, creatinine, urea). We also measured serum uric acid, magnesium and calcium concentrations by photometric procedures; potassium, sodium and chloride by ion-selective electrodes; and chromium by atomic absorption spectrometry in a subgroup of patients. Glycated hemoglobin (HbA1c) was measured by reflectometry. Urine analysis was performed using the HandUReader equipment. The level of oxidative stress was assessed by measuring serum malondialdehyde (MDA) using the thiobarbituric acid reactive substances method. The MDRD (Modification of Diet in Renal Disease) formula was applied to calculate the creatinine-derived glomerular filtration rate. GraphPad InStat software was used for statistical analysis of the data. The diabetic subjects included in the study presented high MDA concentrations, showing intense oxidative stress. Calcium was deficient in 5% of the patients, and chromium deficiency was present in 28%. The atherogenic cholesterol fraction was elevated in 13% of the patients. A positive correlation was found between creatinine and MDRD-creatinine values (p < 0.0001), and 68% of the patients presented increased creatinine values. The majority of the diabetic patients had good control of their diabetes, having optimal HbA1c values; 35% of them presented fasting serum glucose over 120 mg/dl, and 18% had glucosuria. Intense oxidative stress and mineral deficiencies can increase the risk of cardiovascular complications in diabetic patients in spite of their good metabolic balance.
More than two-thirds of the patients presented biochemical signs of nephropathy; cystatin C measurement and microalbuminuria could reveal the kidney disorder better, but glomerular filtration rate calculation formulas are also useful for the evaluation of renal function.
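The MDRD formula referred to above has a well-known 4-variable form; the sketch below uses the IDMS-traceable re-expression with the 175 constant, which is an assumption, since the abstract does not state which variant the laboratory applied.

```python
def mdrd_egfr(scr_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD estimate of GFR in mL/min/1.73 m^2
    (IDMS-traceable re-expression with the 175 constant)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: a 60-year-old woman with serum creatinine 1.2 mg/dL.
print(f"eGFR = {mdrd_egfr(1.2, 60, female=True):.1f} mL/min/1.73 m2")
```

The negative exponent on creatinine is what produces the inverse creatinine-eGFR relationship that such formulas encode: higher serum creatinine yields a lower estimated filtration rate.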

Keywords: cardiovascular risk, homocysteine, malondialdehyde, metformin, minerals, type 2 diabetes, vitamin B12

Procedia PDF Downloads 310
37 Significant Influence of Land Use Type on Earthworm Communities but Not on Soil Microbial Respiration in Selected Soils of Hungary

Authors: Tsedekech Gebremeskel Weldmichael, Tamas Szegi, Lubangakene Denish, Ravi Kumar Gangwar, Erika Micheli, Barbara Simon

Abstract:

Following the 1992 Earth Summit in Rio de Janeiro, soil biodiversity has been recognized globally as a crucial player in guaranteeing the functioning of soil and a provider of several ecosystem services essential for human well-being. The microbial fraction of the soil is a vital component of soil fertility, as soil microbes play key roles in soil aggregate formation, nutrient cycling, humification, and the degradation of pollutants. Soil fauna, such as earthworms, have huge impacts on soil organic matter dynamics, nutrient cycling, and the infiltration and distribution of water in the soil. Currently, land-use change is a global concern, as evidence accumulates that it adversely affects soil biodiversity and the associated ecosystem goods and services. In this study, we examined the patterns of soil microbial respiration (SMR) and earthworm communities (abundance, biomass, and species richness) across three land-use types (grassland, arable land, and forest) in Hungary. The objectives were (i) to investigate whether there is a significant difference in SMR and earthworm communities among land-use types, and (ii) to determine the key soil properties that best predict the variation in SMR and earthworm communities. Soil samples, to a depth of 25 cm, were collected from the surrounding areas of seven soil profiles. For physicochemical parameters, soil organic matter (SOM), pH, CaCO₃, E₄/E₆, available nitrogen (NH₄⁺-N and NO₃⁻-N), potassium (K₂O), phosphorus (P₂O₅), exchangeable Ca²⁺ and Mg²⁺, soil moisture content (MC) and bulk density were measured. SMR was determined by the basal respiration method, and the extraction of earthworms was carried out by the hand-sorting method as described by the ISO guideline. The results showed no statistically significant difference in SMR among land-use types (p > 0.05).
However, the highest SMR was observed in grassland soils (11.77 mgCO₂ 50g⁻¹ soil 10 days⁻¹) and the lowest in forest soils (8.61 mgCO₂ 50g⁻¹ soil 10 days⁻¹). SMR had strong positive correlations with exchangeable Ca²⁺ (r = 0.80), MC (r = 0.72), and exchangeable Mg²⁺ (r = 0.69). We found a pronounced variation in SMR among soil texture classes (p < 0.001), with the highest value in silty clay loam soils and the lowest in sandy soils. This study provides evidence that agricultural activities can negatively influence earthworm communities: the arable land had significantly lower earthworm abundance, biomass, and species richness than the forest and grassland. Overall, in our study, land-use type had minimal effects on SMR, whereas earthworm communities were profoundly influenced by land-use type, particularly agricultural activities related to tillage. Exchangeable Ca²⁺, MC, and texture were found to be the key drivers of the variation in SMR.
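The reported relationships (e.g., r = 0.80 between SMR and exchangeable Ca²⁺) are Pearson correlation coefficients; a minimal sketch of the statistic, with made-up illustrative values rather than the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical exchangeable Ca2+ vs SMR values, for illustration only.
ca  = [2.1, 4.8, 7.5, 10.2, 14.0, 18.3]
smr = [6.5, 8.0, 9.2, 10.5, 11.0, 12.4]
print(f"r = {pearson_r(ca, smr):.2f}")
```

A coefficient near +1 indicates that the two variables increase together almost linearly, which is the pattern the study reports for SMR against exchangeable Ca²⁺ and moisture content.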

Keywords: earthworm community, land use, soil biodiversity, soil microbial respiration, soil property

Procedia PDF Downloads 126
36 Troubling Depictions of Gambian Womanhood in Dayo Forster’s Reading the Ceiling

Authors: A. Wolfe

Abstract:

Dayo Forster’s impressively crafted Reading the Ceiling (2007) enjoys a relatively high profile among Western readers. It is one of only a handful of Gambian novels to be published by an international publisher, Simon and Schuster of London, and was subsequently shortlisted for the Commonwealth Writer’s Best First Book Prize in 2008. It is currently available to US readers in print and as an e-book and has 167 ratings on Goodreads. This paper addresses the possible influence of the book on Western readers’ perception of The Gambia, or Africa in general, through its depiction of the conditions of Gambian women’s lives. Through a close reading of passages and analysis of imagery, intertextuality, and characterization in the book, the paper demonstrates that Forster portrays the culture of The Gambia as oppressively patriarchal and the prospects for young girls who stay in the country as extremely limited. Reading the Ceiling starts on Ayodele’s 18th birthday, the day she has planned to have sex for the first time. Most of the rest of the book is divided into three parts, each following the chain of events that occur after sex with a potential partner. Although Ayodele goes abroad for her education in each of the three scenarios, she ultimately capitulates to the patriarchal politics and demands of marriage and childrearing in The Gambia, settling for relationships with men she does not love, cooking and cleaning for husbands and children, and silencing her own opinions and desires in exchange for the familiar traditions of patriarchal—and, in one case, polygamous—marriage. Each scenario ends with resignation to death, as, after her mother’s funeral, Ayodele admits to herself that she will be next. Forster uses dust and mud imagery throughout the novel to indicate the dinginess of Ayodele’s life as a young woman, and then wife and mother, in The Gambia as well as the inescapability of this life. 
This depiction of earthen material is also present in the novel’s recounting of an oral tale about a mermaid captured by fishermen, a story that mirrors Ayodele’s ensnarement by traditional marriage customs and gender norms. A review of the fate of other characters in the novel reveals that Ayodele is not the only woman who becomes trapped by the expectations for women in The Gambia, as those who stay in the country end up subservient to their husbands and/or victims of men’s habitual infidelity. It is important to note that Reading the Ceiling is focused on the experiences of a minority—The Gambia’s middle class, Christian urban dwellers with money for education. Regardless of its limited scope, the novel clearly depicts The Gambia as a place where women are simply unable to successfully contend against traditional patriarchal norms. Although this novel evokes vivid imagery of The Gambia through original and compelling descriptions of food preparation, clothing, and scenery, it perhaps does little to challenge stereotypical perceptions of the lives of African women among a Western readership.

Keywords: African literature, commonwealth literature, marriage, stereotypes, women

Procedia PDF Downloads 159
35 Spectrum of Bacteria Causing Oral and Maxillofacial Infections and Their Antibiotic Susceptibility among Patients Attending Muhimbili National Hospital

Authors: Sima E. Rugarabamu, Mecky I. Matee, Elison N. M. Simon

Abstract:

Background: In Tanzania, bacteriological studies of the etiological agents of oro-facial infections are very limited, and very few have investigated anaerobes. The aim of this study was to determine the spectrum of bacterial agents involved in oral and maxillofacial infections in patients attending Muhimbili National Hospital, Dar es Salaam, Tanzania. Method: This was a hospital-based descriptive cross-sectional study conducted in the Department of Oral and Maxillofacial Surgery of the Muhimbili National Hospital in Dar es Salaam, Tanzania, from 1st January 2014 to 31st August 2014. Seventy (70) patients with various forms of oral and maxillofacial infections were recruited for the study. The study participants were interviewed using a prepared questionnaire after giving their consent. Pus aspirates were cultured on blood agar, chocolate agar and MacConkey agar and incubated aerobically at 37°C. Imported blood agar was used for anaerobic culture, incubated at 37°C in anaerobic jars in an atmosphere generated using commercial gas-generating kits in accordance with the manufacturer's instructions. Plates were incubated at 37°C for 24 hours for aerobic cultures and 48 hours for anaerobic cultures. Gram-negative rods were identified using API 20E, while all other isolates were identified by conventional biochemical tests. Antibiotic sensitivity of the isolated aerobic and anaerobic bacteria was determined by disk diffusion, agar dilution and E-test, using routine, commercially available antibiotics used to treat oro-facial infections. Results: The study comprised 41 (58.5%) males and 29 (41.5%) females with a mean age of 32 years (SD ± 15.1) and a range of 19 to 70 years. A total of 161 bacterial strains were isolated from specimens obtained from the 70 patients, an average of 2.3 isolates per patient. Of these, 103 were aerobic organisms and 58 were strict anaerobes.
A complex mix of strict anaerobes and facultative anaerobes accounted for 87% of all infections. The most frequent aerobes isolated were Streptococcus spp. 70 (70%), followed by Staphylococcus spp. 18 (18%). Other organisms such as Klebsiella spp. 4 (4%), Proteus spp. 5 (5%) and Pseudomonas spp. 2 (2%) were also seen. The anaerobic group was dominated by Prevotella spp. 25 (43%), followed by Peptostreptococcus spp. 18 (31%); other isolates were Pseudomonas spp. 2 (1%), black-pigmented Porphyromonas spp. 4 (5%), Fusobacterium spp. 3 (3%) and Bacteroides spp. 5 (8%). The majority of these organisms were sensitive to amoxicillin (98%), gentamycin (89%) and ciprofloxacin (100%). A 40% resistance to metronidazole was observed in Bacteroides spp.; otherwise, this drug and others displayed good activity against anaerobes. Conclusions: Oral and maxillofacial infections at Muhimbili National Hospital are mostly caused by Streptococcus spp. and Prevotella spp. Strict anaerobes accounted for 36% of all isolates. The profile of isolates should assist in selecting empiric therapy for infections of the oral and maxillofacial region. The inclusion of antimicrobial agents active against anaerobic bacteria is highly recommended.

Keywords: bacteria, oral and maxillofacial infections, antibiotic susceptibility, Tanzania

Procedia PDF Downloads 320
34 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high-pollution events that led to concentrations of PM10 and NO2 exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish the relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night and early morning) to check for variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors with the most influence on PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity.
Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years. In the rainy periods (March-June and September-December), some trends related to precipitation were stronger. The K-means results demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, and between Parque Simon Bolivar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding their distribution. The discovery of patterns in the data allows these clusters to be used as input to an Artificial Neural Network prediction model.
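The PCA-then-K-means pipeline described above can be sketched with NumPy alone. The data here are synthetic stand-ins (wind, humidity, and PM10 with an assumed wind-PM10 anticorrelation), not the monitoring-network records, and the two-cluster choice is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly records standing in for the monitoring-network data:
# wind disperses PM10 (negative effect), humidity raises it slightly.
n = 300
wind = rng.uniform(0, 6, n)
humidity = rng.uniform(30, 95, n)
pm10 = 120 - 12 * wind + 0.4 * humidity + rng.normal(0, 5, n)
X = np.column_stack([wind, humidity, pm10])

# PCA via SVD on standardized variables.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
explained = S**2 / np.sum(S**2)      # variance share per component
scores = Z @ Vt[:2].T                # projection onto the first two PCs

# Plain two-cluster k-means on the PC scores.
centers = scores[rng.choice(n, 2, replace=False)]
for _ in range(50):
    labels = np.argmin(((scores[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in (0, 1)])

print(f"PC1 explains {explained[0]:.0%} of the variance")
print(f"cluster sizes: {np.bincount(labels, minlength=2)}")
```

Running PCA first concentrates the strongly coupled variables (here wind and PM10) into the leading component, so the subsequent clustering operates on the dominant relationships rather than on raw, differently scaled measurements.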

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 246
33 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording

Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen

Abstract:

It is a challenge to directly monitor cavitation in a pump during operation because of the lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access, which makes it possible to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One impeller is optimized for lower NPSH₃% by its blade design, whereas the other is manufactured using a standard casting method. Cavitation is detected by pump performance measurements, vibration measurements and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data are recorded in the axial direction of the impeller using accelerometers sampling at 131 kHz. The frequency-domain data (up to 20 kHz) and the time-domain data are analyzed, as well as the root-mean-square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not yet affected. It is also observed that below a certain NPSHA value, cavitation starts in the inlet bend of the pump; above this value, cavitation occurs exclusively on the impeller blades.
The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but its head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head-drop curve of the optimized impeller were observed, in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady with the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations, and the simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view of the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings and simulation results. The data indicate that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data are used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.
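A time-domain detection criterion of the kind the authors suggest could take the form sketched below: flag samples whose magnitude exceeds a multiple of the local RMS level, so that impulsive cloud-collapse events stand out against the broadband background. The synthetic trace, window length and threshold factor are illustrative assumptions only; the study's actual accelerometer signals were sampled at 131 kHz and synchronized with video by a trigger.

```python
import math
import random

def moving_rms(signal, window):
    """Root-mean-square over a trailing window (local time-domain level)."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        seg = signal[lo:i + 1]
        out.append(math.sqrt(sum(s * s for s in seg) / len(seg)))
    return out

def detect_transients(signal, window=64, factor=4.0):
    """Flag samples whose magnitude exceeds `factor` times the local RMS,
    a crude time-domain criterion for impulsive cavitation events."""
    rms = moving_rms(signal, window)
    return [i for i, s in enumerate(signal) if abs(s) > factor * rms[i]]

# Synthetic accelerometer trace: broadband noise plus two sharp impulses
# standing in for cloud collapses
random.seed(42)
trace = [random.gauss(0.0, 1.0) for _ in range(2000)]
trace[500] += 25.0
trace[1500] += 30.0
events = detect_transients(trace)
print(events)
```

A real criterion would need tuning of the window and threshold against the synchronized image recordings, since blade-passing and flow noise also produce peaks.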

Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration

Procedia PDF Downloads 172
32 Stimulation of Nerve Tissue Differentiation and Development Using Scaffold-Based Cell Culture in Bioreactors

Authors: Simon Grossemy, Peggy P. Y. Chan, Pauline M. Doran

Abstract:

Nerve tissue engineering is the main field of research aimed at finding an alternative to autografts as a treatment for nerve injuries. Scaffolds are used as a support to enhance nerve regeneration. In order to successfully design novel scaffolds and in vitro cell culture systems, a deep understanding of the factors affecting nerve regeneration processes is needed. Physical and biological parameters associated with the culture environment have been identified as potentially influential in nerve cell differentiation, including electrical stimulation, exposure to extracellular-matrix (ECM) proteins, dynamic medium conditions and co-culture with glial cells. The mechanisms that drive cells to differentiation in the presence of these factors are poorly understood; the complexity of each factor raises the possibility that they strongly influence each other. Questions that arise in investigating nerve regeneration include: What are the best protein coatings to promote neural cell attachment? Is the scaffold design suitable for providing all the required factors in combination? What is the influence of dynamic stimulation on cell viability and differentiation? To study these effects, scaffolds adaptable to bioreactor culture conditions were designed to allow electrical stimulation of cells exposed to ECM proteins, all within a dynamic medium environment. Gold coatings were used to make the surface of viscose rayon microfiber scaffolds (VRMS) conductive, and poly-L-lysine (PLL) and laminin (LN) surface coatings were used to mimic the ECM environment and allow the attachment of rat PC12 neural cells. The robustness of the coatings was analyzed by surface resistivity measurements, scanning electron microscopy (SEM) and immunocytochemistry. Cell attachment to protein coatings of PLL, LN and PLL+LN was studied using DNA quantification with Hoechst staining.
The double coating of PLL+LN was selected based on high levels of PC12 cell attachment and the reported advantages of laminin for neural differentiation. The underlying gold coatings were shown to be biocompatible using cell proliferation and live/dead staining assays. Coatings exhibiting stable properties over time under dynamic fluid conditions were developed; indeed, cell attachment and scaffold conductivity were maintained over 2 weeks of bioreactor operation. These scaffolds are promising research tools for understanding complex neural cell behavior. They have been used to investigate major factors in the physical culture environment that affect nerve cell viability and differentiation, including electrical stimulation, bioreactor hydrodynamic conditions, and combinations of these parameters. The cell and tissue differentiation response was evaluated using DNA quantification, immunocytochemistry, RT-qPCR and functional analyses.

Keywords: bioreactor, electrical stimulation, nerve differentiation, PC12 cells, scaffold

Procedia PDF Downloads 229
31 Measuring Organizational Resiliency for Flood Response in Thailand

Authors: Sudha Arlikatti, Laura Siebeneck, Simon A. Andrew

Abstract:

The objective of this research is to measure organizational resiliency through four attributes, namely rapidity, redundancy, resourcefulness, and robustness, and to provide recommendations for resiliency building in flood-risk communities. The research was conducted in Thailand following the severe floods of 2011 triggered by Tropical Storm Nock-ten. The floods lasted over eight months, starting in June 2011, affecting 65 of the country’s 76 provinces and over 12 million people. Funding from a US National Science Foundation grant was used to collect ephemeral data in rural (Ayutthaya), suburban (Pathum Thani), and urban (Bangkok) provinces of Thailand. Semi-structured face-to-face interviews were conducted in Thai with 44 contacts from public, private, and non-profit organizations, including universities, schools, automobile companies, vendors, tourist agencies, monks from temples, faith-based organizations, and government agencies. Multiple triangulation was used to analyze the data by identifying selective themes from the qualitative data, validated with quantitative data and news media reports. This helped to obtain a more comprehensive view of how organizations in different geographic settings varied in their understanding of what enhanced or hindered their resilience and, consequently, in their speed and capacity to respond. The findings suggest that the urban province of Bangkok scored highest in resourcefulness, rapidity of response, robustness, and ability to rebound. This is not surprising considering that it is the country’s capital and the seat of the government, economic, military and tourism sectors. However, contrary to expectations, all 44 respondents noted that the rural province of Ayutthaya was the fastest to recover of the three.
Its organizations scored high on redundancy and rapidity of response due to the strength of social networks, a flood disaster subculture arising from annual flooding, and the help provided by monks from temples and faith-based organizations. Organizations in the suburban community of Pathum Thani scored lowest on rapidity of response and resourcefulness due to limited and ambiguous warnings, a lack of prior flood experience, and controversy over whether government flood-protection works such as sandbagging favored the capital city of Bangkok over them. Such a micro-level examination of organizational resilience in rural, suburban and urban areas of a country through a mixed-methods study has merit in producing a nuanced understanding of the importance of disaster subcultures and religious norms for resilience. This can help refocus attention on the strengths of social networks and social capital for flood mitigation.

Keywords: disaster subculture, flood response, organizational resilience, Thailand floods, religious beliefs and response, social capital and disasters

Procedia PDF Downloads 146
30 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen

Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to the manufacturers’ package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh hospitalized-patient samples. The sensitivity for early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95% CI, 99.15 – 99.94%) on 1023 hospitalized-patient samples. A total of six (6) samples were found to be false positive with the ACCESS HBsAg assay; none were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) both for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized-patient samples was 0.24% (95% CI, 0.03 – 0.87%).
Results obtained on the 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity equivalent to that of the Abbott ARCHITECT HBsAg Qualitative II assay, with an average difference from the first reactive bleed of 0.13 bleeds. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
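Exact binomial confidence intervals of the kind quoted for specificity can be computed with the Clopper-Pearson method on the false-positive count. The pure-Python sketch below finds the limits by bisection on the binomial CDF. Note the assumptions: a count of 3 false positives among the 6047 donors is inferred from the reported 99.95% specificity (the abstract only states six false positives across both cohorts combined), and the study's own CI method is not specified, so the printed interval may differ slightly from the published one.

```python
import math

def log_binom_pmf(k, n, p):
    """Log of the binomial pmf, via lgamma to avoid huge binomial coefficients."""
    if p == 0.0:
        return 0.0 if k == 0 else float("-inf")
    if p == 1.0:
        return 0.0 if k == n else float("-inf")
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p); cheap when k is small."""
    return sum(math.exp(log_binom_pmf(i, n, p)) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided CI for a binomial proportion x/n, by bisection."""
    def bisect(too_big, lo, hi):
        for _ in range(100):
            mid = (lo + hi) / 2
            if too_big(mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    # lower limit: P(X >= x | p) = alpha/2
    lo = 0.0 if x == 0 else bisect(
        lambda p: 1 - binom_cdf(x - 1, n, p) > alpha / 2, 0.0, x / n)
    # upper limit: P(X <= x | p) = alpha/2
    hi = 1.0 if x == n else bisect(
        lambda p: binom_cdf(x, n, p) < alpha / 2, x / n, 1.0)
    return lo, hi

# Assumed count: ~3 false positives among 6047 donors (consistent with 99.95%)
fp, n = 3, 6047
q_lo, q_hi = clopper_pearson(fp, n)
spec = 1 - fp / n
print(f"specificity {spec:.2%}, 95% CI {1 - q_hi:.2%} - {1 - q_lo:.2%}")
```

Working on the false-positive count keeps the CDF sums short, since only a handful of terms are needed when x is small relative to n.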

Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay

Procedia PDF Downloads 73
29 A Work-Individual-Family Inquiry on Mental Health and Family Responsibility of Dealers Employed in Macau Gaming Industry

Authors: Tak Mau Simon Chan

Abstract:

While there is growing reflection on the adverse impacts of the flourishing gaming industry on the physical health and job satisfaction of those who work in Macau casinos, there is also a critical void in our understanding of the mental health of croupiers and of how casino employment interacts with the family system. From a systemic perspective, it is most effective to examine ‘dealer issues’ collectively and to offer assistance both to the individual dealer and to the dealer’s family system. Therefore, using a mixed-methods study design, the levels of anxiety, depression and sleep quality of a sample of 1,124 dealers working in Macau casinos were measured in the present study, and 113 dealers were interviewed about the impacts of casino employment on their family life. The study presents several important findings. First, the quantitative study indicates that gender is a significant predictor of depression and anxiety levels, while lower income is associated with poorer sleep quality. Pearson’s correlation coefficients show that as Zung Self-rating Anxiety Scale (ZSAS) scores increase, Zung Self-rating Depression Scale (ZSDS) and Pittsburgh Sleep Quality Index (PSQI) scores also increase. Higher income might therefore partly explain why mothers choose to work in the gaming industry despite the shift work and stressful work environment. Second, the findings from the qualitative study show that, aside from the positive impact on family finances, shift work and job stress to some degree negatively affect family responsibilities and relationships. The resulting family issues include missed family activities and reduced parental care and guidance, marital intimacy, and communication with family members.
Despite mixed views on gender role differences, the respondents generally agree that female dealers have more family and child-minding responsibilities at home and thus find it more difficult to balance work and family; consequently, they may be more vulnerable to stress at work. Third, there are interrelationships between work and family, based on a systemic inquiry that incorporates work, individual and family. Poor physical and psychological health due to shift work or a harmful work environment can affect not just work performance but also life at home. Therefore, a few practice points about 1) work-family conflicts in Macau; 2) families-in-transition in Macau; and 3) gender and class sensitivity in Macau are provided for social workers and family practitioners, which will greatly benefit families whose members work in the gaming industry in Macau. It is concluded that, in addressing the cultural phenomenon of the “dealer’s complex” in Macau, a systemic approach is recommended that addresses both the personal psychological needs and the family issues of dealers.
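The positive correlations reported among the ZSAS, ZSDS and PSQI scores rest on the standard Pearson product-moment coefficient, which can be computed directly as below. The scale scores here are synthetic and the 0.6 coupling between anxiety and depression is an arbitrary assumption for illustration, not the study's estimate.

```python
import math
import random

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Synthetic scale scores: ZSAS (anxiety) partly driving ZSDS (depression)
random.seed(7)
zsas = [random.gauss(45, 10) for _ in range(200)]
zsds = [a * 0.6 + random.gauss(20, 5) for a in zsas]
print(round(pearson_r(zsas, zsds), 2))
```

A positive r of this kind is what the abstract summarizes as the scales "simultaneously increasing"; significance testing would additionally need the sample size.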

Keywords: family, work stress, mental health, Macau, dealers, gaming industry

Procedia PDF Downloads 296
28 An Approach to Addressing Homelessness in Hong Kong: Life Story Approach

Authors: Tak Mau Simon Chan, Ying Chuen Lance Chan

Abstract:

Homelessness has been a popular and controversial subject of debate in Hong Kong, a city which is densely populated and well known for very expensive housing. The constitution of the homeless as threats to the community and to environmental hygiene is ambiguous and debatable in the Hong Kong context. The lack of an intervention model, beyond the tangible services delivered, is the critical research gap thus far. The life story approach (LSA), with its unique humanistic orientation, has been well applied in recent decades to depict the needs of various target groups, but not the homeless. It is argued that the LSA, which has been employed by health professionals in dementia care and in health and social care settings, can be used as a reference in the local Chinese context through indigenization. This study therefore captures the viewpoints of service providers and users by constructing an indigenous intervention model that refers to the LSA in serving the chronically homeless. Drawing on 8 focus groups with 13 social workers and 27 homeless individuals, together with individual in-depth interviews with 12 homeless individuals, a framework of the LSA for homeless people is proposed. Through thematic analysis, three main themes of their life stories were generated, namely family, negative experiences and identity transformation. The three domains form a solid framework that can be applied not only to the homeless but also to other disadvantaged groups in the Chinese context. Based on the three domains of family, negative experiences and identity transformation, the model is applied in the daily practice of social workers who help the homeless. The domain of family encompasses familial relationships from the past to the present to the speculated future, with ten sub-themes. The domain of negative experiences includes seven sub-themes, with reference to deviant behavior committed.
The last domain, identity transformation, incorporates the awareness and redefining of one’s identity, with a total of seven sub-themes. The first two domains are important components of personal histories, while the third is a more unknown, exploratory and yet-to-be-redefined territory with a more positive and constructive orientation towards developing one’s identity and life meaning. The longitudinal temporal dimension of moving from past to present to future enriches the meaning-making process, facilitates the integration of life experiences and sustains a more hopeful dialogue. The model is tested, and its effectiveness is measured using qualitative and quantitative methods to affirm the extent to which it is relevant to the local context. First, it provides a clear guideline for social workers, who can use the approach as a reference source. Second, the framework acts as a new means of intervention to address problem-saturated stories and the intangible needs of the homeless. Third, the model extends the application of the LSA beyond health-related issues. Last but not least, the model is highly relevant to the local indigenous context.

Keywords: homeless, indigenous intervention, life story approach, social work practice

Procedia PDF Downloads 283
27 Common Misconceptions around Human Immunodeficiency Virus in Rural Uganda: Establishing the Role for Patient Education Leaflets Using Patient and Staff Surveys

Authors: Sara Qandil, Harriet Bothwell, Lowri Evans, Kevin Jones, Simon Collin

Abstract:

Background: Uganda suffers from high rates of HIV. Misconceptions around HIV are known to be prevalent in Sub-Saharan Africa (SSA); two of the most common in Uganda are that HIV can be transmitted by mosquito bites or by sharing food. The aim of this project was to establish the local misconceptions around HIV in a Central Ugandan population and to identify whether there is a role for patient education leaflets. The project was undertaken as a student selected component (SSC) offered by Swindon Academy, based at the Great Western Hospital, to medical students in the fourth year of the undergraduate programme. Methods: The study was conducted at Villa Maria Hospital, a private rural hospital in Kalungu District, Central Uganda. 36 patients, 23 from the hospital clinic and 13 from the community, were interviewed regarding their understanding of HIV and the channels through which they had obtained this understanding. Interviews were conducted using local student nurses as translators; verbal responses were translated and then transcribed by the researcher. The same 36 patients then took a 'misconception' test consisting of 35 questions. Quantitative data were analysed using descriptive statistics, and results were scored on three components: 'transmission knowledge', 'prevention knowledge' and 'misconception rejection'. Each correct response to a question scored one point, otherwise zero; e.g., correctly rejecting a misconception scored one point, whereas answering ‘yes’ or ‘don’t know’ scored zero. Scores ≤ 27 (the average score) were classified as indicating ‘poor understanding’. Mean scores were compared between participants seen at the HIV clinic and in the community, and p-values (including Fisher’s exact test) were calculated using Stata 2015. The level of significance was set at 0.05. Interviews with 7 members of staff working in the HIV clinic were undertaken to establish what methods of communication are used to educate patients.
Interviews were transcribed and thematic analysis undertaken. Results: The most common misconceptions that failed to be rejected included transmission of HIV by kissing (78%), mosquitoes (69%) and touching (36%); 33% believed HIV may be prevented by praying. The overall mean scores for transmission knowledge (87.5%) and prevention knowledge (81.1%) were better than the misconception rejection score (69.3%). HIV clinic respondents tended to have higher scores, i.e. fewer misconceptions, although there was statistical evidence of a significant difference only for prevention knowledge (p=0.03). Analysis of the qualitative data is ongoing, but several patients expressed concern that, being unable to read, they would not find leaflets helpful. Conclusions: Results from this study identified that a high proportion of the population studied held misconceptions about HIV. Qualitative data suggest that there may be a role for patient education leaflets, if pictorial-based and suitable for those with low literacy skills.
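The scoring rule described in the Methods (one point per correct response, zero for 'yes' or 'don't know', with scores at or below the average of 27 classed as poor understanding) can be sketched as follows. The item names and the respondent's answers are hypothetical, invented for illustration.

```python
def score_misconceptions(answers, items):
    """One point per correctly rejected misconception: the correct answer
    is 'no', so 'yes' or "don't know" (or a missing answer) scores zero."""
    return sum(1 for q in items if answers.get(q) == "no")

AVERAGE_SCORE = 27  # classification cut-off reported in the study (of 35)

def classify(total_score, cutoff=AVERAGE_SCORE):
    """Scores at or below the cut-off are classed as 'poor understanding'."""
    return "poor understanding" if total_score <= cutoff else "adequate"

# Hypothetical misconception items and one respondent's answers
items = ["kissing", "mosquitoes", "touching", "praying_prevents"]
respondent = {"kissing": "yes", "mosquitoes": "no",
              "touching": "no", "praying_prevents": "don't know"}
subscore = score_misconceptions(respondent, items)
print(subscore, classify(24), classify(30))
```

In the study the cut-off applies to the total over all 35 questions across the three components, not to the misconception sub-score alone.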

Keywords: HIV, human immunodeficiency virus, misconceptions, patient education, Sub-Saharan Africa, Uganda

Procedia PDF Downloads 243
26 Iron Oxide Reduction Using Solar Concentration and Carbon-Free Reducers

Authors: Bastien Sanglard, Simon Cayez, Guillaume Viau, Thomas Blon, Julian Carrey, Sébastien Lachaize

Abstract:

The need to develop clean production processes is a key challenge for any industry. The steel and iron industries are particularly concerned since they emit 6.8% of global anthropogenic greenhouse gas emissions. One key step of the process is the high-temperature reduction of iron ore using coke, leading to large amounts of CO2 emissions. One route to decreasing these impacts is to eliminate fossil fuels by changing both the heat source and the reducer. The present work investigates experimentally the possibility of using concentrated solar energy and carbon-free reducing agents. Two sets of experiments were carried out. First, in situ X-ray diffraction was performed on pure and industrial hematite powders to study the phase evolution as a function of temperature during reduction under hydrogen and ammonia. Second, experiments were performed on industrial iron ore pellets, which were reduced by NH3 or H2 in a “solar furnace” composed of a controllable 1600 W xenon lamp, simulating concentrated solar irradiation of a glass reactor, and a diaphragm to control the light flux. Temperature and pressure were recorded during each experiment via thermocouples and pressure sensors. The percentage of iron oxide converted to iron (hereafter the “reduction ratio”) was found through Rietveld refinement. The power of the light source and the reduction time were varied. Results obtained in the diffractometer reaction chamber show that iron begins to form at 300°C with pure Fe2O3 powder and at 400°C with industrial iron ore, when maintained at these temperatures for 60 minutes and 80 minutes, respectively. Magnetite and wuestite are detected in both powders during reduction under hydrogen; under ammonia, iron nitride is also detected at temperatures between 400°C and 600°C.
All the iron oxide was converted to iron after a 60 min reaction at 500°C, whereas a conversion ratio of 96% was reached with the industrial powder after a 240 min reaction at 600°C under hydrogen. Under ammonia, full conversion was also reached after 240 min of reduction at 600°C. For the experiments in the solar furnace with iron ore pellets, the lamp power and the shutter opening were varied. An 83.2% conversion ratio was obtained with a light power of 67 W/cm² without turning over the pellets; under the same conditions, turning over the pellets in the middle of the experiment raised the conversion ratio to 86.4%. A reduction ratio of 95% was reached after a 16 min exposure at a flux of 169 W/cm², turning over the pellets at half time. Similar or slightly better results were obtained under an ammonia reducing atmosphere: under the same flux, the highest reduction yield, 97.3%, was obtained under ammonia after 28 minutes of exposure. The chemical reaction itself, including the solar heat source, does not produce any greenhouse gases, so solar metallurgy represents a serious route to reducing the greenhouse gas emissions of the metallurgical industry. Nevertheless, the ecological impact of the reducers must be investigated, which will be done in future work.
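The "reduction ratio" obtained from Rietveld refinement amounts to the fraction of total iron present as metallic Fe. A minimal sketch of that conversion from phase weight fractions is shown below; the molar masses are standard values, but the input weight fractions are illustrative, not the study's data.

```python
# Molar masses (g/mol) and Fe atoms per formula unit for the phases
# typically quantified by Rietveld refinement in this system
MOLAR_MASS = {"Fe": 55.845, "FeO": 71.844, "Fe3O4": 231.533, "Fe2O3": 159.688}
FE_PER_UNIT = {"Fe": 1, "FeO": 1, "Fe3O4": 3, "Fe2O3": 2}

def reduction_ratio(weight_fractions):
    """Fraction of total iron that is metallic Fe, from Rietveld weight
    fractions: convert each phase to moles of Fe, then take the ratio."""
    mol_fe = {ph: w / MOLAR_MASS[ph] * FE_PER_UNIT[ph]
              for ph, w in weight_fractions.items()}
    return mol_fe.get("Fe", 0.0) / sum(mol_fe.values())

# Illustrative refinement result: mostly metallic iron, residual hematite
print(reduction_ratio({"Fe": 0.95, "Fe2O3": 0.05}))
```

Because Fe2O3 carries two iron atoms per formula unit, a small residual hematite weight fraction still represents a noticeable share of the total iron inventory.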

Keywords: solar concentration, metallurgy, ammonia, hydrogen, sustainability

Procedia PDF Downloads 127