Search results for: the difficult post-earthquake reconstruction in Italy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3546

1566 Influences of Slope Inclination on the Storage Capacity and Stability of Municipal Solid Waste Landfills

Authors: Feten Chihi, Gabriella Varga

Abstract:

Landfilling is the world's most prevalent waste management strategy, but it has become more difficult to apply owing to a shortage of acceptable waste sites. To develop larger landfills and extend their lifespan, the purpose of this article is to expand construction capacity by varying the slope inclination and to examine its effect on the safety factor. The change in capacity with inclination is determined mathematically. Using a new probabilistic calculation method that accounts for the heterogeneity of the waste layers, the safety factor is examined for various slope angles. To assess the effect of slope variation on the overall safety of landfills, over a hundred computations were performed for each angle. Capacity is shown to increase significantly with inclination: passing from 1:3 to 2:3 slope angles and from 1:3 to 1:2 slope angles, the volume of waste that can be deposited increases by 40 percent and 25 percent of the initial volume, respectively. The safety factor results indicate that slopes of 1:3 and 1:2 are safe when the standard method (homogeneous waste) is used for the computation. Using the new approach, a slope with an inclination of 2:3 can also be deemed safe, even though the calculation does not account for the safety-enhancing effect of daily cover layers. Based on the study reported in this paper, the multi-layered, non-homogeneous calculation technique better characterizes the safety factor. Because it more closely resembles the actual state of landfills, the employed technique allows more flexibility in the design parameters. This work represents a substantial advance toward designing landfills that are both safe and economical.
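The probabilistic safety-factor idea can be illustrated with a toy Monte Carlo sketch (not the authors' actual model): each realisation draws a friction angle per waste layer to mimic heterogeneity, and a simplified infinite-slope formula FS = tan(φ)/tan(β) decides whether the slope is safe. All parameter values below are hypothetical.

```python
import math
import random

def monte_carlo_fs(slope_ratio, n_layers=10, n_runs=1000, seed=42):
    """Toy Monte Carlo estimate of the probability that a layered waste
    slope is safe. Each run draws one friction angle per waste layer to
    mimic heterogeneity and applies the simplified infinite-slope formula
    FS = tan(phi) / tan(beta). All parameter values are hypothetical."""
    rng = random.Random(seed)
    beta = math.atan(slope_ratio)  # slope angle; slope_ratio = 1/3 for a 1:3 slope
    safe = 0
    for _ in range(n_runs):
        # heterogeneous waste: independent friction angle per layer (degrees)
        phis = [rng.gauss(30.0, 5.0) for _ in range(n_layers)]
        phi = math.radians(sum(phis) / n_layers)  # average along the slip surface
        if math.tan(phi) / math.tan(beta) >= 1.0:
            safe += 1
    return safe / n_runs

p_13 = monte_carlo_fs(1 / 3)  # gentle 1:3 slope
p_23 = monte_carlo_fs(2 / 3)  # steep 2:3 slope: safe in fewer realisations
```

Repeating such runs per candidate angle, as the paper does with over a hundred computations each, turns the heterogeneous safety factor into a probability of being safe rather than a single deterministic value.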

Keywords: landfill, municipal solid waste, slope inclination, capacity, safety factor

Procedia PDF Downloads 186
1565 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating point values at an unobserved location from observations at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples was used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project at Selangor of Malaysia were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values).
For each macronutrient element, three types of maps were generated with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from the same smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples by substituting ANN-predicted samples while achieving the specified level of accuracy.
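The sample-reduction idea above — predicting "virtual" values at unsampled points and pooling them with the actual samples before interpolation — can be sketched as follows. For brevity, the trained back-propagation network is replaced here by a simple inverse-distance-weighting predictor, and the coordinates and nutrient values are invented.

```python
def idw_predict(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value)
    samples. Stands in here for the trained back-propagation network."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

def augment(actual, grid):
    """Pool actual samples with 'virtual' predicted values at unsampled
    grid points, mirroring the 118-actual + 118-virtual map construction."""
    virtual = [(x, y, idw_predict(actual, x, y)) for x, y in grid]
    return actual + virtual

# Hypothetical nutrient values at four surveyed corners of a unit cell
actual = [(0, 0, 1.0), (1, 0, 2.0), (0, 1, 3.0), (1, 1, 4.0)]
combined = augment(actual, [(0.5, 0.5)])  # one virtual point at the centre
```

The combined list would then be fed to kriging exactly as if all entries were real samples, which is what the paper's similarity comparison evaluates.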

Keywords: artificial neural network, kriging, macronutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 70
1564 Requirements for a Shared Management of State-Owned Building in the Archaeological Park of Pompeii

Authors: Maria Giovanna Pacifico

Abstract:

In Italy, maintenance is not yet a consolidated practice, despite the benefits that could come from it. Among the main reasons are the lack of financial resources and personnel in the public administration and a general lack of knowledge about how to activate and manage preventive and planned maintenance. The experimentation suggests that users and tourists could be involved in the maintenance process, from the knowledge phase to the monitoring phase, by using mobile devices. The goal is to increase the quality of facility management for cultural heritage, prioritizing usage needs and limiting interference between the key stakeholders. The method simplifies the consolidated procedures for information systems, avoiding a loss in the quality and amount of information, by focusing on the users' requirements: management economy, user safety, and accessibility, and by receiving feedback information to define a framework that will lead to predictive maintenance. This proposal was designed to be tested on the state-owned property assets of the Archaeological Park of Pompeii.

Keywords: asset maintenance, key stakeholders, Pompeii, user requirement

Procedia PDF Downloads 125
1563 Fatigue Crack Behaviour in a Residual Stress Field at Fillet Welds in Ship Structures

Authors: Anurag Niranjan, Michael Fitzpatrick, Yin Jin Janin, Jazeel Chukkan, Niall Smyth

Abstract:

Fillet welds are used to join longitudinal stiffeners in ship structures. Welding residual stresses in fillet welds are generally distributed in a non-uniform manner. As shown in previous research, the residual stress redistributes under the cyclic loading that such joints experience during service, and the combination of the initial residual stress, local constraints, and loading can alter the stress field in ways that are extremely difficult to predict. Because the residual stress influences crack propagation originating from the toe of the fillet weld, a full understanding of the residual stress field and how it evolves is very important for structural integrity calculations. Knowledge of the residual stress redistribution in the presence of a flaw is therefore required for better fatigue life prediction. Moreover, defect assessment procedures such as BS 7910 offer very limited guidance on flaw acceptance and the associated residual stress redistribution in the assessment of fillet welds. The objective of this work is therefore to study a surface-breaking flaw at the weld toe region of a fillet weld under cyclic load, in conjunction with residual stress measurement at pre-defined crack depths. This work will provide details of the residual stress redistribution under cyclic load in the presence of a crack. The outcome of the project will inform integrity assessment with respect to the treatment of residual stress in fillet welds, and knowledge of the residual stress evolution for this weld geometry will greatly benefit flaw tolerance assessments (BS 7910, API 591).

Keywords: fillet weld, fatigue, residual stress, structural integrity

Procedia PDF Downloads 142
1562 An Application of Hip Arthroscopy after Acute Injury - A Case Report

Authors: Le Nguyen Binh, Luong Xuan Binh, Le Van Tuan, Tran Binh Duong, Truong Nguyen Khanh Hung, Do Le Hoang Son, Pham Quang Vinh, Hoang Quoc Huy, Nguyen Bach, Nguyen Quoc Khanh Le, Jiunn Horng Kang

Abstract:

Introduction: Traumatic hip dislocation is an emergency in young adults that can cause avascular necrosis of the femoral head or osteoarthritis of the hip joint. Possible causes of these complications are loose bodies of bony or chondral fragments, which are difficult to detect on CT scan or MRI. In such cases, hip arthroscopy may be the method of choice for the diagnosis and treatment of loose bodies in the hip joint after traumatic dislocation. Methods: A case report is presented. A 55-year-old male patient underwent hip arthroscopy to retrieve a loose body in the right hip joint. Results: The patient's hip was reduced under anesthesia in the operating room. X-ray and CT scan after reduction showed that the right hip joint space was widened, with a small fragment of the femoral head (< 5 mm) locked inside the joint. Hip arthroscopy was performed to remove the fragment. After the operation, the patient underwent rehabilitation. After 6 months, he could walk with full weight-bearing, no further dislocation was noted, and the Harris score was 84 points. Conclusions: Although acute traumatic injury of the hip joint is usually treated with open surgery, such methods have many drawbacks, including soft tissue destruction and blood loss. Despite its technical requirements, hip arthroscopy is a less invasive and effective treatment. It may therefore be an alternative treatment for traumatic hip injury and could be applied more frequently in the near future.

Keywords: hip dislocation, hip arthroscopy, hip osteoarthritis, acute hip trauma

Procedia PDF Downloads 86
1561 Stimulating Policy for Attracting Foreign Direct Investment in Georgia

Authors: G. Erkomaishvili, M. Kobalava, T. Lazariashvili, N. Damenia

Abstract:

The current state of foreign direct investment (FDI) in Georgia is analyzed and evaluated in the paper, and the existing legislative background for regulating investments and the stimulating policies to attract them are described. It is noted that in developing countries, the encouragement, support and implementation of investment activity are among the most important tasks, implying a consistent investment policy, an investor-friendly tax regime and legal system, the reduction of administrative barriers and restrictions, fair competitive conditions, and business development infrastructure. The work deals with the determining factors of FDI and the main directions of stimulation, as well as prospective industries where new investments are needed. Contributing and hindering factors and stimulating measures are analyzed. As a result of the research, the direct and indirect factors attracting FDI have been identified. Factors facilitating FDI inflow are: simplicity of starting a business, geopolitical location, low taxes, access to credit, ease of ownership registration, natural resources, a low burden of regulations, a low level of corruption, and low crime rates. Factors hindering FDI inflow are: a small market, the lack of a policy for attracting investments, low qualification of the workforce (despite the large number of unemployed people, it is difficult to find workers with the necessary special skills and qualifications), high interest rates, instability of the national currency exchange rate, the presence of conflict zones within the country, and so forth.

Keywords: foreign direct investment, investor, investment attracting marketing policies, reinvestment

Procedia PDF Downloads 258
1560 Biodegrading Potentials of Plant Growth-Promoting Bacteria on Insecticides Used in Agricultural Soil

Authors: Chioma Nwakanma, Onyeka Okoh Irene, Emmanuel Eze

Abstract:

Pesticide residues left in agricultural soils after cropping are accumulative, difficult to degrade, and harmful to animals, plants, soil, and human health in general. The biodegrading potential of pesticide-resistant PGPB on soil pollution was investigated using an in situ remediation technique following recommended standards. In addition, screening for insecticide utilization, maximum tolerated insecticide concentration, and insecticide biodegradation was performed, and insecticide residues were analyzed by gas chromatography with an electron capture detector. The location of the bacterial degradation genes was also determined. Three plant growth-promoting rhizosphere bacteria (PGPR) were isolated and identified by 16S rRNA sequencing as Paraburkholderia tropica, Burkholderia glumae and Achromobacter insolitus. All three isolates showed phosphate-solubilizing traits and were able to grow on nitrogen-free medium. The isolates were able to utilize the insecticide as a sole carbon source and increase in biomass, and they were significantly tolerant of all the insecticide concentrations screened. The gas chromatographic profiles of the insecticide residues showed a reduction in the peak areas of the insecticides, indicating degradation. The bacterial consortium had the lowest peak areas, showing the highest degradation efficiency. The genes responsible for degradation were found on the plasmids of the isolates. The use of PGPR is therefore recommended for the bioremediation of insecticide-polluted agricultural soil and can also enhance soil fertility.

Keywords: biodegradation, rhizosphere, insecticides utilization, agricultural soil

Procedia PDF Downloads 114
1559 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a condition of irreversible blindness; early diagnosis and appropriate intervention enable patients to retain their sight for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure in the eye damages the optic nerve and causes deterioration of vision. The disease progresses through different levels of severity, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. With its better accuracy and faster imaging, OCT has become the method most commonly used by experts. Despite the precision and speed of OCT and HRT imaging, difficulties and mistakes still occur in the diagnosis of glaucoma, especially in the early stages, and it is difficult for doctors to reach objective results in diagnosis and staging. It therefore seems very important to develop an objective decision support system that diagnoses and grades glaucoma for patients. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop such an evaluation and support system for doctors' use. Pattern-recognition-based computer software would help doctors make an objective evaluation of their patients. After the development and evaluation of the software, the system is planned to serve doctors in different hospitals.

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 302
1558 Pharmacokinetic and Tissue Distribution of Etoposide Loaded Modified Glycol Chitosan Nanoparticles

Authors: Akhtar Aman, Abida Raza, Shumaila Bashir, Mehboob Alam

Abstract:

The development of efficient delivery systems remains a major concern in cancer chemotherapy, as many efficacious anticancer drugs are hydrophobic and difficult to formulate. Nanomedicines based on drug-loaded amphiphilic glycol chitosan micelles offer potential advantages for the formulation of drugs such as etoposide, and may improve the pharmacokinetics and reduce the formulation-related adverse effects observed with current formulations. Amphiphilic derivatives of glycol chitosan were synthesized by chemical grafting of palmitic acid N-hydroxysuccinimide onto the glycol chitosan backbone followed by quaternization. To this end, a 7.9 kDa glycol chitosan was modified by palmitoylation and quaternization, yielding a 13 kDa amphiphilic polymer (GCPQ). Micelles prepared from this amphiphilic polymer had a size of 162 nm and were able to encapsulate up to 3 mg/ml etoposide. Pharmacokinetic results indicated that the GCPQ micelles transformed the biodistribution pattern and significantly increased the etoposide concentration in the brain compared to the free drug after intravenous administration. AUC 0.5–24 h showed statistically significant differences between ETP-GCPQ and the commercial preparation in the liver (25 vs. 70, p < 0.001), spleen (27 vs. 36, p < 0.05), lungs (42 vs. 136, p < 0.001), kidneys (25 vs. 70, p < 0.05), and brain (19 vs. 9, p < 0.001). ETP-GCPQ crossed the blood-brain barrier, and 4, 3.5, 2.6, 1.8, 1.7, 1.5 and 2.5 fold higher levels of etoposide were observed at 0.5, 1, 2, 4, 6, 12 and 24 h, respectively, suggesting that these systems could not only deliver hydrophobic anticancer drugs such as etoposide to tumors but also increase their transport through biological barriers, making GCPQ a good delivery system.
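An exposure metric over a sampling window, such as the reported AUC 0.5–24 h, is conventionally computed with the linear trapezoidal rule. A minimal sketch (with invented concentration data, not values from the study) is:

```python
def auc_trapezoid(times, concs):
    """Linear trapezoidal area under the concentration-time curve
    over the sampled interval."""
    return sum((times[i + 1] - times[i]) * (concs[i] + concs[i + 1]) / 2.0
               for i in range(len(times) - 1))

# Hypothetical brain concentration-time profiles (arbitrary units);
# only the sampling times are taken from the abstract.
t = [0.5, 1, 2, 4, 6, 12, 24]  # h
c_micelle = [8.0, 7.0, 5.2, 3.6, 3.4, 3.0, 2.5]  # illustrative ETP-GCPQ profile
c_free = [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 1.0]     # illustrative free-drug profile
ratio = auc_trapezoid(t, c_micelle) / auc_trapezoid(t, c_free)  # fold difference
```

Comparing such AUC values per organ, as in the liver/spleen/lung/kidney/brain figures above, is how the biodistribution shift between formulations is quantified.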

Keywords: glycol chitosan, micelles, pharmacokinetics, tissue distribution

Procedia PDF Downloads 104
1557 Correlation Between Ore Mineralogy and the Dissolution Behavior of K-Feldspar

Authors: Adrian Keith Caamino, Sina Shakibania, Lena Sunqvist-Öqvist, Jan Rosenkranz, Yousef Ghorbani

Abstract:

Feldspar minerals are among the main components of the earth's crust. They are tectosilicates, meaning that they mainly contain aluminum and silicon; besides these, they contain potassium, sodium, or calcium. Accordingly, feldspar minerals are categorized into three main groups: K-feldspar, Na-feldspar, and Ca-feldspar. In recent years, interest in using K-feldspar has grown tremendously, considering its potential to produce potash and alumina. However, feldspar minerals in general are difficult to decompose in order to dissolve their metallic components. Several methods, including intensive milling, leaching under elevated pressure and temperature, thermal pretreatment, and the use of corrosive leaching reagents, have been proposed to improve their low dissolution efficiency. In this study, as part of the POTASSIAL EU project, mechanical activation by intensive milling followed by leaching with hydrochloric acid (HCl) was applied to overcome the low dissolution efficiency of the K-feldspar components. The grinding operational parameters, namely time, rotational speed, and ball-to-sample weight ratio, were studied using the Taguchi optimization method. The mineralogy of the ground samples was then analyzed using a scanning electron microscope (SEM) equipped with automated quantitative mineralogy. After grinding, the prepared samples were subjected to HCl leaching. Finally, the dissolution efficiencies of the main elements and impurities of the different samples were correlated with the mineralogical characterization results. The dissolution of the K-feldspar components correlates with the ore mineralogy, which provides insight into how best to optimize the leaching conditions for selective dissolution. This will also affect the subsequent purification steps and the final value recovery procedures.

Keywords: K-feldspar, grinding, automated mineralogy, impurity, leaching

Procedia PDF Downloads 76
1556 Rapid Separation of Biomolecules and Neutral Analytes with a Cationic Stationary Phase by Capillary Electrochromatography

Authors: A. Aslihan Gokaltun, Ali Tuncel

Abstract:

The unique properties of capillary electrochromatography (CEC), such as high performance, high selectivity, and low consumption of both reagents and analytes, make this technique attractive for the separation of biomolecules, including nucleosides and nucleotides, peptides, proteins, and carbohydrates. Monoliths have become a well-established separation medium for CEC, in a format that can be compared to a single large 'particle' without interparticular voids. Convective flow through the pores of the monolith significantly accelerates the rate of mass transfer and enables a substantial increase in the speed of the separation. In this work, we propose a new approach for the preparation of a cationic monolithic stationary phase for capillary electrochromatography. Instead of utilizing a charge-bearing monomer during polymerization, the desired charge-bearing group is generated on the capillary monolith after polymerization, using the reactive moiety of the monolithic support in a simple, one-pot reaction. The optimized monolithic column compensates for the disadvantages of frequently used reversed phases, which make the separation of polar solutes difficult. Rapid separations and high column efficiencies are achieved for neutral analytes, nucleic acid bases and nucleosides in reversed-phase mode. The capillary monolith showed satisfactory hydrodynamic permeability and mechanical stability, with relative standard deviation (RSD) values below 2%. This new, promising reactive support, which offers 'ligand selection flexibility' owing to its reactive functionality, represents a new family of separation media for CEC.

Keywords: biomolecules, capillary electrochromatography, cationic monolith, neutral analytes

Procedia PDF Downloads 212
1555 Rheological and Computational Analysis of Crude Oil Transportation

Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh

Abstract:

Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the differing properties of crude in various areas. The design of a crude oil pipeline is thus a very complex and time-consuming process when all the various parameters are considered. Three parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile, and the velocity profile of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required to better understand the flow behavior and predict the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamics technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including the shear rate range with temperature variation, degree of viscosity, elastic modulus, and viscous modulus, was evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as emulsions prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C to determine which additive is best suited for the transportation of crude oil. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied to future pipeline designs.
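A common first-pass model for the viscosity-temperature behavior evaluated over such a range (25 to 60 °C) is an Arrhenius-type fit, which CFD codes often use to supply the viscosity field. This sketch uses invented parameters, not values measured in the paper.

```python
import math

def arrhenius_viscosity(T_kelvin, mu_ref, E_over_R, T_ref):
    """Arrhenius-type viscosity model:
    mu(T) = mu_ref * exp((Ea/R) * (1/T - 1/T_ref)),
    a common first fit for crude-oil viscosity versus temperature.
    mu_ref is the viscosity at the reference temperature T_ref."""
    return mu_ref * math.exp(E_over_R * (1.0 / T_kelvin - 1.0 / T_ref))

# Illustrative parameters only (Pa*s and K); not data from the study
mu25 = arrhenius_viscosity(298.15, mu_ref=0.5, E_over_R=3000.0, T_ref=298.15)
mu60 = arrhenius_viscosity(333.15, mu_ref=0.5, E_over_R=3000.0, T_ref=298.15)
```

Fitting mu_ref and Ea/R to measured rheometer data at a few temperatures gives the viscosity-temperature curve that a pipeline flow profile calculation can then interpolate.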

Keywords: surfactant, natural, crude oil, rheology, CFD, viscosity

Procedia PDF Downloads 455
1554 Analysis of a Multiejector Cooling System in a Truck at Different Loads

Authors: Leonardo E. Pacheco, Carlos A. Díaz

Abstract:

An alternative way of addressing the difficulty of recovering waste heat is an ejector refrigeration cycle for vehicle applications. A group of thermo-compressors supplies the function of the mechanical compressor in a conventional vapor-compression refrigeration system. The thermo-compressor group recovers thermal energy from waste streams (exhaust gases produced by internal combustion engines, gases burned at the wellhead, among others) to eliminate the power consumption of the mechanical compressor. This type of alternative cooling system (air conditioner) offers advantages both in increased energy efficiency and in an improved COP of the system under study, owing to its mechanical simplicity (fewer moving parts). An ejector refrigeration cycle represents a significant step forward in optimizing the efficient use of energy in air conditioning and an alternative for reducing environmental impacts. On one hand, recycling the energy decreases the temperature of the gases released into the atmosphere, which helps limit the rise in the planet's average temperature. In parallel, it mitigates the environmental impact caused by the production and handling of the conventional cooling fluids commonly available on the market, which contribute to the destruction of the ozone layer. This work studied the operation of a multiejector cooling system for a truck with a 420 HP engine at different rotation speeds. The operating limits and the COP of the multiejector cooling system applied to the truck are analyzed over a range of 800–1800 rpm.

Keywords: ejector system, exhaust gas, multiejector cooling system, recovery energy

Procedia PDF Downloads 260
1553 Breaking the Barrier of Service Hostility: A Lean Approach to Achieve Operational Excellence

Authors: Mofizul Islam Awwal

Abstract:

Due to globalization, industries are growing rapidly throughout the world, giving rise to many manufacturing organizations. Recently, service industries have also begun to emerge in large numbers in almost all parts of the world, including some developing countries. In this context, organizations need a strong competitive advantage over their rivals to achieve their strategic business goals, and manufacturing industries are adopting many methods and techniques to achieve such a competitive edge. Over recent decades, manufacturing industries have successfully practiced the lean concept to optimize their production lines, and due to its huge success in the manufacturing context, lean has made its way into the service industry. Very little attention has been paid to services in the area of operations management, and service industries lag far behind manufacturing industries in terms of operations improvement. Transferring the lean concept from the production floor to the service back and front office is a demanding task, though it will clearly yield improvement. Service processes are not as visible as production processes and can be very complex, and the lack of research in this area has made lean adoption quite difficult for service industries, as there are no standardized frameworks for successfully implementing the lean concept in a service organization. The purpose of this research paper is to capture the present state of the service industry in terms of lean implementation. A thorough analysis of the past literature on the applicability and understanding of lean in service structures will be carried out, research papers will be classified, and critical factors will be unveiled for implementing lean in the service industry to achieve operational excellence.

Keywords: lean service, lean literature classification, lean implementation, service industry, service excellence

Procedia PDF Downloads 375
1552 Information Overload, Information Literacy and Use of Technology by Students

Authors: Elena Krelja Kurelović, Jasminka Tomljanović, Vlatka Davidović

Abstract:

The development of web technologies and mobile devices makes creating, accessing, using and sharing information, and communicating with each other, simpler every day. However, as the amount of information constantly increases, it is becoming harder to organize and find quality information effectively, despite the availability of web search engines and filtering and indexing tools. Although digital technologies have an overall positive impact on students' lives, frequent use of these technologies and of digital media enriched with dynamic hypertext and hypermedia content, as well as multitasking and the distractions caused by notifications, calls or messages, can decrease the attention span and make thinking, memorizing and learning more difficult, which can lead to stress and mental exhaustion. This is referred to as 'information overload', 'information glut' or 'information anxiety'. The objective of this study is to determine whether students show signs of information overload and to identify the possible predictors. The research was conducted using a questionnaire developed for the purpose of this study. The results show that students frequently use technology (computers, gadgets and digital media) while showing a moderate level of information literacy, and that they have sometimes experienced symptoms of information overload. According to the statistical analysis, a higher frequency of technology use and a lower level of information literacy are correlated with greater information overload, and a multiple regression analysis confirmed that the combination of these two independent variables has statistically significant predictive capacity for information overload. Information science teachers should therefore pay attention to improving students' information literacy and educate them about the risks of excessive technology use.
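A two-predictor multiple regression of the kind reported (technology use and information literacy predicting overload) can be sketched with a small ordinary-least-squares fit. The data below are synthetic, constructed so that overload = 1 + 0.8·tech_use − 0.5·literacy exactly and the fitted coefficients are recovered; they are not the study's survey data.

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting.
    X is a list of rows, each starting with a 1.0 intercept column."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
           for i in range(p)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    A = [row + [b] for row, b in zip(XtX, Xty)]  # augmented matrix
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p + 1):
                A[r][k] -= f * A[c][k]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):  # back substitution
        s = sum(A[r][k] * beta[k] for k in range(r + 1, p))
        beta[r] = (A[r][p] - s) / A[r][r]
    return beta

# Synthetic (tech_use, literacy) pairs; overload is an exact linear combination
data = [(1, 2), (2, 1), (3, 4), (4, 2), (5, 5), (6, 3)]
y = [1 + 0.8 * t - 0.5 * l for t, l in data]
X = [[1.0, t, l] for t, l in data]
beta = fit_ols(X, y)  # [intercept, tech_use coef (+), literacy coef (-)]
```

The signs of the fitted coefficients mirror the study's finding: a positive weight on technology use and a negative weight on information literacy.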

Keywords: information overload, computers, mobile devices, digital media, information literacy, students

Procedia PDF Downloads 278
1551 Diagnostic Accuracy of the Tuberculin Skin Test for Tuberculosis Diagnosis: Interest of Using ROC Curve and Fagan’s Nomogram

Authors: Nouira Mariem, Ben Rayana Hazem, Ennigrou Samir

Abstract:

Background and aim: During the past decade, the frequency of extrapulmonary forms of tuberculosis has increased; these forms are under-diagnosed by conventional tests. The aim of this study was to evaluate the performance of the Tuberculin Skin Test (TST) for the diagnosis of tuberculosis, using the ROC curve and Fagan's nomogram methodology. Methods: This was a case-control, multicenter study in 11 anti-tuberculosis centers in Tunisia, from June to November 2014. Cases were adults aged between 18 and 55 years with confirmed tuberculosis; controls were free from tuberculosis. A data collection sheet was filled out and a TST was performed for each participant. The diagnostic accuracy of the TST was estimated using the ROC curve and the area under the curve to derive the sensitivity and specificity of a determined cut-off point, and Fagan's nomogram was used to estimate its predictive values. Results: Overall, 1053 participants were enrolled: 339 cases (sex ratio (M/F) = 0.87) and 714 controls (sex ratio (M/F) = 0.99). The mean age was 38.3 ± 11.8 years for cases and 33.6 ± 11 years for controls. The mean diameter of the TST induration was significantly higher among cases than controls (13.7 mm vs. 6.2 mm; p = 10⁻⁶). The area under the curve was 0.789 [95% CI: 0.758-0.819; p = 0.01], corresponding to a moderate discriminating power for this test. The most discriminative cut-off value of the TST, associated with the best sensitivity (73.7%) and specificity (76.6%) pair, was about 11 mm, with a Youden index of 0.503. The positive and negative predictive values were 3.11% and 99.52%, respectively. Conclusion: In view of these results, the TST can be used for tuberculosis diagnosis with good sensitivity and specificity. However, the measurement of skin induration and its interpretation are operator-dependent and remain difficult and subjective; combining the TST with another test, such as the Quantiferon test, would be a good alternative.
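The Fagan-nomogram step — converting a pre-test probability into post-test probabilities through the likelihood ratios implied by sensitivity and specificity — is simple odds arithmetic. The sketch below uses the reported 73.7%/76.6% at the 11 mm cut-off; the prevalence value is an assumption for illustration, not the study's pre-test probability.

```python
def youden(sens, spec):
    """Youden's J index: J = sensitivity + specificity - 1."""
    return sens + spec - 1.0

def post_test_probs(sens, spec, prevalence):
    """Fagan-nomogram arithmetic: pre-test probability -> pre-test odds,
    multiply by the likelihood ratio, convert back to a probability."""
    lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
    lr_neg = (1.0 - sens) / spec          # negative likelihood ratio
    pre_odds = prevalence / (1.0 - prevalence)
    odds_pos = pre_odds * lr_pos
    odds_neg = pre_odds * lr_neg
    ppv = odds_pos / (1.0 + odds_pos)     # P(disease | positive test)
    npv = 1.0 - odds_neg / (1.0 + odds_neg)  # P(no disease | negative test)
    return ppv, npv

j = youden(0.737, 0.766)                  # reported cut-off performance
ppv, npv = post_test_probs(0.737, 0.766, prevalence=0.01)  # assumed prevalence
```

With a low assumed prevalence, this arithmetic reproduces the pattern in the results: a small positive predictive value alongside a negative predictive value close to 100%.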

Keywords: tuberculosis, tuberculin skin test, ROC curve, cut-off

Procedia PDF Downloads 67
1550 Adsorption of 17α-Ethinylestradiol on Activated Carbon Based on Sewage Sludge in Aqueous Medium

Authors: Karoline Reis de Sena

Abstract:

Endocrine disruptors are unregulated or only partially regulated compounds, even in the most developed countries, and can be a danger to the environment and human health. They pass untreated through the secondary stage of conventional wastewater treatment plants; the effluent is then discharged into rivers, upstream and downstream of drinking water treatment plants that use the same river water as their tributary. Long-term consumption of drinking water containing low concentrations of these compounds can cause health problems, as they are persistent in nature and difficult to remove. Research on emerging pollutants is therefore expanding and is fueled by progress in finding appropriate methods for treating wastewater. Adsorption is the most common separation process; it is a simple and low-cost operation, but it is not always eco-efficient. In this context, biosorption has emerged as a subcategory of adsorption in which the sorbent is biomass. It presents numerous advantages compared to conventional treatment methods, such as low cost, high efficiency, minimal use of chemicals, no need for additional nutrients, the regeneration capacity of the biosorbent, and the natural abundance of the biomass used to produce biosorbents. Thus, the use of alternative materials, such as sewage sludge, for the synthesis of adsorbents has proved to be an economically viable alternative, together with the importance of valorizing the generated by-product streams and managing the problem of their correct disposal. In this work, an alternative for the management of sewage sludge is proposed: transforming it into activated carbon and using it in the adsorption of 17α-ethinylestradiol.
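Adsorption performance in batch studies of this kind is conventionally quantified with the standard mass balance q_e = (C0 - Ce)·V/m, where C0 and Ce are the initial and equilibrium concentrations (mg/L), V the solution volume (L), and m the adsorbent mass (g). The numbers below are hypothetical, not results from this work.

```python
def adsorbed_amount(c0, ce, volume_l, mass_g):
    """Equilibrium adsorption capacity q_e, in mg of solute per g of carbon."""
    return (c0 - ce) * volume_l / mass_g

def removal_percent(c0, ce):
    """Percentage of the solute removed from solution at equilibrium."""
    return 100.0 * (c0 - ce) / c0

# Hypothetical jar: 0.5 L of a 10 mg/L solution, 0.1 g of sludge-based
# activated carbon, equilibrium concentration 1.5 mg/L.
q_e = adsorbed_amount(c0=10.0, ce=1.5, volume_l=0.5, mass_g=0.1)  # mg/g
removal = removal_percent(10.0, 1.5)                              # percent
```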

Keywords: 17α-ethinylestradiol, adsorption, activated carbon, sewage sludge, micropollutants

Procedia PDF Downloads 95
1549 A Script for Presentation to the Management of a Teaching Hospital on MYCIN: A Clinical Decision Support System

Authors: Rashida Suleiman, Asamoah Jnr. Boakye, Suleiman Ahmed Ibn Ahmed

Abstract:

In recent years, there has been enormous success in discoveries of scientific knowledge in medicine, coupled with the advancement of technology. Despite all these successes, the diagnosis and treatment of diseases have become complex. MYCIN is a groundbreaking illustration of a clinical decision support system (CDSS), developed to assist physicians in the diagnosis and treatment of bacterial infections by providing suggestions for antibiotic regimens. MYCIN was one of the earliest expert systems to demonstrate how CDSSs may assist human decision-making in complicated areas. Relevant databases were searched using Google Scholar, PubMed, and general Google searches specific to clinical decision support systems. The articles were then screened for a comprehensive overview of the functionality, consultative style, and statistical usage of MYCIN. Inferences drawn from the articles showed some use of MYCIN for problem-based learning among clinicians and students in some countries. Furthermore, the data demonstrated that MYCIN had completed clinical testing at Stanford University Hospital following years of research. The system was shown to be extremely accurate and effective in diagnosing and treating bacterial infections, and it demonstrated how CDSSs might enhance clinical decision-making in difficult circumstances. Despite the challenges MYCIN presents, the benefits of its usage to clinicians, students, and software developers are enormous.
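MYCIN's consultative style rests on backward- and forward-chained production rules weighted by certainty factors (CFs). A minimal sketch of that mechanism is below; the rules, facts, and CF values are invented for illustration, and only the arithmetic follows MYCIN's published scheme: a rule's conclusion receives CF(rule) × min(CF of its premises), and two positive CFs for the same hypothesis combine as cf1 + cf2·(1 - cf1).

```python
def combine(cf1, cf2):
    """Combine two positive certainty factors for the same hypothesis."""
    return cf1 + cf2 * (1.0 - cf1)

# Each rule: (premises, conclusion, rule_cf). Entirely hypothetical content.
RULES = [
    (["gram_negative", "rod_shaped"], "enterobacteriaceae", 0.8),
    (["enterobacteriaceae", "lactose_fermenter"], "e_coli", 0.7),
]

def infer(facts):
    """facts: dict mapping observation -> CF. Returns facts plus conclusions."""
    known = dict(facts)
    fired = set()
    progress = True
    while progress:                      # keep chaining until nothing new fires
        progress = False
        for i, (premises, conclusion, rule_cf) in enumerate(RULES):
            if i in fired or not all(p in known for p in premises):
                continue
            cf = rule_cf * min(known[p] for p in premises)
            known[conclusion] = combine(known[conclusion], cf) if conclusion in known else cf
            fired.add(i)
            progress = True
    return known

result = infer({"gram_negative": 1.0, "rod_shaped": 0.9, "lactose_fermenter": 0.8})
```

Attaching a CF to every conclusion, rather than a hard true/false, is what let MYCIN rank candidate organisms and antibiotic regimens instead of committing to a single answer.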

Keywords: clinical decision support system, MYCIN, diagnosis, bacterial infections, support systems

Procedia PDF Downloads 147
1548 An Examination of the Impact of Sand Dunes on Soils, Vegetation and Water Resources as the Major Means of Livelihood in Gada Local Government Area of Sokoto State, Nigeria

Authors: Abubakar Aminu

Abstract:

Sand dunes, as a major product of desertification, are well known to affect soil resources, water resources, and vegetation, especially in arid and semi-arid regions; this scenario disrupts the livelihood security of people in the affected areas. The research assessed the impact of sand dune accumulation on water resources, soil, and vegetation in Gada local government area of Sokoto State, Nigeria. In this paper, both qualitative and quantitative methods were used to generate data, which were analyzed and discussed. The findings show that livelihoods were affected by accumulations of sand dunes, as water resources and soils were affected negatively, thereby reducing crop yields and making livestock domestication a very difficult and expensive task; the findings also show that 60% of the respondents agreed on the planting of trees as the major solution to combat sand dune accumulation. However, the soil parameters tested indicated low organic carbon, nitrogen, potassium, calcium, and phosphorus, but higher values were recorded for sodium and cation exchange capacity, which serve as evidence of the strongly arid nature of the soil in the area. In line with the above, the researcher recommended a massive tree-planting campaign to curtail desertification, as well as the use of organic manures for higher agricultural yield and, as such, an improvement in livelihood security.

Keywords: soils, vegetation, water, desertification

Procedia PDF Downloads 70
1547 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development of unconventional ship construction and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques for ship structures using the FE method under complex boundary conditions, which are difficult to analyze by existing Ship Classification Societies rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal loads, linear static loads, and dynamic and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo, in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has enabled better techniques for the calculation of the natural frequencies and different mode shapes of a ship structure, to avoid resonance both globally and locally. There has been much development towards ideal design in the ship industry over the past few years, solving complex engineering problems by employing the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. Historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
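The vibration analysis mentioned above reduces, after FE discretization, to the generalized eigenvalue problem det(K - ω²M) = 0 for stiffness matrix K and mass matrix M. As a hedged toy example (a two-mass spring chain, not a ship model), the 2×2 case can be solved in closed form with the quadratic formula:

```python
import math

def natural_frequencies_2dof(k, m):
    """Natural angular frequencies of a fixed-free chain of two equal masses m
    joined by two springs of stiffness k: K = [[2k, -k], [-k, k]], M = m*I.
    Solves det(K - lam*M) = 0 for lam, then returns w = sqrt(lam)."""
    # Characteristic polynomial of M^-1 K: lam^2 - (3k/m)*lam + (k/m)^2 = 0
    a, b, c = 1.0, -3.0 * k / m, (k / m) ** 2
    disc = math.sqrt(b * b - 4 * a * c)
    lam1 = (-b - disc) / 2
    lam2 = (-b + disc) / 2
    return math.sqrt(lam1), math.sqrt(lam2)

w1, w2 = natural_frequencies_2dof(k=1.0, m=1.0)
```

A real hull model replaces the 2×2 matrices with matrices of many thousands of degrees of freedom and an iterative eigensolver, but the resonance-avoidance logic — compare each ωᵢ against excitation frequencies — is the same.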

Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis

Procedia PDF Downloads 136
1546 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images that may be irregular or digit- or character-shaped. Objects and internal objects are quite difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are easily obtained when the sub-regional objects are identified using the SASK algorithm, which focuses mainly on recognizing the number of internal objects in a given image in a shadow-free and error-free way. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally applying the hull detection system. Detecting sub-regional hulls can increase machine learning capability in the detection of characters, and the approach can also be extended to hull recognition in irregularly shaped objects, such as black holes in space exploration, together with their intensities. Layered hulls are those having structured layers inside; they are useful in military services and traffic monitoring to identify the number of vehicles or persons. The proposed SASK algorithm is helpful in identifying such regions and can be useful in the decision process (e.g., to clear traffic or to identify the number of persons on the opposing side in a conflict).
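The hull detection step can be illustrated with a standard convex hull routine (Andrew's monotone chain) applied to extracted boundary points. This is a generic geometric sketch, not the paper's SASK algorithm, whose internals are not given in the abstract.

```python
def cross(o, a, b):
    """z-component of the cross product OA x OB (> 0 means a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order (monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints are shared, drop duplicates

# A square of boundary points with one interior point: the interior point
# does not appear on the hull.
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

Counting "sub-regional" or layered hulls would then amount to re-running the routine on the points remaining inside each detected hull.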

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 349
1545 Optimization of the Drinking Water Treatment Process Improvement of the Treated Water Quality by Using the Sludge Produced by the Water Treatment Plant

Authors: M. Derraz, M. Farhaoui

Abstract:

Problem statement: In water treatment processes, the coagulation and flocculation steps produce sludge in proportion to the raw-water turbidity. Aluminum sulfate is the most common coagulant used in the water treatment plants of Morocco, as in many countries. The sludge produced by the treatment plant is difficult to manage; however, it can be reused in the process to improve the quality of the treated water and reduce the aluminum sulfate dose. Approach: In this study, the effectiveness of sludge was evaluated at different turbidity levels (low, medium, and high) and coagulant dosages to find optimal operating conditions. The influence of settling time was also studied. A set of jar test experiments was conducted to find the sludge and aluminum sulfate dosages that improve the produced water quality at the different turbidity levels. Results: The results demonstrated that using sludge produced by the treatment plant can improve the quality of the produced water and reduce aluminum sulfate usage. The aluminum sulfate dosage can be reduced by 40 to 50%, depending on the turbidity level (10, 20, and 40 NTU). Conclusions/Recommendations: The results show that sludge can be used to reduce the aluminum sulfate dosage and improve the quality of treated water. The highest turbidity removal efficiency is observed with 6 mg/l of aluminum sulfate and 35 mg/l of sludge at low turbidity, 20 mg/l of aluminum sulfate and 50 mg/l of sludge at medium turbidity, and 20 mg/l of aluminum sulfate and 60 mg/l of sludge at high turbidity. The turbidity removal efficiencies are 97.56%, 98.96%, and 99.47% for the low, medium, and high turbidity levels, respectively.

Keywords: coagulation process, coagulant dose, sludge reuse, turbidity removal

Procedia PDF Downloads 237
1544 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo

Abstract:

Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: the maximum likelihood hypothesis testing method based on decision theory, and the statistical pattern recognition method based on feature extraction. The statistical pattern recognition method, which includes feature extraction and classifier design, is now the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features, aiming at the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, improving the performance of the algorithm. This addresses the problem that a simple feature extraction algorithm based on the Holder coefficient feature struggles at low SNR, and it also achieves better recognition accuracy. Simulation results show that the approach in this paper still yields a good classification result at low SNR: even at an SNR of -15 dB, the recognition accuracy still reaches 76%.
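The baseline Holder-coefficient feature this line of work builds on can be sketched as follows; the paper's "improved cloud model" refinement is not specified in the abstract, so only the basic coefficient is shown. For nonnegative sequences f and g and exponents with 1/p + 1/q = 1, the coefficient lies in [0, 1] by Holder's inequality, and p = q = 2 reduces to cosine similarity. The test signals below are invented.

```python
import math

def holder_coefficient(f, g, p=2.0, q=2.0):
    """Holder coefficient between two nonnegative sequences (1/p + 1/q = 1)."""
    num = sum(fi * gi for fi, gi in zip(f, g))
    den = (sum(fi ** p for fi in f) ** (1.0 / p)) * \
          (sum(gi ** q for gi in g) ** (1.0 / q))
    return num / den

# Envelope of a toy signal, compared against itself and against a different one.
signal = [abs(math.sin(0.10 * n)) for n in range(64)]
other = [abs(math.cos(0.37 * n)) for n in range(64)]
h_same = holder_coefficient(signal, signal)  # identical sequences -> 1.0
h_diff = holder_coefficient(signal, other)   # strictly below 1.0
```

In a recognition pipeline, such coefficients computed against reference waveforms form the feature vector that the ELM classifier is trained on.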

Keywords: communication signal, feature extraction, Holder coefficient, improved cloud model

Procedia PDF Downloads 156
1543 Localization of Frontal and Temporal Speech Areas in Brain Tumor Patients by Their Structural Connections with Probabilistic Tractography

Authors: B. Shukir, H. Woo, P. Barzo, D. Kis

Abstract:

Preoperative brain mapping in tumors involving the speech areas has an important role in reducing surgical risks. Functional magnetic resonance imaging (fMRI) is the gold standard method to localize cortical speech areas preoperatively, but its availability in clinical routine is limited. Diffusion MRI based probabilistic tractography is available from standard head MRI and can be used to segment cortical subregions by their structural connectivity. In our study, we used probabilistic tractography to localize the frontal and temporal cortical speech areas. 15 patients with left frontal tumors were enrolled in our study. Speech fMRI and diffusion MRI were acquired preoperatively. The standard automated anatomical labelling atlas 3 (AAL3) was used to define 76 left frontal and 118 left temporal potential speech areas. Four types of tractography were run according to the structural connection of these regions to the left arcuate fascicle (FA), to localize the cortical areas with speech function: 1, frontal through FA; 2, frontal with FA; 3, temporal to FA; 4, temporal with FA connections were determined. Thresholds of 1%, 5%, 10%, and 15% were applied. At each level, the number of frontal and temporal regions identified by fMRI and by tractography was determined, and the sensitivity and specificity were calculated. The 1% threshold showed the best results: sensitivity was 61.63±1.4% and 67.15±23.12%, and specificity was 87.2±10.4% and 75.6±11.37%, for the frontal and temporal regions, respectively. From our study, we conclude that probabilistic tractography is a reliable preoperative technique for localizing cortical speech areas. However, its results are not yet dependable enough for the neurosurgeon to rely on during the operation.

Keywords: brain mapping, brain tumor, fMRI, probabilistic tractography

Procedia PDF Downloads 166
1542 Modelling Heat Transfer Characteristics in the Pasteurization Process of Medium Long Necked Bottled Beers

Authors: S. K. Fasogbon, O. E. Oguegbu

Abstract:

Pasteurization is one of the most important steps in the preservation of beer products; it improves shelf life by inactivating almost all the spoilage organisms present. However, there is no gainsaying the fact that it is always difficult to determine the slowest heating zone, the temperature profile, and the pasteurization units inside bottled beer during pasteurization; hence, there have been significant experimental and ANSYS Fluent approaches to the problem. This work developed a computational fluid dynamics model using COMSOL Multiphysics. The model was simulated to determine the slowest heating zone, the temperature profile, and the pasteurization units inside the bottled beer during the pasteurization process. The results of the simulation were compared with existing data in the literature. The results showed that the location and size of the slowest heating zone depend on the time-temperature combination of each zone. The results also showed that the temperature profile of the bottled beer is affected by the natural convection resulting from density variation during the pasteurization process, and that the pasteurization units increase with time, subject to the temperature reached by the beer. Although the results of this work agreed with the literature on the slowest heating zone and the temperature profiles, the pasteurization unit results did not agree. It is suspected that these were greatly affected by the bottle geometry and the specific heat capacity and density of the beer in question. The work concludes that, for effective pasteurization to be achieved, there is a need to optimize the spray water temperature and the time spent by the bottled product in each of the pasteurization zones.
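The pasteurization units (PU) the model tracks are conventionally computed with the classic beer-pasteurization formula PU = t × 1.393^(T − 60), with t in minutes and T in °C, so that 1 PU equals one minute at 60 °C. The tunnel-pasteurizer temperature profile below is invented for illustration, not taken from the paper.

```python
def pasteurization_units(profile):
    """Total PU accumulated over a piecewise-constant temperature profile.

    profile: list of (duration_minutes, temperature_C) segments.
    """
    return sum(t * 1.393 ** (temp - 60.0) for t, temp in profile)

# One minute at exactly 60 C contributes exactly 1 PU.
assert pasteurization_units([(1.0, 60.0)]) == 1.0

# Hypothetical zones: heating, holding, and cooling.
profile = [(5.0, 45.0), (10.0, 62.0), (5.0, 50.0)]
pu = pasteurization_units(profile)
```

The exponential dependence on T is why the holding zone dominates the total: each extra degree above 60 °C multiplies the lethality rate by about 1.39, while the sub-60 °C zones contribute almost nothing.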

Keywords: modeling, heat transfer, temperature profile, pasteurization process, bottled beer

Procedia PDF Downloads 203
1541 Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models

Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai

Abstract:

Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantCLEF 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that, for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. The ViT model achieved a top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models again perform better than ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike in identifying plants quicker and with improved accuracy.

Keywords: plant identification, CNN, image processing, vision transformer, classification

Procedia PDF Downloads 104
1540 A Systematic Literature Review on the Prevalence of Academic Plagiarism and Cheating in Higher Educational Institutions

Authors: Sozon, Pok Wei Fong, Sia Bee Chuan, Omar Hamdan Mohammad

Abstract:

Owing to the widespread phenomenon of plagiarism and cheating in higher education institutions (HEIs), it is now difficult to ensure academic integrity and quality education. Moreover, the COVID-19 pandemic has intensified the issue by shifting educational institutions into virtual teaching and assessment modes. There is thus a need for an extensive and holistic systematic review of the literature on the prevalence and forms of plagiarism and cheating in HEIs. This paper systematically reviews the literature concerning academic plagiarism and cheating in HEIs to determine the most common forms and to suggest strategies for resolution and for boosting students' academic integrity. The review included 45 articles and publications from the period February 12, 2018, to September 12, 2022, in the Scopus database, aligned with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for the selection, filtering, and reporting of the papers under review, from which conclusions can be drawn. Based on the results, in 48% of the studies reviewed, students' quantitative results were plagiarized or obtained through cheating, with 84% of the articles coming from the Humanities; Psychology and Social Sciences studies accounted for 9% and 7% of the articles, respectively. Individual factors, institutional factors, and social and cultural factors have all contributed to plagiarism and cheating cases in HEIs. The issue may be addressed by establishing ethical and moral development initiatives and modern academic policies and guidelines, supported by technological testing strategies.

Keywords: plagiarism, cheating, systematic review, academic integrity

Procedia PDF Downloads 74
1539 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware is also rising to support more complex processing and operation. However, the operating system, which provides the soul of the computer, stagnated for a time. Faced with the high price of UNIX (originally UNICS, the Uniplexed Information and Computing Service), batch after batch of personal computer owners could only give up. DOS (Disk Operating System) is too simple to bring innovation into play, so it is not a good choice, and macOS is a special operating system for Apple computers that cannot be widely used on other personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of these operating systems in a modular kernel whose core architecture is relatively powerful. The Linux system supports all Internet protocols, so it has very good network functions. Linux supports multiple users, and each user's files are protected from the others. Linux can also multitask, running different programs independently at the same time. Linux is a completely open source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers, and the system is constantly upgraded and improved, with many different versions issued for community and commercial use. The Linux system has good security because it relies on its file permission and partition system. However, as vulnerabilities and hazards constantly evolve, the security of using the operating system also needs more attention. This article focuses on the analysis and discussion of Linux security issues.
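The per-user file protection the abstract credits for Linux's security rests on the owner/group/other permission bits carried by every file. A minimal sketch (using a temporary file, so no real path is assumed; `stat -c` is the GNU coreutils form):

```shell
# Create a scratch file and restrict it: owner read/write, group read, others none.
f=$(mktemp)
chmod 640 "$f"
stat -c '%a' "$f"   # prints the octal mode, here 640
rm -f "$f"
```

With mode 640, any other non-root user attempting to read the file receives "Permission denied", which is the mechanism behind "each user's files are protected from the others".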

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 108
1538 Liquid Biopsy and Screening Biomarkers in Glioma Grading

Authors: Abdullah Abdu Qaseem Shamsan

Abstract:

Background: Gliomas represent the most frequent, heterogeneous group of tumors arising from glial cells, characterized by difficult monitoring, poor prognosis, and fatality. Tissue biopsy is an established procedure for tumor cell sampling that aids diagnosis, tumor grading, and prediction of prognosis. We studied and compared the levels of liquid biopsy markers in patients with different grades of glioma and tried to establish a potential association between glioma and specific blood group antigens. Results: 78 patients were identified, among whom the largest percentage with glioblastoma possessed blood group O+ (53.8%). The second highest frequency was blood group A+ (20.4%), followed by B+ (9.0%) and A- (5.1%), with O- the least frequent. The liquid biopsy biomarkers comprised ALT, LDH, lymphocytes, urea, alkaline phosphatase, AST, neutrophils, and CRP. The levels of all the components increased significantly with the severity of glioma, with maximum levels seen in glioblastoma (grade IV), followed by grade III and grade II, respectively. Conclusion: Gliomas pose significant clinical challenges due to their progression, heterogeneous nature, and aggressive behavior. Liquid biopsy is a non-invasive approach that helps establish the status of the patient and determine the tumor grade, and it may therefore have diagnostic and prognostic utility. Additionally, our study provides evidence for a role of ABO blood group antigens in the development of glioma. However, future clinical research on liquid biopsy will improve the sensitivity and specificity of these tests and validate their clinical usefulness in guiding treatment approaches.

Keywords: GBM: glioblastoma multiforme, CT: computed tomography, MRI: magnetic resonance imaging, ctRNA: circulating tumor RNA

Procedia PDF Downloads 51
1537 Mix Proportioning and Strength Prediction of High Performance Concrete Including Waste Using Artificial Neural Network

Authors: D. G. Badagha, C. D. Modhera, S. A. Vasanwala

Abstract:

There is a great challenge for the civil engineering field to contribute to environmental protection by finding alternatives to cement and natural aggregates. Cement utilization in concrete contributes to global warming, so it is necessary to provide a sustainable solution for producing concrete containing waste. It is very difficult to produce a designated grade of concrete containing different ingredients and water-cement ratios, including waste, while achieving the desired fresh and hardened properties of concrete as per requirements and specifications. To achieve the desired grade of concrete, a number of trials have to be carried out, and only after evaluating the different parameters of long-term performance can the concrete be finalized for different purposes. This research work was carried out to address the problems of time, cost, and serviceability in the field of construction. An artificial neural network was introduced to fix the proportions of concrete ingredients, with 50% waste replacement, for the M20, M25, M30, M35, M40, M45, M50, M55, and M60 grades of concrete. Using the neural network, the mix design of high performance concrete was finalized, and the main basic mechanical properties were predicted at 3, 7, and 28 days. The predicted strengths were compared with the actual experimental mix design and concrete cube strengths after 3, 7, and 28 days. This experimentally validated, neural network based mix design can be used practically in the field to give cost-effective, time-saving, feasible, and sustainable high performance concrete for different types of structures.
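A toy-scale sketch of the kind of feed-forward network used for such strength prediction is below: two inputs (water-cement ratio, waste fraction), one hidden layer, and one scaled strength output, trained by stochastic gradient descent. The training data, the invented linear target relation, and the architecture are all hypothetical; the paper's actual network and dataset are not given in the abstract.

```python
import math
import random

random.seed(0)

def train(data, hidden=4, lr=0.1, epochs=1000):
    """data: list of ((x1, x2), y) with y scaled to ~[0, 1].
    Returns (predict_fn, initial_loss, final_loss)."""
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
             for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    def loss():
        return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

    initial = loss()
    for _ in range(epochs):
        for x, y in data:
            h, out = forward(x)
            err = out - y
            # Backpropagation for the squared error (constant factor folded into lr).
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)
                for i in range(2):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
                w2[j] -= lr * err * h[j]
            b2 -= lr * err
    return (lambda x: forward(x)[1]), initial, loss()

# Invented, normalized mapping: strength falls with w/c ratio and waste fraction.
data = [((wc, wf), (60.0 - 40.0 * wc - 5.0 * wf) / 60.0)
        for wc in (0.40, 0.45, 0.50, 0.55)
        for wf in (0.0, 0.25, 0.5)]
predict, loss0, loss1 = train(data)
```

The paper's workflow would replace the toy targets with measured 3-, 7-, and 28-day cube strengths and the two inputs with the full set of mix proportions.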

Keywords: artificial neural network, high performance concrete, rebound hammer, strength prediction

Procedia PDF Downloads 155