Search results for: efficient crow search algorithm
738 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles
Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects PDAM income and customer costs because the PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target. A thorough customer survey in Surabaya is needed to update customer building data. However, the surveys carried out so far have deployed officers to visit each PDAM customer one by one. Surveys with this method require considerable effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The use of this tool is also quite simple: the device is installed on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors equipped with GNSS, but this technology is costly. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors combined with GNSS and IMU sensors. The cameras used have specifications of 3 MP with a resolution of 720p and a diagonal field of view of 78°. The principle of this device is to integrate four webcam camera sensors with GNSS and IMU to acquire photo data tagged with location data (latitude, longitude) and orientation data (roll, pitch, yaw). The device is also equipped with a tripod and a vacuum suction mount to attach it to the car's roof so that it does not fall off while driving. The output data from this technology are analyzed with artificial intelligence to reduce similar data (cosine similarity) and then classify building types. Data reduction is used to eliminate similar images and retain the image that displays the complete house so that it can be processed for later classification of buildings. 
The AI method used is transfer learning, utilizing a pre-trained model named VGG-16. From the similarity analysis, it was found that data reduction reached 50%. Georeferencing is then done using the Google Maps API to obtain address information corresponding to the coordinates in the data. After that, a geographic join is performed to link the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
Keywords: mobile mapping, GNSS, IMU, similarity, classification
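The cosine-similarity reduction step described above can be sketched as follows. This is a minimal illustration with toy two-dimensional vectors, assuming feature vectors (e.g., from VGG-16) have already been extracted for each photo; the threshold value is hypothetical, not the study's setting:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reduce_similar(features, threshold=0.9):
    """Greedy deduplication: keep an image only if its feature vector
    is not too similar to any already-kept image."""
    kept = []
    for i, f in enumerate(features):
        if all(cosine_similarity(f, features[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Toy example: the second vector nearly duplicates the first.
feats = [np.array([1.0, 0.0]), np.array([0.99, 0.01]), np.array([0.0, 1.0])]
print(reduce_similar(feats))  # → [0, 2]: the near-duplicate is dropped
```

In practice, the features would come from a VGG-16 layer's activations for each street-side photo, and roughly half the frames (overlapping views of the same house) fall above the threshold, matching the ~50% reduction reported.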
Procedia PDF Downloads 84
737 Chongqing, a Megalopolis Disconnected with Its Rivers: An Assessment of Urban-Waterside Disconnect in a Chinese Megacity and Proposed Improvement Strategies, Chongqing City as a Case Study
Authors: Jaime E. Salazar Lagos
Abstract:
Chongqing is located in southwest China and is becoming one of the most significant cities in the world. Its urban territories and metropolitan areas have one of the largest urban populations in China and are partitioned and shaped by two of the biggest and longest rivers on Earth, the Yangtze and Jialing Rivers, making Chongqing a megalopolis intersected by rivers. Historically, Chongqing City enjoyed fundamental connections with its rivers; however, the current urban development of Chongqing City has lost effective integration of the riverbanks within the urban space and structural dynamics of the city. There is therefore a critical lack of physical and urban space conjoined with the rivers, which diminishes the economic, tourist, and environmental development of Chongqing. Using multi-scale satellite-map site verification, the study confirmed the hypothesized urban-waterside disconnect. Collected data demonstrated that the Chongqing urban zone, an area of 5,292 square kilometers with a waterfront of 203.4 kilometers, has only 23.49 kilometers (just 11.5%) of high-quality physical and spatial urban-waterside connection. Compared with other metropolises around the world, this figure represents a significant lack of spatial development along the rivers, an issue that has not been successfully addressed in the last 10 years of urban development. On a macro scale, the study categorized the different kinds of relationships between the city and its riverbanks. These data were then used to create an urban-waterfront relationship map that can serve as a tool for future city planning decisions and real estate development. 
On a micro scale, we discovered three primary elements causing the urban-waterside disconnect: extensive highways along the densest areas and city center, large private real estate developments that do not provide adequate riverside access, and large industrial complexes that almost completely lack riverside utilization. Finally, as part of the suggested strategies, the study concludes that the most efficient and practical way to improve this situation is to follow the historic master planning of Chongqing and create connective nodes at critical urban locations along the river, a strategy that has been used for centuries to handle the same urban-waterside relationship. Reviewing and implementing this strategy will allow the city to better connect with its rivers, reducing the various impacts of disconnect and urban transformation.
Keywords: Chongqing City, megalopolis, nodes, riverbanks disconnection, urban
Procedia PDF Downloads 228
736 Optimizing Hydrogen Production from Biomass Pyro-Gasification in a Multi-Staged Fluidized Bed Reactor
Authors: Chetna Mohabeer, Luis Reyes, Lokmane Abdelouahed, Bechara Taouk
Abstract:
In the transition to sustainability and the increasing use of renewable energy, hydrogen will play a key role as an energy carrier. Biomass has the potential to accelerate the realization of hydrogen as a major fuel of the future. Pyro-gasification converts organic matter mainly into synthesis gas, or “syngas”, composed chiefly of CO, H2, CH4, and CO2. A second, condensable fraction of biomass pyro-gasification products is “tars”. Under certain conditions, tars may decompose into hydrogen and other light hydrocarbons. These conditions include two types of cracking: homogeneous cracking, where tars decompose under the effect of temperature (> 1000 °C), and heterogeneous cracking, where catalysts such as olivine, dolomite, or biochar are used. The latter process favors cracking of tars at temperatures close to pyro-gasification temperatures (~850 °C). Pyro-gasification of biomass coupled with water-gas shift is the most widely practiced process route from biomass to hydrogen today. In this work, an innovative solution is proposed for this conversion route, in that all the pyro-gasification products, not only methane, undergo processes that aim to optimize hydrogen production. First, a heterogeneous cracking step was included in the reaction scheme, using biochar (the solid remaining from the pyro-gasification reaction) as catalyst and CO2 and H2O as gasifying agents. This step was followed by a catalytic steam methane reforming (SMR) step, for which a Ni-based catalyst was tested under different reaction conditions to optimize the H2 yield. Finally, a water-gas shift (WGS) reaction step with an Fe-based catalyst was added to optimize the H2 yield from CO. The reactor used for cracking was a fluidized bed reactor; the one used for SMR and WGS was a fixed bed reactor. The gaseous products were analyzed continuously using a µ-GC (Fusion PN 074-594-P1F). 
With biochar as bed material, more H2 was obtained with steam as gasifying agent (32 mol% vs. 15 mol% with CO2 at 900 °C). CO and CH4 productions were also higher with steam than with CO2. Steam as gasifying agent and biochar as bed material were hence deemed efficient parameters for the first step. Among all parameters tested, CH4 conversions approaching 100% were obtained from SMR reactions using Ni/γ-Al2O3 as catalyst at 800 °C with a steam/methane ratio of 5, giving rise to about 45 mol% H2. Experiments on the WGS reaction are currently being conducted. At the end of this phase, the four reactions will be performed consecutively and the results analyzed. The final aim is the development of a global kinetic model of the whole system in a multi-staged fluidized bed reactor that can be transferred to ASPEN Plus™.
Keywords: multi-staged fluidized bed reactor, pyro-gasification, steam methane reforming, water-gas shift
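The reforming, shift, and char gasification steps named above follow standard stoichiometry, which can be written as:

```latex
\begin{align*}
\text{Steam methane reforming:} \quad & \mathrm{CH_4 + H_2O \rightleftharpoons CO + 3\,H_2} \\
\text{Water-gas shift:} \quad & \mathrm{CO + H_2O \rightleftharpoons CO_2 + H_2} \\
\text{Char (biochar) steam gasification:} \quad & \mathrm{C + H_2O \rightleftharpoons CO + H_2}
\end{align*}
```

Chaining SMR and WGS shows why the consecutive steps maximize yield: full conversion of one CH4 molecule through both reactions delivers up to four H2 per CH4 consumed.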
Procedia PDF Downloads 138
735 Identification Strategies for Unknown Victims from Mass Disasters and Unknown Perpetrators from Violent Crime or Terrorist Attacks
Authors: Michael Josef Schwerer
Abstract:
Background: The identification of unknown victims from mass disasters, violent crimes, or terrorist attacks is frequently facilitated through information from missing persons lists, portrait photos, old or recent pictures showing unique characteristics of a person such as scars or tattoos, or simply reference samples from blood relatives for DNA analysis. In contrast, the identification or at least the characterization of an unknown perpetrator from criminal or terrorist actions remains challenging, particularly in the absence of material or data for comparison, such as fingerprints, which had been previously stored in criminal records. In scenarios that result in high levels of destruction of the perpetrator’s corpse, for instance, blast or fire events, the chance for a positive identification using standard techniques is further impaired. Objectives: This study shows the forensic genetic procedures in the Legal Medicine Service of the German Air Force for the identification of unknown individuals, including such cases in which reference samples are not available. Scenarios requiring such efforts predominantly involve aircraft crash investigations, which are routinely carried out by the German Air Force Centre of Aerospace Medicine as one of the Institution’s essential missions. Further, casework by military police or military intelligence is supported based on administrative cooperation. In the talk, data from study projects, as well as examples from real casework, will be demonstrated and discussed with the audience. Methods: Forensic genetic identification in our laboratories involves the analysis of Short Tandem Repeats and Single Nucleotide Polymorphisms in nuclear DNA along with mitochondrial DNA haplotyping. Extended DNA analysis involves phenotypic markers for skin, hair, and eye color together with the investigation of a person’s biogeographic ancestry. 
Assessment of the biological age of an individual employs CpG-island methylation analysis using bisulfite-converted DNA. Forensic investigative genealogy allows the detection of an unknown person's blood relatives in reference databases. Technically, end-point PCR, real-time PCR, capillary electrophoresis, and pyrosequencing, as well as next-generation sequencing using flow-cell-based and chip-based systems, are used. Results and Discussion: Optimization of DNA extraction from various sources, including difficult matrices like formalin-fixed, paraffin-embedded tissues and degraded specimens from decomposed bodies or from decedents exposed to blast or fire events, provides the basis for successful PCR amplification and subsequent genetic profiling. For cases with extremely low yields of extracted DNA, whole-genome preamplification protocols are successfully used, particularly for genetic phenotyping. Improved primer design for CpG-methylation analysis, together with validated sampling strategies for the analyzed substrates from, e.g., lymphocyte-rich organs, allows successful biological age estimation even in bodies with highly degraded tissue material. Conclusions: Successful identification of unknown individuals, or at least their phenotypic characterization using pigmentation markers together with age-informative methylation profiles, possibly supplemented by a family-tree search employing forensic investigative genealogy, can be provided in specialized laboratories. However, standard laboratory procedures must be adapted to work with difficult and highly degraded sample materials.
Keywords: identification, forensic genetics, phenotypic markers, CpG methylation, biological age estimation, forensic investigative genealogy
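As a rough illustration of the biological age estimation mentioned above: methylation-based age predictors are commonly linear models over methylation fractions at age-informative CpG sites. The gene names below (ELOVL2, FHL2, KLF14) are well-known age-associated loci, but the specific site labels and all coefficients are made up for this sketch and are not the laboratory's model:

```python
# Hypothetical illustration: age predicted as a linear combination of
# methylation fractions (0-1) at a few age-informative CpG sites.
# The intercept and coefficients are invented for the sketch.
INTERCEPT = 10.0
COEFFS = {"ELOVL2_cg1": 55.0, "FHL2_cg2": 30.0, "KLF14_cg3": 20.0}

def estimate_age(methylation):
    """Return predicted age in years from per-site methylation fractions."""
    age = INTERCEPT
    for site, beta in COEFFS.items():
        age += beta * methylation[site]
    return age

# One sample's measured methylation fractions (e.g., from pyrosequencing).
sample = {"ELOVL2_cg1": 0.6, "FHL2_cg2": 0.4, "KLF14_cg3": 0.2}
print(estimate_age(sample))  # → 59.0 (10 + 33 + 12 + 4)
```

A real predictor is fitted by regression on a training cohort of known ages; the linear form is what makes degraded samples workable, since only a handful of robust sites need to be measured.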
Procedia PDF Downloads 51
734 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn
Abstract:
The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human errors. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results. The acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. 
Human errors were minimized in calculations when procedures were automated in quality control laboratories. The assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets
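The two automated calculation steps (EDTA standardization against a zinc reference, then the complexometric assay) can be sketched as below, assuming the usual 1:1 Zn:EDTA complexation. The function names, masses, and volumes are illustrative, not the USP formulae verbatim:

```python
# Minimal sketch of the two spreadsheet calculations, assuming 1:1 Zn:EDTA
# complexation. All numeric inputs are illustrative.
ZN_MM = 65.38  # molar mass of zinc, g/mol

def edta_molarity(zinc_mass_g, titre_ml):
    """Standardization: EDTA molarity from a weighed zinc reference standard."""
    return (zinc_mass_g / ZN_MM) / (titre_ml / 1000.0)

def zinc_per_tablet_mg(edta_m, titre_ml, aliquot_fraction):
    """Assay: mg of elemental zinc per tablet from the sample titration,
    where aliquot_fraction is the fraction of one tablet titrated."""
    moles_zn = edta_m * titre_ml / 1000.0   # 1:1 complexation
    return moles_zn * ZN_MM * 1000.0 / aliquot_fraction

m = edta_molarity(0.1634, 25.0)               # ≈ 0.1 M
zn = zinc_per_tablet_mg(m, 10.0, 0.5)         # ≈ 130.7 mg Zn per tablet
print(round(m, 4), round(zn, 1))
```

Embedding these formulae once in a validated spreadsheet, with built-in replicate-consistency checks, is what removes the per-analysis transcription and arithmetic errors described above.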
Procedia PDF Downloads 169
733 Development and Structural Characterization of a Snack Food with Added Type 4 Extruded Resistant Starch
Authors: Alberto A. Escobar Puentes, G. Adriana García, Luis F. Cuevas G., Alejandro P. Zepeda, Fernando B. Martínez, Susana A. Rincón
Abstract:
Snack foods are usually classified as ‘junk food’ because they have little nutritional value. However, given the growing demand and the third-generation (3G) snack market, and being low-priced and easy to prepare, they can be considered carriers of compounds with certain nutritional value. Resistant starch (RS) is classified as a prebiotic fiber; it helps to control metabolic problems and has anti-cancer colon properties. The active compound can be developed by chemical cross-linking of starch with phosphate salts to obtain a type 4 resistant starch (RS4). The chemical reaction can be achieved by extrusion, a process widely used to produce snack foods, since it is a versatile and low-cost procedure. Starch is the major ingredient in 3G snack manufacture, and the seeds of sorghum, the most drought-tolerant gluten-free cereal, contain high levels of starch (70%). The aim of this research was therefore to develop a 3G snack with RS4 under optimal extrusion conditions (previously determined) from sorghum starch, and to carry out a sensory, chemical, and structural characterization. A sample (200 g) of sorghum starch was conditioned with 4% sodium trimetaphosphate/sodium tripolyphosphate (99:1) and adjusted to 28.5% moisture content. The sample was then processed in a single-screw extruder equipped with a rectangular die. The inlet, transport, and output temperatures were 60 °C, 134 °C, and 70 °C, respectively. The resulting pellets were expanded in a microwave oven. The expansion index (EI), penetration force (PF), and sensory attributes were evaluated in the expanded pellets. The pellets were milled to obtain flour, and RS content, degree of substitution (DS), and percentage of phosphorus (%P) were measured. Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction, differential scanning calorimetry (DSC), and scanning electron microscopy (SEM) analyses were performed in order to determine structural changes after the process. 
The results for the 3G snack were as follows: RS, 17.14 ± 0.29%; EI, 5.66 ± 0.35; and PF, 5.73 ± 0.15 N. Phosphate groups were identified in the starch molecule by FTIR: DS, 0.024 ± 0.003 and %P, 0.35 ± 0.15 [values permitted for food additives (< 4 %P)]. An increase in the gelatinization temperature after cross-linking of the starch was detected; loss of granular structure and vapor bubbles after expansion were observed by SEM; and loss of crystallinity after the extrusion process was observed by X-ray diffraction. Finally, a 3G snack with RS4 was obtained by extrusion technology. Sorghum starch proved efficient for 3G snack production.
Keywords: extrusion, resistant starch, snack (3G), sorghum
Procedia PDF Downloads 309
732 Collaboration with Governmental Stakeholders in Positioning Reputation on Value
Authors: Zeynep Genel
Abstract:
The concept of reputation in corporate development has come to the fore as one of the most frequently discussed topics in recent years. Many organizations that make worldwide investments make an effort to adapt themselves to the topics within the scope of this concept and to promote the name of the organization through the values that might become prominent. Stakeholder groups are considered the most important actors determining reputation. Even when the effect of stakeholders is not evaluated as a direct factor, the indirect effects of their perceptions are very strong on ultimate reputation. It is foreseen that the parallelism between the projected reputation and the perceived reputation, which is established as a result of the communication experiences perceived by the stakeholders, has an important effect on achieving these objectives. In assessing the efficiency of these efforts, the opinions of stakeholders are widely utilized. In other words, the projected reputation, in which the positive and/or negative reflections of corporate communication play an effective role, is measured through how the stakeholders perceptively position the organization. From this perspective, it is thought that the interaction and cooperation of corporate communication professionals with different stakeholder groups during reputation positioning efforts play a significant role in achieving the targeted reputation or in the sustainability of this value. Governmental stakeholders, which communicate intensively with mass stakeholder groups, are among the most effective stakeholder groups of an organization. The most important reason for this is that organizations about which governmental stakeholders have a positive perception inspire more confidence in mass stakeholders. 
At this point, organizations carrying out joint projects with governmental stakeholders in parallel with a sustainable communication approach come to the fore as organizations with strong reputations, whereas the reputation of organizations that fall behind in this regard, or that cannot establish efficiency in this respect, is thought to be perceived as weak. Similarly, social responsibility campaigns in which governmental stakeholders are involved, and which play an efficient role in strengthening reputation, are thought to draw more attention. From this perspective, the role and effect of governmental stakeholders on reputation positioning are discussed in this study. In parallel with this objective, it is aimed to reveal the perspectives of seven governmental stakeholders towards cooperation in reputation positioning. The sample group representing the governmental stakeholders is examined in light of results obtained from in-depth interviews with the executives of different ministries. It is asserted that this study, which aims to express the importance of stakeholder participation in corporate reputation positioning, especially in Turkey, and the effective role of governmental stakeholders in strong reputation, may provide a new perspective on measuring corporate reputation, as well as establishing an important source to contribute to studies in both academic and practical domains.
Keywords: collaborative communications, reputation management, stakeholder engagement, ultimate reputation
Procedia PDF Downloads 225
731 Distinguishing between Bacterial and Viral Infections Based on Peripheral Human Blood Tests Using Infrared Microscopy and Multivariate Analysis
Authors: H. Agbaria, A. Salman, M. Huleihel, G. Beck, D. H. Rich, S. Mordechai, J. Kapelushnik
Abstract:
Viral and bacterial infections are responsible for a variety of diseases. These infections have similar symptoms, such as fever, sneezing, inflammation, vomiting, diarrhea, and fatigue; thus, physicians may encounter difficulties in distinguishing between viral and bacterial infections based on these symptoms. Bacterial infections differ from viral infections in many other important respects regarding the response to various medications and the structure of the organisms. In many cases, it is difficult to know the origin of the infection. When necessary, the physician orders a blood test, urine test, or tissue culture to diagnose the infection type. Using these methods, the time that elapses between the receipt of patient material and the presentation of the test results to the clinician is typically too long (> 24 hours). This time is crucial in many cases for saving the life of the patient and for planning the right medical treatment. Thus, rapid identification of bacterial and viral infections in the lab is of great importance for effective treatment, especially in cases of emergency. Blood was collected from 50 patients with confirmed viral infection and 50 with confirmed bacterial infection. White blood cells (WBCs) and plasma were isolated, deposited on a zinc selenide slide, dried, and measured under a Fourier transform infrared (FTIR) microscope to obtain their infrared absorption spectra. The acquired spectra of WBCs and plasma were analyzed in order to differentiate between the two types of infections. In this study, the potential of FTIR microscopy in tandem with multivariate analysis was evaluated for identifying the agent causing a human infection. The method was used to identify the infectious agent type as either bacterial or viral, based on an analysis of the blood components [i.e., white blood cells (WBCs) and plasma] using their infrared vibrational spectra. 
The time required for the analysis and evaluation after obtaining the blood sample was less than one hour. In the analysis, minute spectral differences in several bands of the FTIR spectra of WBCs were observed between groups of samples with viral and bacterial infections. By employing feature extraction with linear discriminant analysis (LDA), a sensitivity of ~92% and a specificity of ~86% for infection-type diagnosis were achieved. This preliminary study suggests that FTIR spectroscopy of WBCs is a potentially feasible and efficient tool for diagnosing the infection type.
Keywords: viral infection, bacterial infection, linear discriminant analysis, plasma, white blood cells, infrared spectroscopy
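A minimal sketch of the classification step described above, using synthetic two-band "spectra" in place of real FTIR data and a plain Fisher linear discriminant rather than the authors' exact pipeline. Sensitivity and specificity are computed exactly as defined in the abstract (true-positive rate for one class, true-negative rate for the other):

```python
import numpy as np

def fisher_lda_fit(X0, X1):
    """Fit a two-class Fisher discriminant: w = Sw^-1 (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix (sum of per-class scatters).
    Sw = np.cov(X0.T, bias=True) * len(X0) + np.cov(X1.T, bias=True) * len(X1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0       # midpoint of projected means
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    return (X @ w > threshold).astype(int)  # 0 = "bacterial", 1 = "viral"

# Synthetic band intensities for two well-separated classes, 50 samples each.
rng = np.random.default_rng(0)
X0 = rng.normal([1.0, 0.5], 0.05, size=(50, 2))   # "bacterial"
X1 = rng.normal([0.8, 0.7], 0.05, size=(50, 2))   # "viral"
w, t = fisher_lda_fit(X0, X1)
pred = fisher_lda_predict(np.vstack([X0, X1]), w, t)
y = np.array([0] * 50 + [1] * 50)
sensitivity = ((pred == 1) & (y == 1)).sum() / (y == 1).sum()
specificity = ((pred == 0) & (y == 0)).sum() / (y == 0).sum()
print(sensitivity, specificity)
```

In the study itself, the features would be intensities at the informative FTIR bands rather than these synthetic values, and the reported ~92%/~86% figures come from real patient spectra.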
Procedia PDF Downloads 224
730 Urban Waste Water Governance in South Africa: A Case Study of Stellenbosch
Authors: R. Malisa, E. Schwella, K. I. Theletsane
Abstract:
Due to climate change, population growth, and rapid urbanization, the demand for water in South Africa is inevitably surpassing supply. To address similar challenges globally, there has been a paradigm shift in urban waste water management from a conventional “government” approach to a “governance” paradigm. From the governance paradigm, the Integrated Urban Water Management (IUWM) principle emerged. This principle emphasizes efficient urban waste water treatment and the production of high-quality recyclable effluent, thereby mimicking natural water systems in their efficient recycling of water and averting the depletion of natural water resources. The objective of this study was to investigate the drivers of shifting the current urban waste water management approach from a “government” paradigm towards “governance”. The study was conducted through the Interactive Management soft-systems research methodology, which follows a qualitative research design. A case study methodology was employed, guided by a realist research philosophy. The qualitative data gathered were analyzed through interpretative structural modelling using Concept Star for Professionals Decision-Making tools (CSPDM) version 3.64. The constructed model deduced that the main drivers in shifting Stellenbosch municipal urban waste water management towards IUWM “governance” principles are mainly social elements, characterized by overambitious public expectations of municipal water service delivery, misinterpretation of the constitutional right of access to adequate clean water and sanitation, and differing community perceptions of recycled water. Inadequate public participation also emerged as a strong driver. However, disruptive events such as drought may play a positive role in raising awareness of the value of water, resulting in a shift in perceptions of recycled water. 
Once the social elements are addressed, the alignment of governance and administration elements towards IUWM is achievable. Hence, the point of departure for the desired paradigm shift is changing the perceptions and behaviors of water service authorities and serviced communities towards shifting urban waste water management from the “government” to the “governance” paradigm.
Keywords: integrated urban water management, urban water system, wastewater governance, wastewater treatment works
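Interpretative structural modelling, as used above, derives a driver hierarchy from a reachability matrix: a binary direct-influence matrix closed under transitivity. A minimal sketch with a hypothetical driver set and influence matrix (not the study's actual data), using Warshall's algorithm for the closure:

```python
# Sketch of the reachability step in interpretative structural modelling (ISM).
# Drivers and the influence matrix are hypothetical, not the study's model.
drivers = ["public expectations", "constitutional interpretation",
           "perceptions of recycled water", "public participation"]

# adjacency[i][j] == 1 means driver i directly influences driver j
# (diagonal is 1: every driver trivially reaches itself).
adjacency = [
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
]

def reachability(m):
    """Transitive closure (Warshall's algorithm) of a binary relation."""
    n = len(m)
    r = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = 1 if (r[i][j] or (r[i][k] and r[k][j])) else 0
    return r

r = reachability(adjacency)
# participation -> expectations -> perceptions, so the indirect link appears:
print(r[3][2])  # → 1
```

In ISM, this closed matrix is then partitioned into levels (drivers whose reachability and antecedent sets intersect appropriately) to produce the hierarchy of drivers that tools such as CSPDM render graphically.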
Procedia PDF Downloads 156
729 Experimental Recovery of Gold, Silver and Palladium from Electronic Wastes Using Ionic Liquids BmimHSO4 and BmimCl as Solvents
Authors: Lisa Shambare, Jean Mulopo, Sehliselo Ndlovu
Abstract:
One of the major challenges of sustainable development is promoting an industry that is both ecologically durable and economically viable. This requires processes that are material- and energy-efficient while also limiting the production of waste and toxic effluents through effective methods of process synthesis and intensification. In South Africa and globally, both miniaturization and technological advances have substantially increased the amount of electronic waste (e-waste) generated annually. Vast amounts of e-waste are generated yearly, with only a minute quantity being recycled officially. The demand for electronic devices cannot ignore the scarcity and cost of mining the noble metals that contribute significantly to the efficiency of most electronic devices. It has hence become imperative, especially in an African context, that sustainable, environmentally friendly strategies be developed for recycling noble metals from e-waste. This paper investigates the recovery of gold, silver, and palladium from electronic wastes, which contain a vast array of metals, using ionic liquids, which have the potential to reduce the gaseous and aqueous emissions associated with existing hydrometallurgical and pyrometallurgical technologies while also maintaining the economy of the overall recycling scheme through solvent recovery. The ionic liquid 1-butyl-3-methylimidazolium hydrogen sulphate (BmimHSO4) behaves like a protic acid and was used in the present research for the selective leaching of gold and silver from e-waste. Different concentrations of the aqueous ionic liquid were used in the experiments, ranging from 10% to 50%. Thiourea was used as the complexing agent, with Fe3+ as the oxidant. The pH of the reaction was maintained in the range of 0.8 to 1.5. 
The preliminary investigations succeeded in leaching silver and palladium at room temperature, with optimum results at 48 h. The leaching results could not be fully explained, since palladium leached in the absence of gold; hence, no firm conclusion could be drawn, and further experiments are needed. The leaching of palladium was also carried out with hydrogen peroxide as oxidant and 1-butyl-3-methylimidazolium chloride (BmimCl) as the solvent. These experiments were carried out at a temperature of 60 °C and a very low pH, with the chloride ion used to complex the palladium metal. From the preliminary results, it could be concluded that pretreatment of the e-waste is necessary to improve the efficiency of the metal recovery process; a firm conclusion could not yet be drawn from the leaching experiments.
Keywords: BmimCl, BmimHSO4, gold, palladium, silver
Procedia PDF Downloads 291
728 Effects of Probiotic Pseudomonas fluorescens on the Growth Performance, Immune Modulation, and Histopathology of African Catfish (Clarias gariepinus)
Authors: Nelson R. Osungbemiro, O. A. Bello-Olusoji, M. Oladipupo
Abstract:
This study was carried out to determine the effects of the probiotic Pseudomonas fluorescens on the growth performance, histology, and immune modulation of African catfish (Clarias gariepinus) challenged with Clostridium botulinum. P. fluorescens and C. botulinum isolates were obtained from the gut, gill, and skin of adult Clarias gariepinus procured from commercial fish farms in Akure, Ondo State, Nigeria. Physical and biochemical tests were performed on the bacterial isolates using standard microbiological techniques for their identification. Antibacterial activity tests on P. fluorescens showed an inhibition zone with a mean value of 3.7 mm, indicating a high level of antagonism. The experimental diets were prepared at different probiotic bacterial concentrations, comprising five treatments of different bacterial suspensions, including the control (T1), T2 (10³), T3 (10⁵), T4 (10⁷), and T5 (10⁹). Three replicates of each treatment were prepared. Growth performance and nutrient utilization indices were calculated. The proximate analysis of fish carcass and experimental diets was carried out using standard methods. After feeding for 70 days, haematological and histological tests were performed following standard methods; a subgroup from each experimental treatment was also challenged by inoculating intraperitoneally (I/P) with different concentrations of pathogenic C. botulinum. Statistically, there were significant differences (p < 0.05) in the growth performance and nutrient utilization of C. gariepinus. The best weight gain and feed conversion ratio were recorded in fish fed T4 (10⁷), and the poorest values were obtained in the control. Haematological analyses of C. gariepinus fed the experimental diets indicated that all the fish fed diets with P. fluorescens had significantly (p < 0.05) higher white blood cell counts than those fed the control diet. The results of the challenge test showed that fish fed the control diet had the highest mortality rate. 
Histological examination of the gill, intestine, and liver of fish in this study showed several histopathological alterations in fish fed the control diet compared with those fed the P. fluorescens diets. The study indicated that the optimum level of P. fluorescens for C. gariepinus growth and white blood cell formation is 10⁷ CFU g⁻¹, while carcass protein deposition required a P. fluorescens concentration of 10⁵ CFU g⁻¹. The study also confirmed P. fluorescens as an efficient probiotic capable of improving the immune response of C. gariepinus against attack by a virulent fish pathogen, C. botulinum.
Keywords: Clarias gariepinus, Clostridium botulinum, probiotics, Pseudomonas fluorescens
Procedia PDF Downloads 163
727 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study
Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre
Abstract:
Precise landslide monitoring with differential global navigation satellite systems (GNSS) is well established, but technical or economic reasons limit its application by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers plus one reference station for differential GNSS. A dedicated radio link connects all the modules to a base station, where an embedded computer automatically computes all the relative positions (L1 phase, open-source RTKLib software) and populates an Internet server. Each measurement also contains information from an internal inclinometer, the battery level, and position quality indices. Contrary to standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and permits a real overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, ensuring autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, setting up to 7 rovers per site, for an 18-month-long survey. The aim was to assess the robustness and accuracy of the system in different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes). The other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimetric accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. The performance (percentage of aborted solutions, outliers) degraded with sessions shorter than 30 minutes.
The environment also had a strong influence on the percentage of aborted solutions (ambiguity search problem), due to multiple reflections or satellites obstructed by trees and mountains. The length of the baseline (reference-rover distance, single-baseline processing) reduced the accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system’s robustness was limited: snow, avalanches, and frost covered some rovers, including the antennas and vertically oriented solar panels, leading to data interruptions, and strong wind damaged a reference station. The ability to change session parameters remotely was very useful. In conclusion, the rover network tested provided the foreseen sub-centimetric accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey were time-saving. Performance strongly depends on surrounding conditions, but short pre-measurements should allow moving a rover to a better final placement. The system offers a promising hazard mitigation technique. Improvements could include data post-processing for alerts and automatic adjustment of the duration and number of sessions based on battery level and rover displacement velocity.
Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter
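The session-level quality screening described above (rejecting aborted or outlier solutions, then comparing horizontal and vertical noise levels) can be sketched as follows. The 3-sigma rule, the synthetic data, and the east/north/up array layout are illustrative assumptions, not the Geomon processing chain.

```python
import numpy as np

def screen_sessions(enu, k=3.0):
    """Reject outlier GNSS sessions with a k-sigma test on each
    east/north/up component, then report per-component noise levels.

    enu : (n_sessions, 3) array of positions in metres relative to
          the reference station (illustrative layout).
    """
    enu = np.asarray(enu, dtype=float)
    med = np.median(enu, axis=0)
    sig = np.std(enu, axis=0)
    # A session is an outlier if any component deviates by > k*sigma.
    keep = np.all(np.abs(enu - med) <= k * sig, axis=1)
    clean = enu[keep]
    noise = np.std(clean, axis=0)                # E, N, U repeatability
    horizontal = float(np.hypot(noise[0], noise[1]))
    return keep, horizontal, float(noise[2])

# Synthetic survey: 2 mm horizontal noise, 4 mm vertical noise,
# plus one gross outlier mimicking a failed ambiguity fix.
rng = np.random.default_rng(0)
sessions = np.column_stack([
    rng.normal(0, 0.002, 50),
    rng.normal(0, 0.002, 50),
    rng.normal(0, 0.004, 50),
])
sessions[10] = [0.05, 0.05, 0.10]
keep, h_noise, v_noise = screen_sessions(sessions)
print(keep[10], h_noise < v_noise)
```

Such a filter would flag the failed-fix session while leaving the sub-centimetre scatter of good sessions intact.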
Procedia PDF Downloads 111
726 Passing-On Cultural Heritage Knowledge: Entrepreneurial Approaches for a Higher Educational Sustainability
Authors: Ioana Simina Frincu
Abstract:
As institutional initiatives often fail to provide good practices when it comes to heritage management, or to adapt to the changing environment in which they function and to the audiences they address, private actions represent viable strategies for sustainable knowledge acquisition. Information dissemination to future generations is one of the key aspects of preserving cultural heritage and is feasible even in the absence of original artifacts. Combined with the (re)discovery of the natural landscape, open-air exploratory approaches (archeoparks), as opposed to an enclosed, monodisciplinary, rigid framework (traditional museums), are more likely to 'speak the language' of a larger number of people belonging to a variety of categories, ages, and professions. Interactive sites are efficient ways of stimulating heritage awareness and increasing the number of visitors to non-interactive/static cultural institutions that own original pieces of history, deliver specialized information, and make continuous efforts to preserve historical evidence (relics, manuscripts, etc.). It is high time entrepreneurs took over the role of promoting cultural heritage, be it in a more commercial yet more attractive form (business). Inclusive, participatory activities conceived by experts from different domains (history, anthropology, tourism, sociology, business management, integrative sustainability, etc.) have better chances of ensuring long-term cultural benefits for both adults and children, especially when and where the educational discourse fails. These unique self-experience leisure activities, which offer everyone the opportunity to recreate history themselves and to relive the ancestors’ ways of living, surviving, and exploring, should be regarded not as pseudo-scientific approaches but as important pre-steps to museum experiences.
In order to support this theory, focus is laid on two different examples: one dynamic, outdoors (the Boario Terme Archeopark in Italy), and one experimental, held indoors (the reconstruction of the Neolithic sanctuary of Parta, Romania, as part of a transdisciplinary academic course), and their impact on young generations. The conclusion of this study is that the declining engagement of youth (students) in discovering and understanding history, archaeology, and heritage can be revived by entrepreneurial projects.
Keywords: archeopark, educational tourism, open air museum, Parta sanctuary, prehistory
Procedia PDF Downloads 139
725 HRCT of the Chest and the Role of Artificial Intelligence in the Evaluation of Patients with COVID-19
Authors: Parisa Mansour
Abstract:
Introduction: Early diagnosis of coronavirus disease (COVID-19) is extremely important to isolate and treat patients in time, thus preventing the spread of the disease, improving prognosis, and reducing mortality. High-resolution computed tomography (HRCT) chest imaging and artificial intelligence (AI)-based analysis of chest HRCT images can play a central role in the management of patients with COVID-19. Objective: To investigate chest HRCT findings at different stages of COVID-19 pneumonia and to evaluate the potential role of artificial intelligence in the quantitative assessment of lung parenchymal involvement in COVID-19 pneumonia. Materials and Methods: This retrospective observational study was conducted between May 1, 2020 and August 13, 2020. The study included 2,169 patients with COVID-19 who underwent chest HRCT. HRCT images were assessed for the presence and distribution of lesions such as ground-glass opacity (GGO) and consolidation, and for special patterns such as septal thickening and the reversed halo sign. Chest HRCT findings were compared across different stages of the disease (early: <5 days, intermediate: 6-10 days, and late: >10 days). A CT severity score (CTSS) was calculated based on the extent of lung involvement on HRCT, which was then correlated with clinical disease severity. An artificial intelligence "CT Pneumonia Analysis" algorithm was used to quantify the extent of pulmonary involvement by calculating the percentage of opacity (PO) and the percentage of high opacity (PHO). Depending on the type of variables, statistical tests such as chi-square, analysis of variance (ANOVA), and post hoc tests were applied where appropriate. Results: Radiological findings were observed on chest HRCT in 1,438 patients. A typical pattern of COVID-19 pneumonia, i.e., bilateral peripheral GGO with or without consolidation, was observed in 846 patients. About 294 asymptomatic patients were radiologically positive.
Chest HRCT in the early stage of the disease mostly showed GGO. The late stage was indicated by features such as a reticular pattern, septal thickening, and the presence of fibrous bands. Approximately 91.3% of cases with a CTSS ≤ 7 were asymptomatic or clinically mild, while 81.2% of cases with a score ≥ 15 were clinically severe. Mean PO and PHO (30.1 ± 28.0 and 8.4 ± 10.4, respectively) were significantly higher in the clinically severe categories. Conclusion: Because COVID-19 pneumonia progresses rapidly, radiologists and physicians should become familiar with typical chest CT findings in order to treat patients early, ultimately improving prognosis and reducing mortality. Artificial intelligence can be a valuable tool in managing patients with COVID-19.
Keywords: chest, HRCT, covid-19, artificial intelligence, chest HRCT
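A CT severity score of the kind used here is commonly computed by scoring the extent of involvement of each of the five lung lobes and summing. The banding below (0-5 points per lobe, maximum 25) is a common convention and an assumption for illustration, not necessarily the exact protocol of this study.

```python
def lobe_score(percent_involved):
    """Map percentage involvement of one lobe to a 0-5 score
    (assumed banding: 0%, <=5%, <=25%, <=50%, <=75%, >75%)."""
    for upper, score in [(0, 0), (5, 1), (25, 2), (50, 3), (75, 4)]:
        if percent_involved <= upper:
            return score
    return 5

def ct_severity_score(lobes):
    """Total CTSS over the five lobes, ranging 0-25."""
    assert len(lobes) == 5
    return sum(lobe_score(p) for p in lobes)

# A mostly spared lung scores low; diffuse involvement scores high.
print(ct_severity_score([10, 30, 5, 0, 60]))    # 2 + 3 + 1 + 0 + 4 = 10
print(ct_severity_score([80, 90, 80, 75, 85]))  # 5 + 5 + 5 + 4 + 5 = 24
```

On such a 25-point scale, the study's thresholds (CTSS ≤ 7 mild, ≥ 15 severe) correspond roughly to a quarter and over half of the maximum score.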
Procedia PDF Downloads 63
724 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution is electrical segmentation: creating coherent zones within which electrical disturbances mainly remain. By means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various tasks, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph.
The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that uses a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Our experiments also show when robust electrical segmentation is beneficial and in which contexts.
Keywords: community detection, electrical segmentation, multiplex graph, power grid
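The flattening-then-partitioning idea can be sketched as follows, with simple edge thresholding standing in for the paper's penalization step; the function names and the toy five-bus grid are illustrative assumptions.

```python
from collections import defaultdict

def flatten_layers(layers):
    """Aggregate a multiplex graph: each layer is a list of weighted
    edges (u, v, w) over a shared vertex set; weights are summed."""
    agg = defaultdict(float)
    for layer in layers:
        for u, v, w in layer:
            agg[frozenset((u, v))] += w
    return agg

def robust_zones(layers, n_nodes, threshold):
    """Keep only edges that stay strong across layers (aggregate
    weight >= threshold), then return the connected components as
    candidate coherent zones."""
    adj = defaultdict(set)
    for edge, w in flatten_layers(layers).items():
        if w >= threshold:
            u, v = tuple(edge)
            adj[u].add(v)
            adj[v].add(u)
    seen, zones = set(), []
    for start in range(n_nodes):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # depth-first search
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(adj[node] - comp)
        seen |= comp
        zones.append(sorted(comp))
    return zones

# Two layers (two grid situations); the edge (2, 3) is weak in both,
# so the five buses split into two robust zones.
layers = [
    [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 0.1), (3, 4, 1.0)],
    [(0, 1, 1.0), (1, 2, 0.9), (2, 3, 0.2), (3, 4, 1.1)],
]
print(robust_zones(layers, 5, threshold=1.0))  # [[0, 1, 2], [3, 4]]
```

An edge that is strong in only one layer would not survive the aggregation, which is exactly the robustness property the segmentation targets.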
Procedia PDF Downloads 79
723 O-Functionalized CNT Mediated CO Hydro-Deoxygenation and Chain Growth
Authors: K. Mondal, S. Talapatra, M. Terrones, S. Pokhrel, C. Frizzel, B. Sumpter, V. Meunier, A. L. Elias
Abstract:
Worldwide energy independence relies on the ability to leverage locally available resources for fuel production. Recently, syngas produced through the gasification of carbonaceous materials has provided a gateway to a host of processes for the production of various chemicals, including transportation fuels. The basis of the production of gasoline- and diesel-like fuels is the Fischer-Tropsch synthesis (FTS) process: a catalyzed chemical reaction that converts a mixture of carbon monoxide (CO) and hydrogen (H2) into long-chain hydrocarbons. Until now, it has been argued that only transition metal catalysts (usually Co or Fe) are active toward CO hydrogenation and subsequent chain growth in the presence of hydrogen. In this paper, we demonstrate that carbon nanotube (CNT) surfaces are also capable of hydro-deoxygenating CO and producing long-chain hydrocarbons similar to those obtained through FTS, but with orders of magnitude higher conversion efficiencies than present state-of-the-art FTS catalysts. We used advanced experimental tools such as XPS and microscopy techniques to characterize the CNTs and to identify C-O functional groups as the active sites for the enhanced catalytic activity. Furthermore, we conducted quantum density functional theory (DFT) calculations to confirm that C-O groups (inherent on CNT surfaces) could indeed be catalytically active toward the reduction of CO with H2 and capable of sustaining chain growth. The DFT calculations showed that the kinetically and thermodynamically feasible routes for CO insertion and hydro-deoxygenation are different from those on transition metal catalysts. Experiments on a continuous-flow tubular reactor with various nearly metal-free CNTs were carried out, and the products were analyzed. CNTs functionalized by various methods were evaluated under different conditions. Reactor tests revealed that hydrogen pre-treatment reduced the activity of the catalysts to negligible levels.
Without the pre-treatment, the activity for CO conversion was found to be 7 µmol CO/g CNT/s. The O-functionalized samples showed much higher activities, greater than 85 µmol CO/g CNT/s, with nearly 100% conversion. Analyses show that CO hydro-deoxygenation occurred at the C-O/O-H functional groups. It was found that while the products were similar to FT products, differences in selectivities were observed, which, in turn, resulted from a different catalytic mechanism. These findings open a new paradigm for CNT-based hydrogenation catalysts and constitute a defining point for obtaining clean, earth-abundant, alternative fuels through the use of efficient and renewable catalysts.
Keywords: CNT, CO hydrodeoxygenation, DFT, liquid fuels, XPS, XTL
Procedia PDF Downloads 347
722 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale
Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize
Abstract:
Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the spectral bands significant for soil moisture content retrieval at field scale. The study used 60 soil samples from a maize farm, divided into four treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Devices (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum-removal routine was implemented in the R programming language to enhance the absorption features of soil moisture and to characterize its spectral behavior precisely at different water content levels. Partial least squares regression (PLSR) models were used to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the spectral bands significant for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems.
The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R² and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy
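Continuum removal, implemented in R in the study, divides a reflectance spectrum by its upper convex hull (the "continuum") so that absorption features, e.g. at 1450, 1900, and 2200 nm, stand out as values below 1. A minimal Python equivalent, written as an illustrative sketch with a toy spectrum:

```python
import numpy as np

def continuum_removal(wavelengths, reflectance):
    """Divide a reflectance spectrum by its upper convex hull,
    so absorption features map to values < 1 and hull points to 1."""
    w = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    # Build the upper hull with a left-to-right monotone-chain scan.
    hull = []
    for i in range(len(w)):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop the last hull point if it lies on or below the chord.
            if (x2 - x1) * (r[i] - y1) - (y2 - y1) * (w[i] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((w[i], r[i]))
    hx, hy = zip(*hull)
    continuum = np.interp(w, hx, hy)    # hull interpolated to all bands
    return r / continuum

# Toy spectrum: flat continuum at 1.0 with a single absorption dip.
w = np.linspace(1300, 1600, 7)
r = np.array([1.0, 0.95, 0.7, 0.5, 0.7, 0.95, 1.0])
cr = continuum_removal(w, r)
print(cr[0], cr[-1], cr.min())   # endpoints normalise to 1.0
```

Band depths (1 minus the continuum-removed value) at the water absorption features are then natural predictors for a PLSR model.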
Procedia PDF Downloads 99
721 Chromium (VI) Removal from Aqueous Solutions by Ion Exchange Processing Using Eichrom 1-X4, Lewatit Monoplus M800 and Lewatit A8071 Resins: Batch Ion Exchange Modeling
Authors: Havva Tutar Kahraman, Erol Pehlivan
Abstract:
In recent years, environmental pollution from wastewater has risen critically. Effluents discharged from various industries cause this challenge. Different types of pollutants, such as organic compounds, oxyanions, and heavy metal ions, pose a threat to humans and all other living things, and heavy metals are considered one of the main pollutant groups in wastewater. This creates a great need to apply and enhance water treatment technologies. Among adopted treatment technologies, adsorption is gaining more and more attention because of its easy operation, simplicity of design, and versatility. Ion exchange is one of the preferred methods for the removal of heavy metal ions from aqueous solutions and has found widespread application in water remediation technologies during the past several decades. The purpose of this study is therefore the removal of hexavalent chromium, Cr(VI), from aqueous solutions. Cr(VI) is a well-known, highly toxic metal that modifies the DNA transcription process and causes important chromosomal aberrations. The treatment and removal of this heavy metal have received great attention with respect to maintaining allowed legal standards. The present paper investigates some aspects of the use of three anion exchange resins: Eichrom 1-X4, Lewatit Monoplus M800, and Lewatit A8071. Batch adsorption experiments were carried out to evaluate the adsorption capacity of these three commercial resins in the removal of Cr(VI) from aqueous solutions. The chromium solutions used in the experiments were synthetic. The experiments on the parameters that affect adsorption (solution pH, adsorbent concentration, contact time, and initial Cr(VI) concentration) were performed at room temperature. High adsorption rates of metal ions were observed at the onset for the three resins, and plateau values were gradually reached within 60 min.
The optimum pH for Cr(VI) adsorption was found to be 3.0 for all three resins, and adsorption decreases with increasing pH for the three anion exchangers. The suitability of the Freundlich, Langmuir, and Scatchard models for the Cr(VI)-resin equilibrium was investigated. The results obtained in this study demonstrate good comparability among the three anion exchange resins, indicating that Eichrom 1-X4 is the most effective, showing the highest adsorption capacity for the removal of Cr(VI) ions. The anion exchange resins investigated in this study can be used for the efficient removal of chromium from water and wastewater.
Keywords: adsorption, anion exchange resin, chromium, kinetics
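Isotherm fitting of the kind described (Langmuir, alongside Freundlich and Scatchard) is often done on the linearised Langmuir form Ce/qe = Ce/qmax + 1/(KL·qmax). The sketch below fits synthetic equilibrium data, not the study's measurements; qmax and KL values are illustrative.

```python
import numpy as np

def fit_langmuir(ce, qe):
    """Fit the linearised Langmuir isotherm Ce/qe = Ce/qmax + 1/(KL*qmax).
    ce: equilibrium Cr(VI) concentration (mg/L); qe: uptake (mg/g)."""
    ce, qe = np.asarray(ce, float), np.asarray(qe, float)
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    qmax = 1.0 / slope            # maximum monolayer capacity (mg/g)
    kl = slope / intercept        # Langmuir affinity constant (L/mg)
    return qmax, kl

# Synthetic data generated from qmax = 50 mg/g, KL = 0.2 L/mg.
qmax_true, kl_true = 50.0, 0.2
ce = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
qe = qmax_true * kl_true * ce / (1.0 + kl_true * ce)
qmax, kl = fit_langmuir(ce, qe)
print(round(qmax, 2), round(kl, 3))   # recovers 50.0 and 0.2
```

With real batch data, comparing the fitted qmax across the three resins is what would single out the highest-capacity exchanger.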
Procedia PDF Downloads 260
720 Discourse Analysis: Where Cognition Meets Communication
Authors: Iryna Biskub
Abstract:
The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for the investigation of the cognitive and communicative phenomena of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, as it exemplifies his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a well-defined scientific method for researching a particular type of language phenomenon: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry.
The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.
Keywords: cognition, communication, discourse, strategy
Procedia PDF Downloads 254
719 Disruptions to Medical Education during COVID-19: Perceptions and Recommendations from Students at the University of the West Indies, Jamaica
Authors: Charléa M. Smith, Raiden L. Schodowski, Arletty Pinel
Abstract:
Due to the COVID-19 pandemic, the Faculty of Medical Sciences of The University of the West Indies (UWI) Mona in Kingston, Jamaica, had to migrate rapidly to digital and blended learning. Students in the preclinical stage of the program transitioned to full-time online learning, while students in the clinical stage experienced decreased daily patient contact and a blend of online lectures and virtual clinical practice. These sudden changes were coupled with the institutional pressure of introducing a novel approach to education with little time for preparation, as well as additional strain on the faculty, who were overwhelmed by serving as frontline workers. During the period July 20 to August 23, 2021, this study surveyed preclinical and clinical students to capture their experiences of these changes and their recommendations for the future use of digital learning modalities to enhance medical education. It was conducted with a fellow student of the 2021 cohort of the MultiPod mentoring program. A questionnaire was developed and distributed digitally via WhatsApp to all medical students of the UWI Mona campus to assess students' experiences and perceptions of the advantages, challenges, and impact on individual knowledge proficiency brought about by the transition to predominantly digital learning environments. 108 students replied, 53.7% preclinical and 46.3% clinical. 67.6% of respondents were female and 30.6% were male; 1.8% did not identify themselves by gender. 67.2% of preclinical students preferred blended learning, and 60.3% considered that the content presented did not prepare them for clinical work. Only 31% considered the online classes interactive and encouraging of student participation. 84.5% missed socialization with classmates and friends, and 79.3% missed a focused learning environment.
80% of the clinical students felt that they had not learned all that they expected, and only 34% had virtual interaction with patients, mostly by telephone and video calls. Observing direct consultations was considered the most useful modality, yet it was the least used. 96% of the preclinical students and 100% of the clinical students supplemented their learning with additional online tools. The main recommendations from the survey are the use of interactive teaching strategies, more discussion time with lecturers, and increased virtual interactions with patients. Universities are returning to face-to-face learning, yet it is unlikely that blended education will disappear. This study demonstrates that students' perceptions of their experience during mobility restrictions must be taken into consideration in creating more effective, inclusive, and efficient blended learning opportunities.
Keywords: blended learning, digital learning, medical education, student perceptions
Procedia PDF Downloads 166
718 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software
Authors: Junior Akunzi
Abstract:
For patient treatment plan quality assurance in 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc for 3D-CRT and RapidArc treatment planning dosimetry quality assurance at the Centre Antoine Lacassagne (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPSs), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs, according to the DICOM RT radiotherapy protocol, to RadCalc and to the linac via Mosaiq (version 2.5). Measurements were performed in a water phantom using a PTW cylindrical Semiflex ionisation chamber (0.3 cm³, type 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created with patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the Octavius II phantom (PTW) CT scan for RapidArc.
Next, we measured the doses delivered to these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical Semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ± 0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs, and the RadCalc calculations were compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ± 1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ± 1.1%). The independent MU verification calculation software RadCalc has thus been validated for clinical use for both the 3D-CRT and RapidArc techniques. Future work includes the validation of RadCalc for the TomoTherapy machine installed at the Centre Antoine Lacassagne.
Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance
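The agreement figures quoted (a mean percent difference with a standard deviation, checked against an action level) can be computed as in the following sketch; the dose values and the 5% action level are illustrative assumptions, not the clinic's tolerances.

```python
import statistics

def percent_diff(reference, value):
    """Signed percent difference of an independent check dose
    against a reference dose (both in Gy)."""
    return 100.0 * (value - reference) / reference

def agreement(ref_doses, check_doses, tolerance=5.0):
    """Mean and standard deviation of per-plan percent differences,
    plus a pass/fail flag per plan against the action level (%)."""
    diffs = [percent_diff(r, c) for r, c in zip(ref_doses, check_doses)]
    flags = [abs(d) <= tolerance for d in diffs]
    return statistics.mean(diffs), statistics.stdev(diffs), flags

# Illustrative isocentre doses (Gy): TPS vs independent check.
tps = [2.00, 2.50, 1.80, 2.20]
chk = [2.02, 2.47, 1.83, 2.21]
mean, sd, flags = agreement(tps, chk)
print(all(flags))   # every plan within the 5% action level
```

Plans whose flag fails would be the ones sent back for manual investigation before treatment.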
Procedia PDF Downloads 216
717 Dynamic Analysis of Commodity Price Fluctuation and Fiscal Management in Sub-Saharan Africa
Authors: Abidemi C. Adegboye, Nosakhare Ikponmwosa, Rogers A. Akinsokeji
Abstract:
For many resource-rich developing countries, fiscal policy has become a key tool for short-run macroeconomic management, since it is considered to play a critical role in injecting part of resource rents into the economy. However, given its instability, reliance on revenue from commodity exports renders fiscal management, budgetary planning, and the efficient use of public resources difficult. In this study, the linkage between commodity prices and fiscal operations in a sample of commodity-exporting countries in sub-Saharan Africa (SSA) is investigated. The main question is whether commodity price fluctuations affect the effectiveness of fiscal policy as a macroeconomic stabilization tool in these countries. Fiscal management effectiveness is defined as the ability of fiscal policy to react countercyclically to output gaps in the economy. Fiscal policy is measured as the ratio of the fiscal deficit to GDP and the ratio of government spending to GDP; the output gap is measured as the Hodrick-Prescott-filtered cyclical component of output growth for each country; and commodity prices are associated with each country based on its main export commodity. Given the dynamic nature of fiscal policy effects on the economy over time, a dynamic framework is devised for the empirical analysis. The panel cointegration and error correction methodology is used to explain the relationships. In particular, the study employs the panel ECM technique to trace the short-term effects of commodity prices on fiscal management and uses the fully modified OLS (FMOLS) technique to determine the long-run relationships. These procedures provide sufficient estimation of the dynamic effects of commodity prices on fiscal policy. The data cover the period 1992 to 2016 for 11 SSA countries. The study finds that the elasticity of the fiscal policy measures with respect to the output gap is significant and positive, suggesting that fiscal policy is actually procyclical among the countries in the sample.
This implies that fiscal management in these countries follows the trend of economic performance. Moreover, it is found that fiscal policy has not performed well in delivering macroeconomic stabilization for these countries. The difficulty in applying fiscal stabilization measures is attributable to unstable revenue inflows caused by the highly volatile nature of commodity prices in the international market. For commodity-exporting countries in SSA to improve fiscal management, the study suggests that fiscal planning should be largely decoupled from commodity revenues, that domestic revenue bases must be improved, and that a longer-term perspective should be taken in fiscal policy management.
Keywords: commodity prices, ECM, fiscal policy, fiscal procyclicality, fully modified OLS, sub-Saharan Africa
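The output-gap measure used above, the cyclical component of a Hodrick-Prescott filter applied to each country's output series, can be sketched in a few lines. The growth series below is invented for illustration, and lambda = 100 is the conventional smoothing parameter for annual data (the study does not state its choice):

```python
import numpy as np

def hp_filter(y, lamb=100.0):
    """Hodrick-Prescott filter: minimizes sum((y - tau)^2) + lamb * sum((d2 tau)^2).
    Closed form: tau = (I + lamb * D'D)^-1 y, where D is the second-difference operator."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(n) + lamb * (D.T @ D), y)
    return trend, y - trend  # (trend, cycle); the cycle is the "output gap"

# Hypothetical annual output-growth series for one country (illustrative only)
growth = [4.1, 3.8, 5.2, 2.9, 1.5, 3.3, 4.7, 5.0, 2.1, 0.8, 2.5, 3.9]
trend, gap = hp_filter(growth, lamb=100.0)
```

The gap series would then enter the panel regressions as the cyclical variable against which fiscal deficits and spending ratios are tested for countercyclicality.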
Procedia PDF Downloads 164
716 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System
Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu
Abstract:
The Kuqa Piedmont in the Tarim Basin, China, is rich in oil and gas resources and has great development potential. However, it contains a huge, thick gravel layer in which the gravel content is high, the distribution is wide, and the gravel size varies strongly, producing strong heterogeneity. As a result, the drill string vibrates severely and the drill bit wears seriously while drilling, which greatly reduces rock-breaking efficiency, and the rock at the bottom of the hole is subjected to a complex load state combining impact and three-dimensional in-situ stress. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis for engineering design, efficient rock-breaking methods, and theoretical research. Limited by previous experimental techniques, few studies of conglomerate have been published, especially under dynamic load. On this basis, a 3D SHPB system, in which a three-dimensional prestress can be applied to simulate the in-situ stress state, is adopted for dynamic tests of the conglomerate. The results show that the dynamic strength is obviously higher than the static strength: with zero three-dimensional prestress and loading strain rates of 81.25~228.42 s⁻¹, the true triaxial equivalent strength is 167.17~199.87 MPa, and the dynamic increase factor (the ratio of dynamic to static strength) is 1.61~1.92. The higher the impact velocity, the greater the loading strain rate and, in turn, the higher the dynamic strength and the greater the failure strain; all of these increase linearly. There is a critical prestress in the impact direction and in the direction perpendicular to it. In the impact direction, while the prestress is below the critical value, the dynamic strength and the loading strain rate increase linearly; beyond it, the strength decreases slightly and the strain rate decreases rapidly.
In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly up to the critical prestress, and the trends reverse beyond it. The dynamic strength of the conglomerate can be reduced appropriately by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in gravel-rich strata. The research provides an important reference for rate-of-penetration improvement and for theoretical research on drilling in gravel layers.
Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress
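The reported ranges are enough to sketch the linear rate dependence and to back out the implied quasi-static strength. Pairing the endpoints of the strain-rate and strength ranges is an illustrative assumption, not the study's raw data:

```python
import numpy as np

# Endpoints of the reported ranges at zero prestress (pairing them is an
# assumption made here for illustration; the abstract reports ranges only)
rates = np.array([81.25, 228.42])       # loading strain rate, s^-1
strengths = np.array([167.17, 199.87])  # true triaxial equivalent strength, MPa

# Linear strain-rate dependence, as the abstract describes
slope, intercept = np.polyfit(rates, strengths, 1)  # MPa per s^-1, MPa

# Dynamic increase factor (DIF) = dynamic / static strength; the reported
# DIF range 1.61~1.92 implies a quasi-static strength near 104 MPa
implied_static = strengths / np.array([1.61, 1.92])
```

The near-identical implied static strengths at both endpoints are consistent with a single quasi-static baseline, which is what makes the DIF a useful summary of rate sensitivity.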
Procedia PDF Downloads 209
715 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss
Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin
Abstract:
Urban rail transit (URT) plays a significant role in relieving traffic congestion and environmental problems in cities. However, equipment failures and obstructions often cause links to lose capacity in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure this influence, passengers are divided into three categories in case of links' capacity loss. Passengers in category 1 are little affected: their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected: their travel is not reliable, since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT at all, because the passenger flow on their travel paths exceeds the available capacity; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the network is related to passengers' travel time, their number of transfers, and whether seats are available. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort; therefore, passengers' average generalized travel cost is used as the transport service quality indicator. The impact of links' capacity loss on transport service quality is measured by passengers' relative average generalized travel cost with and without the capacity loss. The proportion of passengers affected by each link and the betweenness of links are used to identify the important links in the URT network.
A stochastic user equilibrium assignment model based on an improved logit model is used to determine passengers' categories and calculate their generalized travel cost in case of links' capacity loss; it is solved with the method of successive weighted averages (MSWA). The reliability and transport service quality indicators of the URT network are then calculated from the solution. Taking the Wuhan Metro as a case, its reliability and transport service quality are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by a link effectively identifies the important links, which have a great influence on the reliability and transport service quality of the network; the important links are mostly connected to transfer stations and carry high passenger flows. With an increasing number of failed links and a growing proportion of capacity loss, network reliability keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion in category 2 increases at first and then decreases. Once the number of failed links and the proportion of capacity loss rise beyond a certain level, the further decline in transport service quality weakens.
Keywords: urban rail transit network, reliability, transport service quality, links' capacity loss, important links
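A fixed-point assignment of this kind can be illustrated with the plain method of successive averages on a toy two-route network. The logit dispersion parameter, the congestion cost functions, and the demand below are all invented, and the study's improved logit model and weighted averaging scheme are replaced here by their simpler textbook counterparts:

```python
import math

def logit_loading(costs, theta=0.5):
    """Multinomial logit path-choice probabilities (lower cost -> higher share)."""
    exps = [math.exp(-theta * c) for c in costs]
    s = sum(exps)
    return [e / s for e in exps]

def msa_sue(demand, cost_fns, theta=0.5, iters=200):
    """Stochastic user equilibrium by the method of successive averages:
    flows_{k+1} = flows_k + (1/k) * (auxiliary_flows - flows_k)."""
    n = len(cost_fns)
    flows = [demand / n] * n  # start from a uniform split
    for k in range(1, iters + 1):
        costs = [f(flows[i]) for i, f in enumerate(cost_fns)]
        aux = [p * demand for p in logit_loading(costs, theta)]
        flows = [flows[i] + (aux[i] - flows[i]) / k for i in range(n)]
    return flows

# Two hypothetical parallel routes with congestion-dependent travel costs
cost_fns = [lambda x: 10 + 0.02 * x,    # route A: faster but congestible
            lambda x: 15 + 0.005 * x]   # route B: slower, higher capacity
flows = msa_sue(demand=1000.0, cost_fns=cost_fns)
```

In the study's setting the same averaging loop runs over a full network with reduced link capacities, and the converged flows determine which passengers fall into each of the three categories.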
Procedia PDF Downloads 128
714 Isolation of Nitrosoguanidine Induced NaCl Tolerant Mutant of Spirulina platensis with Improved Growth and Phycocyanin Production
Authors: Apurva Gupta, Surendra Singh
Abstract:
Spirulina spp., a promising source of many commercially valuable products, is grown photoautotrophically in open ponds and raceways on a large scale. However, its economic exploitation in open systems has been limited by the lack of multiple-stress-tolerant strains. The present study aims to isolate a stable, stress-tolerant mutant of Spirulina platensis with an improved growth rate and an enhanced capacity to produce its commercially valuable bioactive compounds. N-methyl-N'-nitro-N-nitrosoguanidine (NTG) at 250 μg/mL (the concentration permitting 1% survival) was employed for chemical mutagenesis to generate random mutants, which were screened against NaCl. In a preliminary experiment, wild-type S. platensis was treated with NaCl concentrations from 0.5-1.5 M to determine its LC₅₀. Mutagenized colonies were then screened for tolerance at 0.8 M NaCl (the LC₅₀), and the surviving colonies were designated NaCl-tolerant mutants of S. platensis. The mutant cells exhibited 1.5 times better growth under NaCl stress than the wild-type strain under control conditions, which might be due to their ability to protect the metabolic machinery against the inhibitory effects of salt stress. Salt stress is known to adversely affect the rate of photosynthesis in cyanobacteria by causing degradation of the pigments. Interestingly, the mutant cells were able to protect their photosynthetic machinery and exhibited 4.23 and 1.72 times enhanced accumulation of Chl a and phycobiliproteins, respectively, resulting in enhanced rates of photosynthesis (2.43 times) and respiration (1.38 times) under salt stress. Phycocyanin production in the mutant cells was enhanced 1.63-fold. Nitrogen metabolism plays a vital role in conferring halotolerance on cyanobacterial cells through the influx of nitrate and the efflux of Na⁺ ions from the cell. The NaCl-tolerant mutant cells took up 2.29 times more nitrate than the wild type and reduced it efficiently.
Nitrate reductase and nitrite reductase activities in the mutant cells also improved by 2.45 and 2.31 times, respectively, under salt stress. From these preliminary results, it can be deduced that enhanced nitrogen uptake and its efficient reduction might underlie the adaptive, halotolerant behavior of the S. platensis mutant cells. Moreover, a NaCl-tolerant mutant of S. platensis with significantly improved growth and phycocyanin accumulation compared to the wild type can be commercially promising.
Keywords: chemical mutagenesis, NaCl tolerant mutant, nitrogen metabolism, photosynthetic machinery, phycocyanin
Procedia PDF Downloads 168
713 Hierarchical Zeolites as Potential Carriers of Curcumin
Authors: Ewelina Musielak, Agnieszka Feliczak-Guzik, Izabela Nowak
Abstract:
Current expectations are that substances of therapeutic interest should be as natural as possible; active substances with the highest possible efficacy and low toxicity are therefore sought. Among natural substances with therapeutic effects, those of plant origin stand out, and curcumin, isolated from the plant Curcuma longa, has proven particularly important from a medical point of view. Owing to its ability to regulate many important transcription factors, cytokines, and protein kinases, curcumin has found use as an anti-inflammatory, antioxidant, antiproliferative, antiangiogenic, and anticancer agent. However, unfavorable properties such as low solubility, poor bioavailability, and rapid degradation under neutral or alkaline pH conditions limit its clinical application. These problems can be addressed by combining curcumin with suitable carriers such as hierarchical zeolites, a new class of materials that exhibits several advantages. Hierarchical zeolites used as drug carriers enable delayed release of the active ingredient and promote drug transport to the desired tissues and organs. In addition, they play an important role in regulating micronutrient levels in the body and have been used successfully in cancer diagnosis and therapy. To load curcumin into hierarchical zeolites synthesized from commercial FAU zeolite, solutions containing curcumin, the carrier, and acetone were prepared. The mixtures were stirred on a magnetic stirrer for 24 h at room temperature. The curcumin-filled hierarchical zeolites were then collected on a glass funnel, washed three times with acetone and distilled water, and air-dried until completely dry. In addition, the effect of adding piperine to a zeolite carrier containing a sufficient amount of curcumin was studied.
The resulting products were weighed, and the percentage of pure curcumin in the hierarchical zeolite was calculated. All synthesized materials were characterized by several techniques: elemental analysis, transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR), N₂ adsorption, X-ray diffraction (XRD), and thermogravimetric analysis (TGA). The aim of the presented study was to improve the biological activity of curcumin by loading it into hierarchical zeolites based on FAU zeolite. The results showed that the loading efficiency of curcumin into hierarchical zeolites based on commercial FAU-type zeolite is enhanced by modifying the zeolite carrier itself. The hierarchical zeolites proved to be very good, efficient carriers of plant-derived active ingredients such as curcumin.
Keywords: carriers of active substances, curcumin, hierarchical zeolites, incorporation
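The percentage of curcumin in the loaded carrier can be computed with the usual gravimetric conventions. The abstract does not state which formula was used, so both common definitions are shown, and all masses below are hypothetical:

```python
def drug_loading_percent(curcumin_mg, loaded_carrier_mg):
    """Drug loading (%) = mass of incorporated curcumin / total mass of the
    loaded carrier * 100 (one common convention; assumed here)."""
    return 100.0 * curcumin_mg / loaded_carrier_mg

def encapsulation_efficiency_percent(curcumin_mg, curcumin_added_mg):
    """Encapsulation efficiency (%) = incorporated / initially added * 100."""
    return 100.0 * curcumin_mg / curcumin_added_mg

# Hypothetical masses, for illustration only
loading = drug_loading_percent(12.5, 112.5)                 # ~11.1 %
efficiency = encapsulation_efficiency_percent(12.5, 20.0)   # 62.5 %
```

Whichever convention is adopted, reporting it alongside the value is what makes loading figures comparable between carriers.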
Procedia PDF Downloads 98
712 Additive Manufacturing of Microstructured Optical Waveguides Using Two-Photon Polymerization
Authors: Leonnel Mhuka
Abstract:
Background: The field of photonics has witnessed substantial growth, with an increasing demand for miniaturized and high-performance optical components. Microstructured optical waveguides have gained significant attention due to their ability to confine and manipulate light at the subwavelength scale. Conventional fabrication methods, however, face limitations in achieving intricate and customizable waveguide structures. Two-photon polymerization (TPP) emerges as a promising additive manufacturing technique, enabling the fabrication of complex 3D microstructures with submicron resolution. Objectives: This experiment aimed to utilize two-photon polymerization to fabricate microstructured optical waveguides with precise control over geometry and dimensions. The objective was to demonstrate the feasibility of TPP as an additive manufacturing method for producing functional waveguide devices with enhanced performance. Methods: A femtosecond laser system operating at a wavelength of 800 nm was employed for two-photon polymerization. A custom-designed CAD model of the microstructured waveguide was converted into G-code, which guided the laser focus through a photosensitive polymer material. The waveguide structures were fabricated using a layer-by-layer approach, with each layer formed by localized polymerization induced by non-linear absorption of the laser light. Characterization of the fabricated waveguides included optical microscopy, scanning electron microscopy, and optical transmission measurements. The optical properties, such as mode confinement and propagation losses, were evaluated to assess the performance of the additive manufactured waveguides. Conclusion: The experiment successfully demonstrated the additive manufacturing of microstructured optical waveguides using two-photon polymerization. Optical microscopy and scanning electron microscopy revealed the intricate 3D structures with submicron resolution. 
The measured optical transmission indicated efficient light propagation through the fabricated waveguides. The waveguides exhibited well-defined mode confinement and relatively low propagation losses, showcasing the potential of TPP-based additive manufacturing for photonics applications. The experiment highlighted the advantages of TPP in achieving high-resolution, customized, and functional microstructured optical waveguides. This experiment thus substantiates the viability of two-photon polymerization as an innovative additive manufacturing technique for producing complex microstructured optical waveguides. The successful fabrication and characterization of these waveguides open doors to further advancements in the field of photonics, enabling the development of high-performance integrated optical devices for various applications.
Keywords: additive manufacturing, microstructured optical waveguides, two-photon polymerization, photonics applications
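Propagation loss of the kind evaluated above is typically reported in dB per unit length from the ratio of launched to transmitted power. The measurement values below are hypothetical, and coupling losses are ignored for simplicity (a cut-back measurement would separate them):

```python
import math

def propagation_loss_db_per_cm(p_in_uw, p_out_uw, length_cm):
    """Total insertion loss in dB divided by waveguide length.
    Assumes coupling losses are negligible or already subtracted."""
    loss_db = 10.0 * math.log10(p_in_uw / p_out_uw)
    return loss_db / length_cm

# Hypothetical measurement: 100 uW launched, 70 uW transmitted over 0.5 cm
loss = propagation_loss_db_per_cm(100.0, 70.0, 0.5)  # ~3.1 dB/cm
```

Quoting loss per centimeter lets waveguides of different lengths be compared directly, which is why it is the standard figure of merit in such characterizations.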
Procedia PDF Downloads 100
711 Mapping of Renovation Potential in Rudersdal Municipality Based on a Sustainability Indicator Framework
Authors: Barbara Eschen Danielsen, Morten Niels Baxter, Per Sieverts Nielsen
Abstract:
Europe is currently in an energy and climate crisis, which calls for more sustainable solutions than those used before. Buildings account for 40% of Europe's energy use, so significant attention has turned to finding and committing to new initiatives that reduce energy consumption in buildings. In 2021, the European Union introduced a building standard to be met by 2030. This new standard requires a significant reduction of CO₂ emissions from both privately and publicly owned buildings, with the overall aim of achieving a zero-emission building stock by 2050. As part of the 'Fit for 55' package, the EU is revising the Energy Performance of Buildings Directive (EPBD), adopted on March 14, 2023. The revised directive's main goal is to renovate the least energy-efficient homes in Europe. A renovation project carries a cost for the homeowner, but it also brings an improvement in energy efficiency and therefore a cost reduction. After the implementation of the EU directive, many homeowners will have to focus on how to carry out the most effective energy renovations of their homes. The new directive will affect almost one million Danish homes (30%), as they do not meet the newly implemented requirements for energy efficiency. The problem for these one million homeowners is that it is not easy to decide which renovation project to pursue: houses are built differently, and there are many possible solutions. The main focus of this paper is therefore to identify the most impactful renovation solutions and evaluate them with a criteria-based sustainability indicator framework. The result of the analysis gives each homeowner insight into the various renovation options, including both advantages and disadvantages, with the aim of avoiding unnecessary costs and errors while minimizing their CO₂ footprint.
Given that the new EU directive impacts a significant number of homeowners and their homes, both in Denmark and in the rest of the European Union, it is crucial to clarify which renovations have the greatest environmental impact and are the most cost-effective. We have evaluated the ten most impactful solutions against an indicator framework comprising nine indicators covering economic, environmental, and social factors. The results are packaged into three bundles: the most cost-effective in the short term, the most cost-effective in the long term, and the most sustainable. The results of the study ensure transparency and thereby provide homeowners with a tool to support their decision-making. The analysis is based mostly on qualitative indicators, but it will be possible to evaluate most of the indicators quantitatively in a future study.
Keywords: energy efficiency, building renovation, renovation solutions, building energy performance criteria
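A criteria-based framework of this kind reduces, mechanically, to scoring each package on every indicator and aggregating with weights that reflect the homeowner's priorities. The nine indicator names, all scores, and the weights below are illustrative stand-ins, not the study's data:

```python
# Hypothetical indicator names and 1-5 scores for the three packages;
# everything below is invented for illustration.
INDICATORS = ["investment cost", "payback time", "energy saving", "CO2 reduction",
              "lifetime", "maintenance", "indoor comfort", "noise", "disruption"]

packages = {
    "cost-effective (short term)": [5, 5, 2, 2, 3, 4, 3, 3, 5],
    "cost-effective (long term)":  [3, 3, 4, 4, 5, 4, 4, 3, 3],
    "most sustainable":            [2, 2, 5, 5, 5, 3, 5, 4, 2],
}

def weighted_score(scores, weights):
    """Simple weighted average; weights encode the homeowner's priorities."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# A climate-first homeowner: emphasize energy saving and CO2 reduction
climate_first = [1, 1, 3, 3, 2, 1, 1, 1, 1]
ranking = sorted(packages,
                 key=lambda p: weighted_score(packages[p], climate_first),
                 reverse=True)
```

Changing the weight vector re-ranks the same three packages, which is exactly the transparency the framework is meant to give each homeowner.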
Procedia PDF Downloads 88
710 Data Mining in Healthcare for Predictive Analytics
Authors: Ruzanna Muradyan
Abstract:
Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have evolved dynamically, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. Applying data mining techniques to this vast library opens up a variety of prospects for precision medicine, predictive analytics, and insight generation. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluation are important points of focus. By applying machine learning algorithms and predictive analytics to this abundance of data, healthcare providers may tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories. This proactive strategy makes possible better patient outcomes, more efficient use of resources, and earlier interventions. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data items, providing enhanced insights into the causes of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can gain practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract also explores the problems and ethical issues that come with using data mining techniques in the healthcare industry: in order to use these approaches properly, it is essential to strike a balance between data privacy, security concerns, and the interpretability of complex models.
Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.
Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health
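Risk stratification of the kind described above reduces, at its core, to a predictive model plus clinical thresholds. The sketch below uses a logistic model whose coefficients, features, and cut-offs are entirely hypothetical; in practice they would be fitted and validated on the EHR data the abstract describes:

```python
import math

def risk_probability(features, weights, bias):
    """Logistic predictive model: P(adverse outcome) = sigmoid(w . x + b)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def stratify(prob, low=0.2, high=0.6):
    """Bucket patients for proactive follow-up; thresholds are illustrative."""
    if prob < low:
        return "low risk"
    if prob < high:
        return "moderate risk"
    return "high risk"

# Hypothetical fitted coefficients for: age (decades), smoker flag, HbA1c (%)
weights, bias = [0.4, 0.9, 0.6], -4.0
p = risk_probability([6.5, 1.0, 7.1], weights, bias)
bucket = stratify(p)
```

The high-risk bucket is then the population targeted for early intervention, which is the proactive resource allocation the abstract emphasizes.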
Procedia PDF Downloads 63
709 Role of Artificial Intelligence in Nano Proteomics
Authors: Mehrnaz Mostafavi
Abstract:
Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a distinct approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%; 95-97% in particularly challenging conditions). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, resolving functionally similar proteins. The theoretical analysis, the protein labeler program, the finite-difference time-domain calculation of plasmonic fields, and the simulation of nanopore-based optical sensing are detailed in the methods section.
The study anticipates further exploration of the temporal distributions of protein translocation dwell times and their impact on the convolutional neural network's identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification, with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence
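The partial-read identification idea can be illustrated with a toy matcher over tri-color tag signatures. The labeled residue set (lysine, cysteine, tyrosine), the sequences, and the prefix-match score below are all invented stand-ins for the paper's CNN operating on raw optical traces:

```python
def tag_signature(sequence, labeled="KCY"):
    """Reduce a protein sequence to the string of residues carrying a
    fluorescent tag (an assumed tri-color labeling scheme)."""
    return "".join(a for a in sequence if a in labeled)

def identify(read, proteome):
    """Match a (possibly partial) observed tag read against each candidate's
    signature; a longest-matching-prefix score stands in for the paper's
    CNN classifier over optical traces."""
    def score(name):
        sig = tag_signature(proteome[name])
        n = 0
        for a, b in zip(read, sig):
            if a != b:
                break
            n += 1
        return n
    return max(proteome, key=score)

# Toy two-protein "proteome" with invented sequences
toy_proteome = {
    "protA": "MKTACYLLK",  # tag signature: KCYK
    "protB": "MYYKCAAAC",  # tag signature: YYKCC
}
hit = identify("KCY", toy_proteome)  # a partial read still picks out protA
```

Even this crude matcher shows why partial reads can suffice for identification: tag signatures are sparse enough that short prefixes already separate candidates, which a trained CNN exploits far more robustly.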
Procedia PDF Downloads 95