Search results for: waste classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4763

563 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have attempted to obtain reliable data on the natural history of breast cancer growth, and mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer. However, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases which may help to improve the prediction accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between the primary tumor (PT) and metastases (MTS); 3) analyzing the CoM-IV scope of application; 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model, consists of a system of deterministic nonlinear and linear equations, and corresponds to the TNM classification. It allows calculation of different growth periods of the primary tumor and primary metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for primary metastases; 3) the ‘visible period’ for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, whereas the others rely on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of primary metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of primary metastases. The CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth at stage IV (T1-4N0-3M1), with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates understanding of the appearance period and manifestation of primary metastases.
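
The CoM-IV equations themselves are not reproduced in the abstract; the sketch below shows only the generic exponential-growth arithmetic the model builds on (number of doublings and tumor volume doubling time). The spherical-volume assumption and all numbers are illustrative, not taken from CoM-IV.

```python
import math

def volume_from_diameter(d_mm: float) -> float:
    """Volume (mm^3) of a tumor idealized as a sphere of diameter d_mm."""
    return math.pi * d_mm ** 3 / 6.0

def doublings_between(v_start: float, v_end: float) -> float:
    """Number of volume doublings needed to grow from v_start to v_end."""
    return math.log2(v_end / v_start)

def doubling_time_days(v_start: float, v_end: float, elapsed_days: float) -> float:
    """Tumor volume doubling time under exponential growth V(t) = V0 * 2^(t/DT)."""
    return elapsed_days / doublings_between(v_start, v_end)

# Illustration: a tumor growing from 10 mm to 20 mm diameter in 300 days
v0 = volume_from_diameter(10.0)          # ~523.6 mm^3
v1 = volume_from_diameter(20.0)          # ~4188.8 mm^3, i.e. 3 doublings
print(doublings_between(v0, v1))         # 3.0
print(doubling_time_days(v0, v1, 300))   # 100.0 days per doubling
```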

Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival

Procedia PDF Downloads 330
562 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning

Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu

Abstract:

Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. As we read the news, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly because the use of advanced technologies improves at the same time the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for patients for one or more specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body; and it does not achieve its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients’ safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world.

Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems

Procedia PDF Downloads 87
561 Fermentation of Pretreated Herbaceous Cellulosic Wastes to Ethanol by Anaerobic Cellulolytic and Saccharolytic Thermophilic Clostridia

Authors: Lali Kutateladze, Tamar Urushadze, Tamar Dudauri, Besarion Metreveli, Nino Zakariashvili, Izolda Khokhashvili, Maya Jobava

Abstract:

Lignocellulosic waste streams from agriculture and the paper and wood industries are renewable, plentiful and low-cost raw materials that can be used for large-scale production of liquid and gaseous biofuels. As opposed to the prevailing multi-stage biotechnological processes developed for bioconversion of cellulosic substrates to ethanol, in which high-cost cellulase preparations are used, Consolidated Bioprocessing (CBP) accomplishes cellulose and xylan hydrolysis followed by fermentation of both C6 and C5 sugars to ethanol in a single-stage process. A syntrophic microbial consortium comprising anaerobic, thermophilic, cellulolytic, and saccharolytic bacteria of the genus Clostridia, with improved ethanol productivity and high tolerance to fermentation end-products, has been proposed for achieving CBP. Sixty-five new strains of anaerobic thermophilic cellulolytic and saccharolytic Clostridia were isolated from different wetlands and hot springs in Georgia. Using the new isolates, fermentation of mechanically pretreated wheat straw and corn stalks was carried out under an oxygen-free nitrogen atmosphere in thermophilic conditions (T = 55°C) and at pH 7.1. Process duration was 120 hours. Liquid and gaseous products of fermentation were analyzed daily using Perkin-Elmer gas chromatographs with flame ionization and thermal conductivity detectors. Residual cellulose, xylan, xylose, and glucose were determined using standard methods. The cellulolytic and saccharolytic bacterial strains degraded the mechanically pretreated herbaceous cellulosic wastes and fermented glucose and xylose to ethanol, acetic acid and gaseous products such as hydrogen and CO2. Specifically, the maximum yield of ethanol was reached at 96 h of fermentation and varied between 2.9 and 3.2 g per 10 g of substrate. The content of acetic acid did not exceed 0.35 g/L. Other volatile fatty acids were detected in trace quantities.

Keywords: anaerobic bacteria, cellulosic wastes, Clostridia sp., ethanol

Procedia PDF Downloads 286
560 Groundwater Geophysical Studies in the Developed and Sub-Urban BBMP Area, Bangalore, Karnataka, South India

Authors: G. Venkatesha, Urs Samarth, H. K. Ramaraju, Arun Kumar Sharma

Abstract:

Projections of groundwater demand state that the total domestic water demand for greater Bangalore would increase from 1,170 MLD in 2010 to 1,336 MLD in 2016. Dependence on groundwater is ever increasing due to rapid industrialization and urbanization. It is estimated that almost 40% of the population of Bangalore is dependent on groundwater. Due to the unscientific disposal of the domestic and industrial waste generated, groundwater is getting highly polluted in the city. The scale of this impact depends mainly upon the water-service infrastructure, the superficial geology and the regional setting. The quality of groundwater is as important as its quantity. Jointed and fractured granites and gneisses constitute the major aquifer system of the BBMP area. Two new observatory borewells were drilled and a lithology report was prepared. Petrographic analysis (XRD/XRF) and water quality analysis were carried out as per standard methods. Petrographic samples were analysed by collecting rock chips from the borewells at every 20 ft of depth; most of the samples were similar and were identified as biotite gneiss and schistose amphibolite. Water quality analysis was carried out for individual chemical parameters for the two borewells drilled. The first borewell struck water at 150 ft (total depth 200 ft) and the second at 740 ft (total depth 960 ft). Five water samples were collected down to the full depth of each borewell. Chemical parameter values for the two borewells respectively were: total hardness (360-348, 280-320) mg/L, nitrate (12.24-13.5, 45-48) mg/L, chloride (104-90, 70-70) mg/L, Fe (0.75-0.09, 1.288-0.312) mg/L, etc. Water samples were also analysed from various parts of the BBMP area covering 750 sq. km, and thematic maps of water quality (IDW method) were generated from these samples for the post-monsoon season. The study aims to explore the sub-surface lithological layers and the thickness of the weathered zone, which indirectly helps to identify groundwater pollution sources near surface water bodies, dug wells, etc. The above data are interpreted for future groundwater resources planning and management.
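
The thematic maps mentioned above use Inverse Distance Weighting (IDW), which estimates water quality at an unsampled location as a distance-weighted average of nearby measurements. A minimal sketch follows; the well coordinates and nitrate readings are hypothetical and only illustrate the weighting.

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each query point gets the average of the
    known values, weighted by 1 / distance**power."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    estimates = []
    for q in np.atleast_2d(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < eps):                  # query coincides with a sample
            estimates.append(values[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        estimates.append(np.sum(w * values) / np.sum(w))
    return np.array(estimates)

# Hypothetical borewell locations (x, y in km) with nitrate readings (mg/L)
wells = [(0, 0), (1, 0), (0, 1), (1, 1)]
nitrate = [12.2, 13.5, 45.0, 48.0]
print(idw_interpolate(wells, nitrate, [(0.5, 0.5)]))  # weighted mean near centre
```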

Keywords: lithology, petrographic, pollution, urbanization

Procedia PDF Downloads 288
559 The Textual Criticism on the Age of ‘Wanli’ Shipwreck Porcelain and Its Comparison with ‘Witte Leeuw’ and Hatcher Shipwreck Porcelain

Authors: Yang Liu, Dongliang Lyu

Abstract:

After the Wanli shipwreck was discovered 60 miles off the east coast of Tanjong Jara in Malaysia, numerous marvelous ceramic shards were salvaged from the seabed. Remarkable pieces of Jingdezhen blue-and-white porcelain recovered from the site represent the essential part of this fascinating research. The porcelain cargo of the Wanli shipwreck is significant to studies of exported porcelains and the Jingdezhen porcelain manufacturing industry of the late Ming dynasty. Using ceramic shard categorization and the study of Chinese and Western historical documents as a research strategy, the paper sheds new light on the classification of the Wanli shipwreck wares, with Jingdezhen kiln ceramics as its main focus. The article also discusses Jingdezhen blue-and-white porcelains from the perspective of domestic versus export markets, and proceeds to the systematization and analysis of the Wanli shipwreck porcelain, which bears witness to the forms, styles, and types of decoration that were being traded in this period. The porcelain data from two other shipwreck projects, Witte Leeuw and Hatcher, were chosen as comparative case studies, and the Wanli shipwreck Jingdezhen blue-and-white porcelain is reinterpreted in the context of the art history and archeology of the region. The marine archaeologist Sten Sjostrand named the ship the ‘Wanli shipwreck’ because its porcelain cargoes are typical of those made during the reign of Emperor Wanli of the Ming dynasty. Though some scholars question the appropriateness of the name, a final verdict has yet to be reached. Building on previous historical argumentation, the article uses a comparative approach to review the Wanli shipwreck blue-and-white porcelains against porcelains unearthed from tombs or abandoned in towns and carrying time-specific reign marks. All these materials provide very strong evidence that the porcelain recovered from the Wanli ship can be dated to as early as the second year of the Tianqi era (1622) and the early Chongzhen reign. Lastly, some blue-and-white porcelain intended for the domestic market and some blue-and-white bowls from the Jingdezhen kilns recovered from the Wanli shipwreck all carry at the bottom a specific residue from the firing process; the author provides a corresponding analysis of these two interesting phenomena.

Keywords: blue-and-white porcelain, Ming dynasty, Jingdezhen kiln, Wanli shipwreck

Procedia PDF Downloads 179
558 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total-field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access to data and metadata for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
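
Dipole modeling, the basis for the classification mentioned above, treats a compact target as a point magnetic dipole. A minimal sketch of the standard dipole field equation follows; the moment and sensor offset are illustrative values rather than parameters of the systems described.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Flux density B (tesla) of a point dipole with moment m (A*m^2) at
    offset r (metres): B = (mu0 / 4*pi) * (3(m.r_hat)r_hat - m) / |r|^3."""
    m = np.asarray(m, dtype=float)
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    r_hat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, r_hat) * r_hat - m) / rn ** 3

# Anomaly a total-field sensor might see 2 m above a small buried target
b = dipole_field(m=[0.0, 0.0, 0.5], r=[0.0, 0.0, 2.0])
print(np.linalg.norm(b) * 1e9, "nT")  # magnitude in nanotesla (~12.5 nT)
```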

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 461
557 Equivalences and Contrasts in the Morphological Formation of Echo Words in Two Indo-Aryan Languages: Bengali and Odia

Authors: Subhanan Mandal, Bidisha Hore

Abstract:

The linguistic process whereby repetition of all or part of a base word, with or without internal change, takes place before or after the base itself is regarded as reduplication. The reduplicated morphological construction carries a new grammatical category and meaning. Reduplication is a very frequent and abundant phenomenon in the eastern Indian languages of the states of West Bengal and Odisha, i.e., Bengali and Odia respectively. Bengali, an Indo-Aryan language and part of the Indo-European language family, is one of the most widely spoken languages in India and is the national language of Bangladesh. Despite this classification, Bengali shows certain influences in vocabulary and grammar due to its geographical proximity to Tibeto-Burman and Austro-Asiatic language speaking communities. Bengali and Odia once belonged to a single linguistic branch, but with time and gradual linguistic changes due to various factors, Odia was the first to break away and develop as a separate distinct language. However, fewer contrasts and more similarities still exist between these languages linguistically, leaving aside the script. This paper deals with the procedure of echo word formation in Bengali and Odia. Morphological research on the two languages in the field of reduplication reveals several linguistic processes. The findings are based on information elicited from native speakers and on analysis of echo words found in discourse and conversational patterns. For the partial reduplication analysis, prefixed-class and suffixed-class word formations are taken into consideration, which show specific rule-based changes. For example, in the suffixed-class categorization, both consonant and vowel alterations are found, following the rules: i) CVx → tVx, ii) CVCV → CVCi. Further classifications were also found in sentential studies of both languages, which revealed complete reduplication complexities in echo word formation where the head word loses its original meaning. Complexities based on onomatopoetic/phonetic imitation of natural phenomena, not following any rule-based pattern, were also found. Taking these aspects into consideration, which are very prevalent in both languages, inferences drawn from the study bring out many similarities between the two languages in this area, in spite of their having branched apart long ago.
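
Rule i) above (CVx → tVx) can be read as: copy the base word and replace its initial consonant onset with a fixed consonant. The toy sketch below implements just that one rule; the romanized example words are hypothetical, and real echo formation also involves the vowel alternations (rule ii) and phonotactic conditions the study describes.

```python
def echo_word(base: str, replacement: str = "t") -> str:
    """Form an echo pair by swapping the base's initial consonant onset for a
    fixed consonant, per the CVx -> tVx pattern (a deliberate simplification
    of Bengali/Odia echo formation)."""
    vowels = "aeiou"
    i = 0
    while i < len(base) and base[i] not in vowels:  # skip the onset cluster
        i += 1
    return f"{base} {replacement}{base[i:]}"

print(echo_word("boi"))   # "boi toi"   (hypothetical romanization)
print(echo_word("chaa"))  # "chaa taa"
```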

Keywords: consonant alteration, onomatopoetic, partial reduplication and complete reduplication, reduplication, vowel alteration

Procedia PDF Downloads 238
556 Assessment of Trace Metals Contamination in Surficial and Core Sediments from the Ghannouch-Gabes Coastline, Impact of Phosphogypsum Discharge, Southeastern Tunisia, Mediterranean Sea: Geochemical and Mineralogical Approaches

Authors: Rim Ben Amor, Myriam Abidi, Moncef Gueddari

Abstract:

The purpose of the present study is to assess the level and distribution of CaO, SO3, Cd, Cu, Pb and Zn in core sediments of the Ghannouch-Gabes coast, Gulf of Gabes, on the Tunisian Mediterranean coast. The XRD analyses indicate that the sediments of the Ghannouch-Gabes coast are mainly composed of quartz, calcite, gypsum and fluorine, reflecting the impact of phosphate fertilizer industrial waste. The distribution of surface sediments shows, for all the elements analyzed, that the area located between the commercial and the fishing port of Gabes is the most polluted zone, where the two harbors acted as barriers and limited the dispersion of the phosphogypsum (PG) discharge. The abundance order of metals was found to be Zn > Cd > Cu > Pb, and the highest levels of heavy metals were found in the uppermost segment of the sediment core compared to the deeper subsurface, due to a continuous input of PG release; the area between the two harbors suffered from several types of pollutants compared to the reference core C1, collected from a non-industrialized area. The level of pollution was evaluated using the contamination factor (Cf), the pollution load index (PLI) and the geoaccumulation index (Igeo). The obtained Igeo results show that the area between the commercial harbor of Ghannouch and the fishing harbor of Gabes is the most polluted, where sediments are strongly contaminated with Pb, Cu and Cd. The pollution load index (PLI) classified all collected sediments as ‘polluted’. According to the contamination factor (Cf), the sediments can be considered ‘considerable’ to ‘very high’ contaminated for Pb, ‘very high’ to ‘moderate’ for Cd, ‘moderate’ for Zn, and between ‘moderate’ and ‘considerable’ for Cu. Statistical analyses show that the heavy metals, fluoride, calcium and sulphate share the same anthropogenic origin. The metallic pollution status of the sediments of the Ghannouch-Gabes coast is worrying and requires serious intervention.
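
The three indices used above have simple closed forms: Cf is the ratio of measured to background concentration, Igeo is the log2 ratio with a 1.5 factor absorbing natural background fluctuation, and PLI is the geometric mean of the Cf values. A short sketch follows; the sample and background concentrations are hypothetical, not the study's data.

```python
import math

def contamination_factor(c_sample, c_background):
    """Cf = measured concentration / geochemical background concentration."""
    return c_sample / c_background

def geoaccumulation_index(c_sample, c_background):
    """Igeo = log2(Cn / (1.5 * Bn)); 1.5 absorbs background fluctuation."""
    return math.log2(c_sample / (1.5 * c_background))

def pollution_load_index(cf_values):
    """PLI = n-th root of the product of the n contamination factors;
    PLI > 1 is conventionally read as 'polluted'."""
    return math.prod(cf_values) ** (1.0 / len(cf_values))

# Hypothetical sediment readings vs. background values (mg/kg)
metals = {"Pb": (95.0, 20.0), "Cd": (1.8, 0.3), "Cu": (60.0, 45.0), "Zn": (210.0, 95.0)}
cfs = [contamination_factor(c, b) for c, b in metals.values()]
igeos = {m: round(geoaccumulation_index(c, b), 2) for m, (c, b) in metals.items()}
print(igeos)
print(round(pollution_load_index(cfs), 2))
```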

Keywords: trace metals, phosphogypsum, core sediments, accumulation factor, contamination factor

Procedia PDF Downloads 137
555 Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana

Authors: Jemima C. A. Sumboh

Abstract:

Given the urgency of and consensus for countries to move towards Universal Health Coverage (UHC), health financing systems need to be accurately and consistently monitored to provide valuable data to inform policy and practice. Most of the indicators for monitoring UHC, particularly catastrophic spending and impoverishment, are established based on the impact of out-of-pocket health payments (OOPHP) on households’ living standards, collected through varied household surveys. These surveys, however, vary substantially in survey methods such as the length of the recall period, the number of items included in the survey questionnaire, or the framing of questions, potentially influencing the level of OOPHP. Using different survey instruments can produce inaccurate, inconsistent, erroneous and misleading estimates of UHC, subsequently leading to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study explores the potential implications of using surveys with varied levels of disaggregation of OOPHP data on estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measuring instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed through benchmarking the existing Classification of Individual Consumption by Purpose (COICOP) OOPHP questions in household surveys) and Version III (existing questions used to measure OOPHP in health surveys integrated into household budget surveys; for this, the Demographic and Health Survey (DHS) was used). Versions I, II and III contained 11, 44, and 56 health items, respectively, while the choice of recall periods was held constant across versions. The sample sizes for Versions I, II and III were 930, 1032 and 1068 households, respectively. Financial risk protection will be measured based on the catastrophic spending and impoverishment methodologies using STATA 15 and ADePT software for each version. It is expected that findings from this study will contribute valuably to the repository of knowledge on standardizing survey instruments to obtain estimates of financial risk protection that are valid and consistent.

Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage

Procedia PDF Downloads 128
554 Rising Levels of Greenhouse Gases: Implications for Global Warming in Anambra State, South Eastern Nigeria

Authors: Chikwelu Edward Emenike, Ogbuagu Uchenna Fredrick

Abstract:

About 34% of the solar radiant energy reaching the earth is immediately reflected back to space by clouds, chemicals, dust in the atmosphere and the earth’s surface. Most of the remaining 66% warms the atmosphere and land, and the incoming solar radiation not reflected away is eventually degraded into low-quality heat that flows back into space. The rate at which this energy returns to space as low-quality heat is affected by the presence of molecules of greenhouse gases. Gaseous emissions were measured with the aid of a Growen gas analyzer with a digital readout. Measurements of eight parameters were taken at twelve selected sample locations in two different seasons within two months. The ambient air quality investigation in Anambra State yielded the overall mean concentrations of gaseous emissions at the twelve (12) locations. The mean gaseous emissions were NO2 = 0.66 ppm, SO2 = 0.30 ppm, CO = 43.93 ppm, H2S = 2.17 ppm, CH4 = 1.27 ppm, CFC = 1.59 ppb, CO2 = 316.33 ppm, N2O = 302.67 ppb and O3 = 0.37 ppm. These values do not conform to the National Ambient Air Quality Standards (NAAQS) and thus contribute significantly to global warming. Because some of these gaseous emissions (SO2, NO2) are oxidizing agents, they act as irritants that damage delicate tissues in the eyes and respiratory passages. They can impair lung function and trigger cardiovascular problems as the heart tries to compensate for lack of oxygen by pumping faster and harder. The major sources of air pollution are transportation, industrial processes, stationary fuel combustion and solid waste disposal, so much is yet to be done in a developing country like Nigeria. Air pollution control using pollution-control equipment to reduce the major conventional pollutants, relocating people who live very close to dumpsites, and processing and treatment of gases to produce electricity, heat, fuel and various chemical components should be encouraged.

Keywords: ambient air, atmosphere, greenhouse gases, Anambra State

Procedia PDF Downloads 422
553 Photovoltaic Solar Energy in Public Buildings: A Showcase for Society

Authors: Eliane Ferreira da Silva

Abstract:

This paper aims to mobilize and sensitize public administration leaders to good practices and to encourage investment in PV systems in Brazil. It presents a case study methodology for dimensioning PV systems on the roofs of the public buildings of the Esplanade of the Ministries, Brasilia, the capital of the country, with predefined resources, starting from the Sustainable Esplanade Project (SEP) and the exponential growth of photovoltaic solar energy in the world, and making a comparison with the solar power plant of the Ministry of Mines and Energy (MME), active since 6/10/2016. To do so, it was necessary to evaluate the energy efficiency of the buildings in the period from January 2016 to April 2017 (16 months), identifying the opportunities to reduce electric energy expenses through adjustment of contracted demand, the tariff framework and correction of existing active energy. The instrument used to collect data on electricity bills was the e-SIC citizen information system. In addition to technical and operational aspects, the study considered the historical, cultural, architectural and climatic aspects involved, engaging several actors. Having identified the expense reductions, the study addressed the following aspects: Case 1) economic feasibility of exchanging common lamps for LED lamps, and Case 2) economic feasibility of implementing a grid-connected photovoltaic solar system. For Case 2, PV*SOL Premium software was used to simulate several photovoltaic panel configurations, analyzing the best performance according to local characteristics such as solar orientation, latitude and annual average solar radiation. A simulation of an ideal photovoltaic solar system was made, with due calculation of its yield, to compensate the energy expenditure of the building, or part of it, through the use of the alternative source in question. The study develops a methodology for public administration, as a major consumer of electricity, to act in a responsible, supervisory and incentivizing way in reducing energy waste, and consequently reducing greenhouse gases.
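
Dimensioning a rooftop PV system of the kind described above usually begins with a first-order yield estimate, E = A x H x eta x PR. The sketch below uses illustrative numbers only; the roof area, irradiation, module efficiency and performance ratio are assumptions, not figures from the study or from PV*SOL.

```python
def annual_pv_yield_kwh(area_m2, irradiation_kwh_m2_yr, module_eff,
                        performance_ratio=0.75):
    """First-order PV yield: E = A * H * eta * PR, where H is the yearly
    in-plane irradiation and PR lumps wiring, inverter, soiling and
    temperature losses together."""
    return area_m2 * irradiation_kwh_m2_yr * module_eff * performance_ratio

# Illustration: 1,000 m2 of roof, ~1,900 kWh/m2/yr of irradiation (a
# plausible order of magnitude for Brasilia), 18%-efficient modules
print(round(annual_pv_yield_kwh(1000, 1900, 0.18)))  # ~256,500 kWh/yr
```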

Keywords: energy efficiency, esplanade of ministries, photovoltaic solar energy, public buildings, sustainable building

Procedia PDF Downloads 127
552 Urban Heat Island Intensity Assessment through Comparative Study on Land Surface Temperature and Normalized Difference Vegetation Index: A Case Study of Chittagong, Bangladesh

Authors: Tausif A. Ishtiaque, Zarrin T. Tasin, Kazi S. Akter

Abstract:

The current trend of urban expansion, especially in developing countries, has caused significant changes in land cover, which is generating great concern due to widespread environmental degradation. Energy consumption of cities is also increasing with the aggravated heat island effect. The distribution of land surface temperature (LST) is one of the most significant climatic parameters affected by urban land cover change. The recent increasing trend of LST is producing an elevated temperature profile of built-up areas with less vegetative cover. Gradual change in land cover, especially the decrease in vegetative cover, is enhancing the Urban Heat Island (UHI) effect in developing cities around the world. An increase in the amount of urban vegetation cover can be a useful solution for the reduction of UHI intensity. LST and the Normalized Difference Vegetation Index (NDVI) have been widely accepted as reliable indicators of UHI and vegetation abundance, respectively. Chittagong, the second largest city of Bangladesh, has been a growth center due to rapid urbanization over the last several decades. This study assesses the intensity of UHI in Chittagong city by analyzing the relationship between LST and NDVI based on the type of land use/land cover (LULC) in the study area, applying an integrated approach of Geographic Information Systems (GIS), remote sensing (RS), and regression analysis. A land cover map was prepared through an interactive supervised classification using remotely sensed data from a Landsat ETM+ image along with NDVI differencing using ArcGIS; LST and NDVI values were extracted from the same image. The regression analysis between LST and NDVI indicates that within the study area, UHI is directly correlated with LST while negatively correlated with NDVI. This implies that surface temperature falls as vegetation cover increases, along with a reduction in UHI intensity. Moreover, there are noticeable differences in the relationship between LST and NDVI based on the type of LULC; in other words, depending on the type of land usage, an increase in vegetation cover has a varying impact on UHI intensity. This analysis will contribute to the formulation of sustainable urban land use planning decisions as well as suggesting suitable actions for mitigation of UHI intensity within the study area.
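
The two quantities being regressed above are straightforward to compute per pixel: NDVI = (NIR - Red) / (NIR + Red), with LST taken from the thermal band. A minimal sketch of the NDVI step and the least-squares fit follows; the 2x2 'scene' is synthetic, chosen so that warmer pixels coincide with sparser vegetation.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def lst_ndvi_regression(lst, ndvi_img):
    """Least-squares fit LST = a * NDVI + b over all pixels; a negative
    slope means surface temperature falls as vegetation increases."""
    slope, intercept = np.polyfit(ndvi_img.ravel(), np.asarray(lst).ravel(), 1)
    return slope, intercept

# Synthetic 2x2 scene: warmer pixels where vegetation is sparse
nir = [[0.60, 0.50], [0.30, 0.20]]
red = [[0.20, 0.20], [0.25, 0.25]]
lst = [[298.0, 300.0], [305.0, 308.0]]   # kelvin
print(lst_ndvi_regression(np.array(lst), ndvi(nir, red)))  # slope < 0
```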

Keywords: land cover change, land surface temperature, normalized difference vegetation index, urban heat island

Procedia PDF Downloads 268
551 Therapeutic Nihilism: Challenging Aging Diseases in Cameroon

Authors: Chick Loveline Ayoh Epse Ndi

Abstract:

Our cultural stance has deep implications for the psychological and physical well-being of the old. Cameroon is still rooted in the traditional belief that the aged are best catered for in the family setting, where children and grandchildren are expected to give back in return for the services the former invested in them. This is why, to date, there are no ‘rest homes’ or ‘convalescent hospitals’, despite the rising challenges faced by the aged in this context. In the Western context, by contrast, special measures are set aside to cater for the aged, such as ‘rest homes’ for the healthy and ‘convalescent hospitals’ for the sick, alongside other facilities such as geriatric units. There, health care practitioners are aware of aging diseases and have trained human resources, such as gerontologists, to care for the aged. Meanwhile, in Africa, and in Cameroon in particular, such infrastructural and human resources are still to be established, and the aged and aging diseases are still to be given due consideration in the health care system. This is why we speak of therapeutic nihilism, where the aged are mixed up with other categories of patients and no special attention is given to them. This qualitative study, carried out in Yaounde, the capital city of Cameroon and home to its best reference hospitals, reveals that the aged and aging diseases are still a myth in this context. Data collected in both private and public health institutions show that there is only one public institution in Cameroon that has a geriatric unit, and it has no specialists. Patients who are treated in this unit are considered aged persons with terminal diseases who need palliative care rather than intensive care. Cameroon is still lacking in health care for the aged and aging diseases. Like other patients, the aged are treated with considerable laxity and accorded little value. There is an urgent need to create special geriatric health care units and to train gerontologists. The mentally or physically ill aged face medical rationing, with psychodynamic treatment considered a waste of time; the aged are less likely to be regarded as salvageable when they enter a hospital in serious condition, owing to the lack of specialists and geriatric units for them. The implication of this study is to sensitize stakeholders to the urgent need to extend special care units for the aged and aging diseases in this context.

Keywords: challenge, therapy, aging, diseases, Cameroon

Procedia PDF Downloads 89
550 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential for human recognition due to its unique rhythms characterizing the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual’s heartbeats at different instants of time. These variations may occur due to muscle flexure, changes of mental or emotional state, and changes of sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to obtain the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods. Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (NMIG) PTB database showed that the proposed new method is promising compared to a conventional interval- and amplitude-feature-based method.
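
A minimal sketch of the described pipeline (band-pass filter, moving-window binary patterns, histogram) is given below. Reading the quoted cut-offs as normalised frequencies and choosing a window half-width of 4 are assumptions for illustration; the abstract does not fix these details.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(ecg, low=0.00025, high=0.04):
    """Second-order Butterworth band-pass with the quoted cut-offs,
    interpreted here as normalised frequencies."""
    b, a = butter(2, [low, high], btype="band")
    return filtfilt(b, a, ecg)

def local_roughness_histogram(signal, half_window=4):
    """1-D local binary patterns: compare each neighbour in a moving window
    with the centre sample, weight the bits by powers of two, and histogram
    the resulting codes over the whole recording."""
    n_bits = 2 * half_window
    codes = []
    for i in range(half_window, len(signal) - half_window):
        neighbours = np.concatenate(
            [signal[i - half_window:i], signal[i + 1:i + half_window + 1]])
        bits = (neighbours >= signal[i]).astype(int)
        codes.append(int(np.dot(bits, 2 ** np.arange(n_bits))))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()              # normalised per-subject descriptor

# Toy usage on a synthetic trace
ecg = np.sin(np.linspace(0, 8 * np.pi, 2000)) + 0.05 * np.random.randn(2000)
print(local_roughness_histogram(preprocess(ecg)).shape)  # (256,) for 8 bits
```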

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 400
549 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases as a Consequence of the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

Epileptic seizure is a disorder in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the world’s population suffers epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and numerous spikes and sharp waves appear in the EEG signals. Detection of epileptic seizure using conventional methods is time-consuming, and many methods have evolved to detect it automatically. The initial part of this paper reviews the techniques used to detect epileptic seizure automatically. The automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters have been calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages: (i) interictal (recorded between epileptic seizure attacks) and (ii) ictal (recorded during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using the latest signal processing techniques. The study was conducted with Reiki as the healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body’s natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session and post-intervention to bring about effective epileptic seizure control or its elimination altogether.
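
Among the discriminative measures listed above, sample entropy is one of the most common: SampEn = -ln(A/B), where B counts pairs of length-m templates lying within tolerance r and A does the same for length m+1; lower values indicate a more regular signal. A brute-force sketch follows, with the conventional defaults m = 2 and r = 0.2 x standard deviation (these are assumptions, not values from the reviewed papers).

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) = -ln(A/B) over template matches within tolerance r."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)      # Chebyshev distance within r
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))         # high: irregular
print(sample_entropy(np.sin(np.linspace(0, 20, 500))))  # low: regular
```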

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 401
548 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy

Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren

Abstract:

Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding ethanol treatment effects on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method is essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest errors (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
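
The best model above pairs a variable-selection step (UVE) with support vector regression. The sketch below reproduces only the regression half on synthetic spectra: the data are random, the hyperparameters are not the paper's, and the UVE/CARS/GA selection step is omitted.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in: rows are NIR spectra, y is a measured gel strength
rng = np.random.default_rng(1)
X = rng.random((120, 200))                # 120 samples x 200 wavelengths
y = 40 * X[:, 50] + 25 * X[:, 120] + rng.normal(0, 1, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("Rp^2 :", round(r2_score(y_te, pred), 4))
print("RMSEP:", round(mean_squared_error(y_te, pred) ** 0.5, 4))
```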

Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment

Procedia PDF Downloads 25
547 Effect of Windrow Management on Ammonia and Nitrous Oxide Emissions from Swine Manure Composting

Authors: Nanh Lovanh, John Loughrin, Kimberly Cook, Phil Silva, Byung-Taek Oh

Abstract:

In the era of sustainability, utilization of livestock wastes as soil amendments to provide micronutrients for crops is economical and sustainable. It is well understood that livestock wastes are comparable to, if not better than, chemical fertilizers as nutrient sources for crops. However, the large concentrated volumes of animal manure produced by livestock operations and the limited amount of available nearby agricultural land necessitate volume reduction of these animal wastes. Composting of these animal manures is a viable option for biomass and pathogen reduction in the environment. Nevertheless, composting also increases the potential loss of available nutrients for crop production, as well as unwanted emission of anthropogenic air pollutants, due to the loss of ammonia and other compounds via volatilization. In this study, we examine the emission of ammonia and nitrous oxide from swine manure windrows to weigh the benefit of biomass reduction against the potential loss of available nutrients. The feedstock for the windrows was obtained from a swine farm in Kentucky, where swine manure was mixed with wood shavings as absorbent material. Static flux chambers along with a photoacoustic gas analyzer were used to monitor ammonia and nitrous oxide concentrations during the composting process. The results show that ammonia and nitrous oxide fluxes were quite high during the initial composting process and after the turning of each compost pile. Over roughly three months of composting, the biochemical oxygen demand (BOD) decreased by about 90%. Although composting of animal waste is quite beneficial for biomass reduction, it may not be economically feasible from an agronomic point of view due to time, nutrient loss (N loss), and potential environmental pollution (ammonia and greenhouse gas emissions). Therefore, additional studies are needed to assess and validate the economics and environmental impact of animal (swine) manure composting (e.g., crop yield or impact on climate change).

Keywords: windrow, swine manure, ammonia, nitrous oxide, fluxes, management

Procedia PDF Downloads 353
546 Impure Water, a Future Disaster: A Case Study of Lahore Ground Water Quality with GIS Techniques

Authors: Rana Waqar Aslam, Urooj Saeed, Hammad Mehmood, Hameed Ullah, Imtiaz Younas

Abstract:

This research was conducted to assess the water quality in and around the Lahore metropolitan area on the basis of three different land uses, i.e., residential, commercial, and industrial. For this, 29 sample sites were selected using a simple random sampling technique, and samples were collected at the source (WASA tube wells). The criterion for selecting sample sites was maximum population concentration in the selected land uses. The results showed that in residential land use, the proportions of nitrate and turbidity are at their highest levels in the areas of Allama Iqbal Town and Samanabad Town. In commercial land use, Gulberg and Data Gunj Bakhsh Towns have the highest levels of chlorides, calcium, TDS, pH, Mg, total hardness, arsenic and alkalinity, whereas in industrial land use, Ravi and Wahga Towns have the highest levels of arsenic, Mg, nitrate, pH, and turbidity. The high concentrations of these parameters in these areas are basically due to old and fractured pipelines that allow bacterial as well as physiochemical contaminants to contaminate the potable water at the sources. Furthermore, it is seen in most areas that waste water from domestic, industrial, and municipal sources is easily discharged into open spaces and water bodies, like canals, rivers, and lakes, where it seeps in and becomes a part of the groundwater. In addition, huge dumps located in Lahore are becoming a cause of groundwater contamination: when rain falls, the water seeps into the ground and impairs groundwater quality. On the basis of the results derived with the help of geospatial technology, ArcGIS 9.3 interpolation (IDW), it is recommended that water filtration plants be installed with specific parameter control, that a separate team be set up for proper inspection of water quality at the source, that old water pipelines be replaced with new pipelines, and that safe water depth be ensured at the source end.

Keywords: GIS, remote sensing, pH, nitrate, disaster, IDW

Procedia PDF Downloads 221
545 Preparation and Properties of Chloroacetated Natural Rubber Foam Using Corn Starch as Curing Agent

Authors: Ploenpit Boochathum, Pitchayanad Kaolim, Phimjutha Srisangkaew

Abstract:

In general, rubber foam is produced using a sulfur curing system. However, the sulfur remaining in rubber product waste is burned to sulfur dioxide gas, causing environmental pollution. To avoid using sulfur as the curing agent in rubber foam products, this research work proposes a non-sulfur curing system using corn starch as the curing agent. Ether crosslinks are proposed to be produced via functional bonding between hydroxyl groups of the starch molecules and chloroacetate groups added to the natural rubber molecules. The chloroacetated natural rubber (CNR) latex was prepared via the epoxidation reaction of concentrated natural rubber latex; subsequently, the epoxy rings were attacked by chloroacetic acid to produce hydroxyl groups and chloroacetate groups on the rubber molecules. NaHCO3 was selected as the foaming agent in the CNR latex due to its low decomposition temperature of about 50°C. The curing temperature was set at 90°C, which is above the gelatinization temperature of starch (60-70°C). The effect of the starch loading, i.e., 0 phr, 3 phr and 5 phr, on the physical properties of CNR rubber foam was investigated. It was found that density decreased from 0.81 g/cm3 at 0 phr to 0.75 g/cm3 at 3 phr and 0.79 g/cm3 at 5 phr. The ability of CNR rubber foam cured with a starch loading of 5 phr to return to its original thickness after prolonged compressive stress was considerably better than that of foam cured with 3 phr of starch or without starch, according to the compression set, which decreased from 66.67% to 40% and 26.67% with increasing starch loading. The mechanical properties of CNR rubber foams cured using starch, including tensile strength and modulus, increased, while elongation at break decreased. In addition, all mechanical properties of CNR rubber foams cured with starch at 3 phr and 5 phr were only slightly different from each other but drastically higher than those of CNR rubber foam without the addition of starch. This work indicates that starch is applicable as a curing agent for CNR rubber, as confirmed by the increase in the elastic modulus (G') of CNR rubber foams cured with starch over the CNR rubber foam without curing agent. This type of rubber foam is believed to be a biodegradable and environment-friendly product that can be cured at the low temperature of 90°C.

Keywords: chloroacetated natural rubber, corn starch, non-sulfur curing system, rubber foam

Procedia PDF Downloads 308
544 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most such methods use the concept of word and sentence embeddings to create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence on the prediction of each genome. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches performance comparable with the state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
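
Steps (i) and (ii) above are easy to make concrete. The sketch below shows k-mer tokenization and a deliberately simplified read embedding (mean-pooling of k-mer vectors); metagenome2vec itself learns the composition, so the pooling here is only a stand-in, and the tiny read and vocabulary are hypothetical.

```python
import numpy as np

def kmer_tokens(read: str, k: int = 6):
    """Step (i): split a DNA read into overlapping k-mers, the 'words'
    whose numerical embeddings are learned."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def read_embedding(read: str, k: int, kmer_vectors: dict) -> np.ndarray:
    """Step (ii), simplified: mean-pool the k-mer embeddings into one
    fixed-size vector per read."""
    vecs = [kmer_vectors[km] for km in kmer_tokens(read, k) if km in kmer_vectors]
    return np.mean(vecs, axis=0)

print(kmer_tokens("ATGCGTAC", k=4))  # ['ATGC', 'TGCG', 'GCGT', 'CGTA', 'GTAC']
vocab = {km: np.ones(8) * i for i, km in enumerate(kmer_tokens("ATGCGTAC", 4))}
print(read_embedding("ATGCGTAC", 4, vocab))  # 8-dim vector, mean of 5 rows
```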

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 121
543 Saccharification and Bioethanol Production from Banana Pseudostem

Authors: Elias L. Souza, Noeli Sellin, Cintia Marangoni, Ozair Souza

Abstract:

Among the different forms of reuse and recovery of agro-residual waste is the production of biofuels, and second-generation ethanol has been evaluated and proposed as one of the technically viable alternatives for this purpose. This research work employed the banana pseudostem as biomass. Two different chemical pre-treatment methods (acid hydrolysis with H2SO4 2% w/w and alkaline hydrolysis with NaOH 3% w/w) of dry and milled biomass (70 g/L of dry matter, ms) were assessed, and the corresponding reducing sugar (AR) yields after enzymatic saccharification (YAR) were determined. The effect on YAR of increasing the dry matter (ms) from 70 to 100 g/L, in dry and milled biomass as well as fresh biomass, was analyzed. Changes in cellulose crystallinity and in biomass surface morphology due to the different chemical pre-treatments were analyzed by X-ray diffraction and scanning electron microscopy. The acid pre-treatment resulted in higher YAR values, whether related to the cellulose content under saccharification (RAR = 79.48) or to the biomass concentration employed (YAR/ms = 32.8%). In a comparison between the alkaline and acid pre-treatments, the latter led to an increase in the cellulose content of the reaction mixture from 52.8 to 59.8%, a reduction of the cellulose crystallinity index from 51.19 to 33.34%, and increases in RAR (43.1%) and YAR/ms (39.5%). Increasing the dry matter (ms) from 70 to 100 g/L in the acid pre-treatment resulted in decreases of the average RAR (43.1%) and YAR/ms (18.2%). Using the fresh pseudostem with the broth removed, whether at 70 g/L or 100 g/L of dry matter (ms), similarly to the alkaline pre-treatment, led to lower average values of RAR (67.2% and 42.2%) and YAR/ms (28.4% and 17.8%), respectively. The acid pre-treated and saccharified biomass broth was detoxified with different activated carbon contents (1, 2 and 4% w/v), concentrated up to AR = 100 g/L and fermented by Saccharomyces cerevisiae. The ethanol yield (YP/AR) and productivity (QP) values were determined and compared to those obtained from the fermentation of non-concentrated/non-detoxified broth (AR = 18 g/L) and concentrated/non-detoxified broth (AR = 100 g/L). The highest average YP/AR (0.46 g/g) was obtained from the fermentation of non-concentrated broth. This value did not present a significant difference (p < 0.05) when compared to the YP/AR of the broth concentrated and detoxified with activated carbon at 1% w/v (YP/AR = 0.41 g/g). However, a higher ethanol productivity (QP = 1.44 g/L.h) was achieved through broth detoxification. This value was 75% higher than the average QP determined using concentrated and non-detoxified broth (QP = 0.82 g/L.h), and 22% higher than the QP found for the non-concentrated broth (QP = 1.18 g/L.h).

Keywords: biofuels, biomass, saccharification, bioethanol

Procedia PDF Downloads 340
542 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation

Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov

Abstract:

Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping) and the second the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. the Medtronic device. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups: the first group comprises patients treated with the AtriCure device (N=63) and the second group those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with the paroxysmal form (14.3%), persistent form (68.3%), and long-standing persistent form (17.5%) of AF; in group 2 the proportions were 13.3%, 13.3% and 73.3%, respectively. Median ejection fraction and indexed left atrial volume amounted to 63% and 40.6 ml/m2 in group 1, and 56% and 40.5 ml/m2 in group 2. In addition, group 1 consisted of 39.7% of patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, the distant mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% and 16.7% in the 1st and 2nd groups, respectively. The mean follow-up period was 50.4 (31.8; 64.8) months in the 1st group and 30.5 (14.1; 37.5) months in the 2nd group (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, among whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the distant mortality rate was 4.8% in the 1st group, with no fatal events in the 2nd. The prevalence of cerebrovascular events was higher in the 1st group than in the 2nd (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of the 2nd group, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long-term period.

Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery

Procedia PDF Downloads 106
541 Phytoremediation of Arsenic-Contaminated Soil and Recovery of Valuable Arsenic Products

Authors: Valentine C. Eze, Adam P. Harvey

Abstract:

Contamination of groundwater and soil by heavy metals and metalloids, through anthropogenic activities and natural occurrence, poses serious environmental challenges globally. A possible solution to this problem is phytoremediation of the contaminants using hyper-accumulating plants. Conventional phytoremediation treats the contaminated hyper-accumulator biomass as a waste stream, which adds no value to the heavy metal(loid) decontamination process. This study investigates strategies for the remediation of soil contaminated with arsenic and the extractive chemical routes for recovery of arsenic and phosphorus from the hyper-accumulator biomass. Pteris cretica fern species were investigated for their uptake of arsenic from soil containing 200 ± 3 ppm of arsenic. The Pteris cretica ferns were shown to be capable of hyper-accumulation of arsenic, with maximum accumulations of about 4427 ± 79 mg to 4875 ± 96 mg of As per kg of the dry ferns. The arsenic in the Pteris cretica fronds was extracted into various solvents, with extraction efficiencies of 94.3 ± 2.1% for ethanol-water (1:1 v/v), 81.5 ± 3.2% for 1:1 (v/v) methanol-water, and 70.8 ± 2.9% for water alone. The recovery efficiency of arsenic from the molybdic acid complex process was 90.8 ± 5.3%. Phosphorus was also recovered from the molybdic acid complex process, at 95.1 ± 4.6% efficiency. Quantitative precipitation of Mg₃(AsO₄)₂ and Mg₃(PO₄)₂ occurred in the treatment of the aqueous solutions of arsenic and phosphorus after stripping at pH 8 – 10. The amounts of Mg₃(AsO₄)₂ and Mg₃(PO₄)₂ obtained were 96 ± 7.2% for arsenic and 94 ± 3.4% for phosphorus. The arsenic nanoparticles produced from the Mg₃(AsO₄)₂ recovered from the biomass had an average particle diameter of 45.5 ± 11.3 nm. A two-stage reduction process – a first-step pre-reduction of As(V) to As(III) with L-cysteine, followed by NaBH₄ reduction of the As(III) to As(0) – was required to produce arsenic nanoparticles from the Mg₃(AsO₄)₂. The arsenic nanoparticles obtained are potentially valuable for medical applications, while the Mg₃(AsO₄)₂ could be used as an insecticide. The phosphorus content of the Pteris cretica biomass was recovered as a phosphomolybdic acid complex and converted to Mg₃(PO₄)₂, which could be useful in the production of fertilizer. Recovery of these valuable products from phytoremediation biomass would incentivize and drive commercial industries' participation in the remediation of contaminated lands.

Keywords: phytoremediation, Pteris cretica, hyper-accumulator, solvent extraction, molybdic acid process, arsenic nanoparticles

Procedia PDF Downloads 314
540 Lipid Extraction from Microbial Cell by Electroporation Technique and Its Influence on Direct Transesterification for Biodiesel Synthesis

Authors: Abu Yousuf, Maksudur Rahman Khan, Ahasanul Karim, Amirul Islam, Minhaj Uddin Monir, Sharmin Sultana, Domenico Pirozzi

Abstract:

Traditional biodiesel feedstocks such as edible or plant oils, animal fats and waste cooking oil have been replaced by microbial oil in recent biodiesel research. The well-known community of microbial oil producers includes microalgae, oleaginous yeasts and seaweeds. Conventional transesterification of microbial oil to produce biodiesel is slow, energy-consuming, cost-ineffective and environmentally unfriendly. The process follows several steps: microbial biomass drying, cell disruption, oil extraction, solvent recovery, oil separation and transesterification. Therefore, direct transesterification for biodiesel synthesis has been studied over the last few years. It combines all the steps in a single reactor and eliminates the steps of biomass drying, oil extraction and separation from solvent. It appears to be a cost-effective and faster process, but a number of difficulties need to be solved to make it applicable at large scale. The main challenges are disrupting microbial cells in bulk volume and speeding up the esterification reaction, because the water content of the medium slows the reaction rate. Several methods have been proposed, but none of them is mature enough to implement at large scale. It is still a great challenge to extract the maximum lipid from microbial cells (yeast, fungi, algae) while investing minimum energy. Electroporation results in a significant increase in cell conductivity and permeability caused by the application of an external electric field. Electroporation is used to alter the size and structure of the cells to increase their porosity, as well as to disrupt the microbial cell walls within a few seconds so that the intracellular lipid leaks out into the solution. Therefore, incorporating electroporation techniques contributes to the direct transesterification of microbial lipids by increasing the efficiency of biodiesel production.

Keywords: biodiesel, electroporation, microbial lipids, transesterification

Procedia PDF Downloads 275
539 Correlation Between the Toxicity Grade of the Adverse Effects in the Course of the Immunotherapy of Lung Cancer and Efficiency of the Treatment in Anti-PD-L1 and Anti-PD-1 Drugs - Own Clinical Experience

Authors: Anna Rudzińska, Katarzyna Szklener, Pola Juchaniuk, Anna Rodzajweska, Katarzyna Machulska-Ciuraj, Monika Rychlik-Grabowska, Michał Łoziński, Agnieszka Kolak-Bruks, Sławomir Mańdziuk

Abstract:

Introduction: Immune checkpoint inhibition (ICI) belongs to the modern forms of anti-cancer treatment. Due to the constant development and continuous research in the field of ICI, many aspects of the treatment are yet to be discovered. One of the less researched aspects of ICI treatment is the influence of adverse effects on the treatment success rate. It is suspected that adverse events in the course of ICI treatment indicate a better response rate and correlate with longer progression-free survival. Methodology: The research was conducted using the documentation of the Department of Clinical Oncology and Chemotherapy. Data of patients with a lung cancer diagnosis who were treated between 2019-2022 and received ICI treatment were analyzed. Results: Of the more than 133 patients whose data were analyzed, the vast majority were diagnosed with non-small cell lung cancer. The majority of the patients did not experience adverse effects. Most adverse effects reported were classified as grade 1 or grade 2 according to the CTCAE classification. Most adverse effects involved skin, thyroid and liver toxicity. A statistically significant association was found between adverse effect incidence and both overall survival (OS) and progression-free survival (PFS) (p=0.0263), and between the time of toxicity onset and OS and PFS (p<0.001). The number of toxicity sites was statistically significant for prolonged PFS (p=0.0315). The highest OS was noted in the group presenting grade 1 and grade 2 adverse effects. Conclusions: The obtained results confirm prolonged OS and PFS in patients who experienced adverse effects, mostly in the group presenting mild to intermediate (grade 1 and grade 2) adverse effects and late toxicity onset. At the same time, our results suggest a correlation between the treatment response rate and both the toxicity grade of the adverse effects and the time of toxicity onset. Similar results were obtained in several comparable studies, which showed a tendency toward better survival with mild and moderate toxicity; meanwhile, other studies in the area suggested an advantage in patients with any toxicity regardless of grade. The contradictory results strongly suggest the need for further research on this topic, with a focus on additional factors influencing the course of the treatment.
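
As a hedged illustration of the kind of survival comparison described above, the sketch below fits a Kaplan-Meier curve and runs a log-rank test between patients with and without adverse effects using the lifelines library; the column names and the tiny synthetic dataset are assumptions for demonstration only, not the study's data.

```python
# Minimal sketch: comparing PFS between patients with and without
# treatment-related adverse effects. Data are synthetic placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: months of PFS, progression event flag,
# and whether any adverse effect occurred.
df = pd.DataFrame({
    "pfs_months": [4.1, 11.3, 7.8, 15.2, 2.9, 9.6, 13.1, 5.4],
    "progressed": [1, 0, 1, 0, 1, 1, 0, 1],
    "adverse_effect": [0, 1, 0, 1, 0, 1, 1, 0],
})

with_ae = df[df["adverse_effect"] == 1]
without_ae = df[df["adverse_effect"] == 0]

kmf = KaplanMeierFitter()
kmf.fit(with_ae["pfs_months"], with_ae["progressed"], label="adverse effects")
print(kmf.median_survival_time_)

result = logrank_test(
    with_ae["pfs_months"], without_ae["pfs_months"],
    event_observed_A=with_ae["progressed"],
    event_observed_B=without_ae["progressed"],
)
print(f"log-rank p = {result.p_value:.4f}")
```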

Keywords: adverse effects, immunotherapy, lung cancer, PD-1/PD-L1 inhibitors

Procedia PDF Downloads 83
538 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable when building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and on the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. In addition, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using an NN to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope, fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime in the visible wavelength, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, will be used as the wavefront sensor. In this work, we study a point source, i.e. the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results allow further study for higher aberrations and noise.
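
The sketch below shows, under stated assumptions, what such a focal-plane phasing regressor might look like: a small convolutional network that maps a PSF image to per-segment piston estimates. The layer sizes, segment count and tensor shapes are illustrative stand-ins for the VGG-style network used in the paper, not the authors' exact architecture.

```python
# Minimal sketch: CNN regression from a focal-plane PSF image to
# per-segment piston coefficients (nm). Shapes and layer sizes are
# illustrative assumptions, not the authors' exact VGG-net setup.
import torch
import torch.nn as nn

N_SEGMENTS = 6  # assumed number of deployable mirror segments

class PhasingRegressor(nn.Module):
    def __init__(self, n_out: int = N_SEGMENTS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_out),  # piston estimate per segment, in nm
        )

    def forward(self, psf: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(psf))

model = PhasingRegressor()
psf_batch = torch.rand(8, 1, 64, 64)     # synthetic 64x64 PSF images
pistons = model(psf_batch)               # (8, N_SEGMENTS) predictions
loss = nn.MSELoss()(pistons, torch.zeros_like(pistons))  # vs known phases
print(pistons.shape, float(loss))
```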

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 95
537 Design and Development of an Autonomous Beach Cleaning Vehicle

Authors: Mahdi Allaoua Seklab, Süleyman Baştürk

Abstract:

In the quest to enhance coastal environmental health, this study introduces a fully autonomous beach cleaning machine, a breakthrough in leveraging green energy and advanced artificial intelligence for ecological preservation. Designed to operate independently, the machine is propelled by a solar-powered system, underscoring a commitment to sustainability and the use of renewable energy in autonomous robotics. The vehicle's autonomous navigation is achieved through a sophisticated integration of LIDAR and a camera system, utilizing an SSD MobileNet V2 object detection model for accurate, real-time trash identification. The SSD framework, renowned for its efficiency in detecting objects in various scenarios, is coupled with the lightweight and highly precise MobileNet V2 architecture, making it particularly suited to the computational constraints of on-board processing in mobile robotics. Training of the SSD MobileNet V2 model was conducted on Google Colab, harnessing cloud-based GPU resources to facilitate a rapid and cost-effective learning process. The model was refined with an extensive dataset of annotated beach debris, optimizing the parameters using the Adam optimizer and a cross-entropy loss function to achieve high-precision trash detection. This capability allows the machine to intelligently categorize and target waste, leading to more effective cleaning operations. This paper details the design and functionality of the beach cleaning machine, emphasizing its autonomous operational capabilities and the novel application of AI in environmental robotics. The results showcase the potential of such technology to fill existing gaps in beach maintenance, offering a scalable and eco-friendly solution to the growing problem of coastal pollution. The deployment of this machine represents a significant advancement in the field, setting a new standard for the integration of autonomous systems in the service of environmental stewardship.
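
For a sense of how such an on-board detector is driven in code, the sketch below runs single-frame inference with torchvision's SSDLite detector. torchvision ships an SSDLite/MobileNetV3 variant rather than the paper's exact SSD MobileNet V2, so it is used here as a close, clearly labeled stand-in; the confidence threshold and the debris-class remark are illustrative assumptions.

```python
# Minimal sketch: single-frame trash detection with an SSD-style model.
# torchvision provides SSDLite with a MobileNetV3 backbone; it stands in
# here for the paper's SSD MobileNet V2. Classes/threshold are assumed.
import torch
from torchvision.models.detection import (
    ssdlite320_mobilenet_v3_large,
    SSDLite320_MobileNet_V3_Large_Weights,
)

weights = SSDLite320_MobileNet_V3_Large_Weights.DEFAULT
model = ssdlite320_mobilenet_v3_large(weights=weights).eval()

frame = torch.rand(3, 320, 320)  # placeholder for a camera frame
with torch.no_grad():
    detections = model([frame])[0]  # dict of boxes, labels, scores

CONF_THRESHOLD = 0.5  # assumed operating threshold
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score >= CONF_THRESHOLD:
        # A fine-tuned model would map `label` to debris classes
        # (e.g., bottle, can, bag); here labels are COCO indices.
        print(label.item(), score.item(), box.tolist())
```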

Keywords: autonomous beach cleaning machine, renewable energy systems, coastal management, environmental robotics

Procedia PDF Downloads 13
536 Approach for Evaluating Wastewater Reuse Options in Agriculture

Authors: Manal Elgallal, Louise Fletcher, Barbara Evans

Abstract:

Water scarcity is a growing concern in many arid and semi-arid countries. Increasing water scarcity threatens economic development and the sustainability of human livelihoods, as well as the environment, especially in developing countries. Globally, agriculture is the largest water-consuming sector, accounting for approximately 70% of all freshwater extraction. Growing competition between agricultural uses and higher-value urban and industrial uses of high-quality freshwater supplies, especially in regions where water scarcity is a major problem, will increase the pressure on this precious resource. In these circumstances, wastewater may provide a reliable source of water for agriculture and enable freshwater to be exchanged for more economically valuable purposes. Concern regarding the risks to human health and environmental quality from microbial and toxic components is a serious obstacle to wastewater reuse, particularly in agriculture. Although powerful approaches and tools for microbial risk assessment and management for the safe use of wastewater are now available, few studies have attempted to provide any mechanism to quantitatively assess and manage the environmental risks resulting from reusing wastewater. In seeking pragmatic solutions for sustainable wastewater reuse, there remains a lack of research incorporating both health and environmental risk assessment and management with economic analysis, in order to quantitatively combine costs, benefits and risks and thereby rank alternative reuse options. This study seeks to enhance the effective reuse of wastewater for irrigation in arid and semi-arid areas. The outcome of the study is an evaluation approach that can be used to assess different reuse strategies and to determine the suitable scale at which treatment alternatives and interventions are possible, feasible and cost-effective, in order to optimize the trade-offs between the risks to public health and the environment and the preservation of the substantial benefits.
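
As a hedged sketch of the kind of quantitative ranking such an evaluation approach implies, the snippet below scores hypothetical reuse options by combining normalized cost, benefit and risk criteria with user-chosen weights. The option names, criteria values and weights are invented for illustration and carry no data from the study.

```python
# Minimal sketch: weighted scoring of wastewater reuse options.
# Options, criteria values and weights are illustrative assumptions.

# Each option: (annual life-cycle cost, benefit score, combined
# health+environmental risk score); lower cost/risk and higher
# benefit are better.
options = {
    "no treatment, restricted crops": (10.0, 40.0, 70.0),
    "pond treatment + drip irrigation": (35.0, 70.0, 30.0),
    "full secondary treatment": (80.0, 85.0, 10.0),
}
weights = {"cost": 0.3, "benefit": 0.4, "risk": 0.3}

def normalize(values):
    """Rescale a criterion column to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

costs, benefits, risks = (normalize(col) for col in zip(*options.values()))

scores = {
    name: (weights["benefit"] * b
           - weights["cost"] * c
           - weights["risk"] * r)
    for name, c, b, r in zip(options, costs, benefits, risks)
}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:+.2f}  {name}")
```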

Keywords: environmental risks, management, life cycle costs, wastewater irrigation

Procedia PDF Downloads 257
535 The Extension of the Kano Model by the Concept of Over-Service

Authors: Lou-Hon Sun, Yu-Ming Chiu, Chen-Wei Tao, Chia-Yun Tsai

Abstract:

It is common practice for many companies to ask employees to provide heart-touching service for customers and to emphasize the attitude of 'customer first'. However, services may not necessarily gain praise, and may actually be considered excessive, if customers do not appreciate such behaviors. In reality, many restaurant businesses try to provide as much service as possible without taking into account whether over-provision may lead to negative customer reception. A survey of 894 people in Britain revealed that 49 percent of respondents consider over-attentive waiters the most annoying aspect of dining out. It can be seen that merely aiming to exceed customers' expectations without actually addressing their needs only further distances and dissociates the standard of service from the goal of customer satisfaction itself. Over-service is defined as 'service provided that exceeds customer expectations, or that customers simply deem redundant, resulting in negative perception'. It was found that customers' reactions and complaints concerning over-service are not as intense as those against service failures caused by an inability to meet expectations; consequently, it is more difficult for managers to become aware of the existence of over-service. Thus the ability to manage over-service behaviors is a significant topic for consideration. The Kano model classifies customer preferences into five categories: attractive quality attributes, one-dimensional quality attributes, must-be quality attributes, indifferent quality attributes and reverse quality attributes. The model is still very popular with researchers exploring quality aspects and customer satisfaction. Nevertheless, several studies have indicated that Kano's model cannot fully capture the nature of service quality, and the concept of over-service can be used to restructure the model and provide a better understanding of the service quality construct. In this research, the structure of Kano's two-dimensional questionnaire will be used to classify the factors into different dimensions, and the same questions will be used in a second questionnaire to identify the over-service experiences of the respondents. The findings of these two questionnaires will be used to analyze the relationship between service quality classification and over-service behaviors. The subjects of this research are customers of fine-dining chain restaurants. Three hundred questionnaires will be issued based on the stratified random sampling method. Items for measurement will be derived from the DINESERV scale; the tangible dimension of the questionnaire will be eliminated because this research focuses on employee behaviors. Quality attributes of the Kano model are often regarded as an instrument for improving customer satisfaction. The extension of the Kano model will not only develop a better understanding of customer needs and expectations but also enhance the management of service quality.
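
To make the two-dimensional classification concrete, here is a minimal sketch of the standard Kano evaluation table, which maps a respondent's answers to the functional (feature present) and dysfunctional (feature absent) forms of a question onto a quality category. The five-point answer scale and the table follow the commonly cited Kano convention; the example responses are invented.

```python
# Minimal sketch: classifying one service attribute with the standard
# Kano evaluation table. Answers use the usual five-point scale; the
# example responses are invented for illustration.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows: functional answer; columns: dysfunctional answer.
# A=attractive, O=one-dimensional, M=must-be, I=indifferent,
# R=reverse, Q=questionable.
KANO_TABLE = [
    # like  must-be neutral live-with dislike
    ["Q",   "A",    "A",    "A",      "O"],   # like
    ["R",   "I",    "I",    "I",      "M"],   # must-be
    ["R",   "I",    "I",    "I",      "M"],   # neutral
    ["R",   "I",    "I",    "I",      "M"],   # live-with
    ["R",   "R",    "R",    "R",      "Q"],   # dislike
]

def classify(functional: str, dysfunctional: str) -> str:
    """Return the Kano category for one respondent's answer pair."""
    return KANO_TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# Example: a diner likes frequent table visits when present but
# dislikes their absence -> one-dimensional quality attribute.
print(classify("like", "dislike"))  # O
```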

Keywords: consumer satisfaction, DINESERV, Kano model, over-service

Procedia PDF Downloads 158
534 Exploratory Analysis and Development of Sustainable Lean Six Sigma Methodologies Integration for Effective Operation and Risk Mitigation in Manufacturing Sectors

Authors: Chukwumeka Daniel Ezeliora

Abstract:

The Nigerian manufacturing sector plays a pivotal role in the country's economic growth and development. However, it faces numerous challenges, including operational inefficiencies and inherent risks that hinder its sustainable growth. This research aims to address these challenges by exploring the integration of Lean and Six Sigma methodologies into manufacturing processes, ultimately enhancing operational effectiveness and risk mitigation. The core of this research is the development of a sustainable Lean Six Sigma framework tailored to the specific needs and challenges of Nigeria's manufacturing environment. This framework aims to streamline processes, reduce waste, improve product quality, and enhance overall operational efficiency. It incorporates principles of sustainability to ensure that the proposed methodologies align with environmental and social responsibility goals. To validate the effectiveness of the integrated Lean Six Sigma approach, case studies and real-world applications were conducted within selected manufacturing companies in Nigeria. Data were collected to measure the impact of the integration on key performance indicators such as production efficiency, defect reduction, and risk mitigation. The findings from this research provide valuable insights and practical recommendations for selected manufacturing companies in South East Nigeria. By adopting sustainable Lean Six Sigma methodologies, these organizations can optimize their operations, reduce operational risks, improve product quality, and enhance their competitiveness in the global market. In conclusion, this research aims to bridge the gap between theory and practice by developing a comprehensive framework for the integration of Lean and Six Sigma methodologies in Nigeria's manufacturing sector. This integration is envisioned to contribute significantly to the sector's sustainable growth, improved operational efficiency, and effective risk mitigation strategies, ultimately benefiting the Nigerian economy as a whole.
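
As a small illustration of how a defect-reduction KPI of the kind mentioned above is typically quantified in Six Sigma work, the sketch below computes defects per million opportunities (DPMO) and the corresponding short-term sigma level with the conventional 1.5-sigma shift; the production figures are invented examples, not data from the study.

```python
# Minimal sketch: defects per million opportunities (DPMO) and the
# corresponding sigma level, using the conventional 1.5-sigma shift.
# The production figures below are invented, not study data.
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    # Short-term sigma = z-score of the defect-free rate + 1.5 shift.
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

d = dpmo(defects=135, units=10_000, opportunities_per_unit=5)
print(f"DPMO = {d:.0f}")                      # 2700
print(f"sigma level = {sigma_level(d):.2f}")  # about 4.28
```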

Keywords: lean six sigma, manufacturing, risk mitigation, sustainability, operational efficiency

Procedia PDF Downloads 197