Search results for: efficient crow search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9579

519 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure and the need of individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant flexible production and constantly changing market and environmental conditions. To lift performance limits, which are inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework, which instructs the transformation of current value chains into digital pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regards to the inspection area and the level of detail. 
The second step, ‘analysis of the value chain’, verifies the lean-ability of processes and places special focus on the integration capacity of digital technologies in order to raise the limits of lean production. The third step, the ‘digital evaluation process’, ensures the usefulness of digital adaptations with regard to their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. Validation and optimization of the proposed method in a German electronics company show that the digital transformation of current value chains based on lean production raises their inherent performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 313
518 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique

Authors: Stefano Iannello, Massimiliano Materazzi

Abstract:

Fluidized bed reactor technologies are one of the most valuable pathways for thermochemical conversions of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of the so-called endogenous bubbles, which generally exert a “lift” effect on the particle itself by dragging it up to the bed surface. Such phenomenon leads to high release of volatile matter into the freeboard and limited mass and heat transfer with particles of the bed inventory. The aim of this work is to get a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis has been undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beechwood and polypropylene particles were used to resemble the biomass and plastic fractions present in waste materials, respectively. The non-invasive X-ray technique was coupled to particle tracking algorithms to characterize the motion of a single feedstock particle during the devolatilization with high resolution. A high-energy X-ray beam passes through the vessel where absorption occurs, depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised to the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed up to 72 fps (frames per second). A comprehensive mathematical model has been developed in order to validate the experimental results. Beech wood and polypropylene particles have shown a very different dynamic behaviour during the pyrolysis stage. 
When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a shorter residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversions, the synthetic polymer softens and melts, and the bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. This additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles, and therefore weaken the drag effect acting on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations.
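The frame-by-frame tracking of the feedstock particle described above can be illustrated with a minimal centroid tracker. This is a sketch only: the abstract does not disclose the actual tracking algorithm, so the thresholding step, the use of the 72 fps frame rate, and the pixel size below are illustrative assumptions.

```python
import numpy as np

def track_particle(frames, threshold=0.5):
    """Track the centroid of a single dense (strongly absorbing) particle
    across X-ray frames. Each frame is a 2D array of normalized absorption
    values; pixels above `threshold` are treated as the particle."""
    path = []
    for frame in frames:
        ys, xs = np.nonzero(frame > threshold)
        if len(xs) == 0:                      # particle not detected
            path.append(None)
            continue
        path.append((xs.mean(), ys.mean()))   # centroid (x, y) in pixels
    return path

def rise_velocity(path, fps=72, pixel_size_mm=0.5):
    """Mean vertical rise velocity (mm/s) from first to last detection,
    assuming the image y-axis points downward (so rising = decreasing y)."""
    pts = [p for p in path if p is not None]
    dy_px = pts[0][1] - pts[-1][1]
    dt = (len(path) - 1) / fps
    return dy_px * pixel_size_mm / dt
```

Linking centroids frame by frame like this yields the particle trajectory, from which residence time in the bed and the lift exerted by endogenous bubbles can be quantified.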

Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray

Procedia PDF Downloads 172
517 Deforestation, Vulnerability and Adaptation Strategies of Rural Farmers: The Case of Central Rift Valley Region of Ethiopia

Authors: Dembel Bonta Gebeyehu

Abstract:

In the study area, the impacts of deforestation on environmental degradation and the livelihoods of farmers manifest in different ways. Farmers are especially vulnerable as they depend on rain-fed agriculture and nearby natural forests. On the other hand, after planting seedlings, disposal and management of the plastic seedling covers is poorly practiced and administered in the country in general and in the study area in particular. If this situation continues, the plastic waste will further accentuate land degradation. Moreover, comprehensive empirical studies on this case are lacking. The results of this study could therefore inform intervention schemes and contribute to the existing knowledge on these issues. The study employed a qualitative approach based on intensive fieldwork data collected via various tools, namely open-ended interviews, focus group discussions, key-informant interviews and non-participant observation. The collected data were transcribed and later categorized under different labels based on pre-determined themes for further analysis. The major causes of deforestation were the expansion of agricultural land, poor administration, population growth, and the absence of conservation methods. The farmers are vulnerable to soil erosion and soil infertility culminating in low agricultural production; loss of grazing land and decline of livestock production; climate change; and deterioration of social capital. Their adaptation and coping strategies include natural conservation measures, diversification of income sources, safety-net programs, and migration. Due to participatory natural resource conservation measures, soil erosion has decreased and protected indigenous woodlands have started to regenerate. These outcomes brought about attitudinal change among farmers. The existing forestation program has many flaws; in particular, after planting seedlings, there is no mechanism for plastic waste disposal and management.
Organizational challenges among the mandated offices were also identified. In the study area, deforestation is aggravated by a number of factors, which have made the farmers vulnerable. The current forestation programs are not well planned, implemented, or coordinated. Sustainable and efficient methods for collecting and reusing seedling plastic covers should be devised. This is possible through creating awareness and organizing micro and small enterprises to reuse, and generate income from, the collected plastic.

Keywords: land-cover and land-dynamics, vulnerability, adaptation strategy, mitigation strategies, sustainable plastic waste management

Procedia PDF Downloads 388
516 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme

Authors: Eleanor Nel

Abstract:

Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. Resultantly, academic institutions face significant pressure from governing bodies to provide evidence on the return for research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often surpasses the development of robust, widely accepted tools to additionally measure research impact at institutions. A publication writing programme, for enhancing research output, was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period of 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period. 
To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve the impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results more effectively from publication writing programmes to institutional strategic objectives to improve research performance and quality, as well as what should be included in a comprehensive evaluation framework.

Keywords: evaluation, framework, impact, research output

Procedia PDF Downloads 76
515 Perception of Tactile Stimuli in Children with Autism Spectrum Disorder

Authors: Kseniya Gladun

Abstract:

Tactile stimulation of the dorsal side of the wrist can strongly influence our attitude toward physical objects, producing pleasant or unpleasant impressions. This study explored different aspects of tactile perception to investigate atypical touch sensitivity in children with autism spectrum disorder (ASD). The study included 40 children with ASD and 40 healthy children aged 5 to 9 years. We recorded rsEEG (sampling rate of 250 Hz) for 20 min using an “Encephalan” EEG amplifier (Medicom MTD, Taganrog, Russian Federation) with 19 AgCl electrodes placed according to the International 10–20 System. Electrodes placed on the left and right mastoids served as joint references under unipolar montage. EEG was registered at the following sites: frontal (Fp1-Fp2; F3-F4), temporal anterior (T3-T4), temporal posterior (T5-T6), parietal (P3-P4), and occipital (O1-O2). Subjects were passively touched with four types of tactile stimuli on the left wrist, presented at a velocity of about 3–5 cm per second. The stimulus materials and procedure were chosen to be the most "pleasant," "rough," "prickly" and "recognizable": a soft cosmetic brush ("pleasant"), a rough shoe brush ("rough"), a Wartenberg pinwheel roller ("prickly"), and, as cognitive tactile stimulation, letters traced by finger (usually the patient’s name; "recognizable"). To designate stimulus onset and offset, we marked the moments when the touch began and ended; since the stimulation was manual, synchronization was not precise enough for event-related measures. EEG epochs were cleaned of eye movements by an ICA-based algorithm in the EEGLAB plugin for MatLab 7.11.0 (MathWorks Inc.). Muscle artifacts were removed by manual data inspection. The response to tactile stimuli differed significantly between children with ASD and healthy children, and also depended on the type of tactile stimulus and the severity of ASD.
Alpha rhythm amplitude increased in the parietal region in response to the pleasant stimulus only; for the other stimulus types ("rough," "prickly," "recognizable"), no amplitude differences were observed. Correlation dimension D2 was higher in healthy children than in children with ASD (main effect, ANOVA). In the ASD group, D2 was lower for pleasant and unpleasant stimuli compared to the background in the right parietal area. Hilbert-transform analysis revealed changes in theta-rhythm frequency, relative to healthy participants, only for rough tactile stimulation and only in the right parietal area. Children with autism spectrum disorders and healthy children thus responded to tactile stimulation differently, with specific frequency distributions in the alpha and theta bands in the right parietal area. Our data support the hypothesis that rsEEG may serve as a sensitive index of the altered neural activity caused by ASD. Children with autism have difficulty distinguishing between the emotional stimuli ("pleasant," "rough," "prickly" and "recognizable").
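The band-amplitude and Hilbert-transform measures used above can be sketched generically as follows. This is not the authors' pipeline: the ICA/EEGLAB preprocessing is assumed to have been done already, and the 8-13 Hz alpha band and filter design are conventional choices, not stated in the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250  # sampling rate (Hz), as reported in the study

def band_envelope(eeg, low, high, fs=FS, order=4):
    """Band-pass filter one EEG channel and return its instantaneous
    amplitude envelope via the analytic signal (Hilbert transform)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)       # zero-phase filtering
    return np.abs(hilbert(filtered))     # envelope of the analytic signal

def mean_alpha_amplitude(eeg, fs=FS):
    """Mean alpha-band (8-13 Hz) envelope amplitude of one channel."""
    return band_envelope(eeg, 8.0, 13.0, fs).mean()
```

Comparing this per-channel envelope between stimulus and background epochs, e.g. at P3/P4, is one standard way to quantify the parietal alpha effect reported above.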

Keywords: autism, tactile stimulation, Hilbert transform, pediatric electroencephalography

Procedia PDF Downloads 250
514 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center

Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. 
By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
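Purely as an illustration of the kinds of modified KPIs named above (the abstract defines no formulas, so these metric definitions are assumptions; PUE is a standard data-center metric added for context), simple versions might look like:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy.
    A traditional data-center efficiency KPI (1.0 is the ideal)."""
    return total_facility_kwh / it_equipment_kwh

def forecast_accuracy_pct(actual, predicted):
    """Predictive accuracy of energy generation/consumption forecasts:
    100% minus the mean absolute percentage error (MAPE)."""
    mape = sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
    return 100.0 * (1.0 - mape)

def ai_cost_savings_pct(baseline_cost, ai_optimized_cost):
    """Relative cost saving (%) attributed to AI-optimized operation."""
    return 100.0 * (baseline_cost - ai_optimized_cost) / baseline_cost
```

Tracked over time, such metrics let an operator separate the contribution of AI-driven optimization from ordinary efficiency gains.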

Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency

Procedia PDF Downloads 33
513 Experimental Study of Vibration Isolators Made of Expanded Cork Agglomerate

Authors: S. Dias, A. Tadeu, J. Antonio, F. Pedro, C. Serra

Abstract:

The goal of the present work is to experimentally evaluate the feasibility of using vibration isolators made of expanded cork agglomerate. Even though this material, also known as insulation cork board (ICB), has mainly been studied for thermal and acoustic insulation purposes, it has strong potential for use in vibration isolation. However, the adequate design of expanded cork block vibration isolators will depend on several factors, such as excitation frequency, static load conditions and the intrinsic dynamic behavior of the material. In this study, transmissibility tests for different static and dynamic loading conditions were performed in order to characterize the material. Since the material’s physical properties, in terms of density and thickness, can influence the vibro-isolation performance of the blocks, this study covered four mass density ranges and four block thicknesses. A total of 72 expanded cork agglomerate specimens were tested. The test apparatus comprises a vibration exciter connected to an excitation mass that holds the test specimen. The test specimens under characterization were loaded successively with steel plates in order to obtain results for different masses. An accelerometer was placed at the top of these masses and another at the base of the excitation mass. The test was performed over a defined frequency range, and the amplitude registered by the accelerometers was recorded in the time domain. A fast Fourier transform (FFT) was applied to each signal (signal 1, vibration of the excitation mass; signal 2, vibration of the loading mass) in order to obtain the frequency domain response, and the maximum amplitude reached by each was registered. The ratio between the amplitude (acceleration) of signal 2 and that of signal 1 gives the transmissibility at each frequency. Repeating this procedure allowed us to plot a transmissibility curve for the frequency range considered.
A number of transmissibility experiments were performed to assess the influence of changing the mass density and thickness of the expanded cork blocks and the experimental conditions (static load and frequency of excitation). The experimental transmissibility tests performed in this study showed that expanded cork agglomerate blocks are a good option for mitigating vibrations. It was concluded that specimens with lower mass density and larger thickness lead to better performance, with higher vibration isolation and a larger range of isolated frequencies. In conclusion, the study of the performance of expanded cork agglomerate blocks presented herein will allow for a more efficient application of expanded cork vibration isolators. This is particularly relevant since this material is a more sustainable alternative to other commonly used non-environmentally friendly products, such as rubber.
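The transmissibility computation described above (FFT of both accelerometer signals, then the amplitude ratio of signal 2 to signal 1) can be sketched as follows. The single-dominant-frequency assumption and the function name are ours, not from the paper.

```python
import numpy as np

def transmissibility(sig_excitation, sig_load, fs):
    """Transmissibility at the excitation frequency: ratio of the peak
    FFT amplitude of the loading-mass signal (signal 2) to that of the
    excitation-mass signal (signal 1). `fs` is the sampling rate in Hz.
    Returns (frequency_hz, transmissibility)."""
    spec1 = np.abs(np.fft.rfft(sig_excitation))
    spec2 = np.abs(np.fft.rfft(sig_load))
    freqs = np.fft.rfftfreq(len(sig_excitation), 1.0 / fs)
    k = spec1[1:].argmax() + 1      # dominant excitation bin, skipping DC
    return freqs[k], spec2[k] / spec1[k]
```

Sweeping the exciter across the frequency range and collecting one such ratio per run reproduces the transmissibility curve; values below 1 indicate frequencies at which the cork block isolates vibration.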

Keywords: expanded cork agglomerate, insulation cork board, transmissibility tests, sustainable materials, vibration isolators

Procedia PDF Downloads 332
512 Open Space Use in University Campuses with User Requirements Analysis: The Case of Eskişehir Osmangazi University Meşelik Campus

Authors: Aysen Celen Ozturk, Hatice Dulger

Abstract:

A university may be defined as a teaching institution consisting of faculties, institutes, colleges, and units that offer undergraduate and graduate education, scientific research and publications. It has scientific autonomy and public legal personality. Today, universities are not only institutions in which students and lecturers experience education, training and scientific work; they also offer social, cultural and artistic activities that strengthen the link with the city and draw city users within the campus borders. Thus, universities contribute to the social and individual development of the country by providing science, art, socio-cultural development, communication and socialization among people of different cultural and social backgrounds. Moreover, universities provide an active social life in which the young population is the majority. This enables users to develop a sense of belonging, increases interaction between academicians and students, and strengthens the learning and producing community by extending academic sharing environments beyond the classrooms. For this reason, besides academic spaces, campus users also need closed and open spaces where they can socialize, spend time together and relax. Public open spaces are the most important social spaces, where individuals meet, express themselves and share. Individuals belonging to different socio-cultural structures and ethnic groups sustain their social experiences through the physical environment they are in, the outdoors, and their actions and sharing in these spaces. While university campuses are being designed for their individual and social development roles, user needs must be determined correctly, and design should proceed in this direction.
Considering that requirements may change over time, user satisfaction should be assessed periodically and existing applications rearranged in line with current demands. This study aims to determine user requirements through the case of the Meşelik Campus of Eskişehir Osmangazi University, Turkey. A Post Occupancy Evaluation (POE) questionnaire, cognitive mapping and in-depth interviews were used in the research process. All these methods show that the students, academicians and other staff on the Meşelik Campus of Eskişehir Osmangazi University find the wayfinding elements insufficient and are in need of efficient landscape design and social spaces. This study is important in terms of determining the needs of users as a design input, which will help improve the quality of common space at Eskişehir Osmangazi University and at other, similar universities.

Keywords: university campuses, public open space, user requirement, post occupancy evaluation

Procedia PDF Downloads 243
511 Development and Characterization of Topical 5-Fluorouracil Solid Lipid Nanoparticles for the Effective Treatment of Non-Melanoma Skin Cancer

Authors: Sudhir Kumar, V. R. Sinha

Abstract:

Background: The topical and systemic toxicity associated with current nonmelanoma skin cancer (NMSC) treatment using 5-Fluorouracil (5-FU) makes it necessary to develop a novel delivery system with lower toxicity and better control over drug release. Solid lipid nanoparticles offer many advantages, such as controlled and localized release of entrapped actives, non-toxicity, and better tolerance. Aim: To investigate the safety and efficacy of 5-FU loaded solid lipid nanoparticles as a topical delivery system for the treatment of nonmelanoma skin cancer. Method: Topical solid lipid nanoparticles of 5-FU were prepared using Compritol 888 ATO (glyceryl behenate) as the lipid component and Pluronic F68 (Poloxamer 188), Tween 80 (Polysorbate 80) and Tyloxapol (4-(1,1,3,3-tetramethylbutyl)phenol polymer with formaldehyde and oxirane) as surfactants. The SLNs were prepared by the emulsification method. The effects of different formulation parameters, viz. type and ratio of surfactant, ratio of lipid, and surfactant:lipid ratio, on particle size and drug entrapment efficiency were investigated. Results: The SLNs were characterized by transmission electron microscopy (TEM), differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), particle size determination, polydispersity index, entrapment efficiency, drug loading, ex vivo skin permeation and skin retention studies, and skin irritation and histopathology studies. TEM results showed that the SLNs were spherical, with sizes in the range of 200-500 nm. Higher encapsulation efficiency was obtained for batches with higher concentrations of surfactant and lipid; a maximum of 64.3% was found for the SLN-6 batch, with a size of 400.1±9.22 nm and a PDI of 0.221±0.031. Optimized SLN batches and a marketed 5-FU cream were compared for flux across rat skin and skin drug retention.
Lower flux and higher skin retention were obtained for the SLN formulation in comparison to the topical 5-FU cream, which ensures less systemic toxicity and better control of drug release across the skin. Chronic skin irritation studies showed no serious erythema or inflammation, and histopathology studies showed no significant change in the physiology of the epidermal layers of rat skin. These studies suggest that the optimized SLN formulation is more efficient than the marketed cream and safer for long-term NMSC treatment regimens. Conclusion: The topical and systemic toxicity associated with long-term use of 5-FU in the treatment of NMSC can be minimized through controlled release, with significant drug retention and minimal flux across the skin. The study may provide a better alternative for effective NMSC treatment.
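The entrapment efficiency and drug loading figures quoted above are typically computed from the total drug added and the free (unentrapped) drug measured after separation. The formulas below are the common definitions, assumed here since the abstract does not state which were used.

```python
def entrapment_efficiency_pct(total_drug_mg, free_drug_mg):
    """Entrapment efficiency (%): fraction of the added drug that ends
    up entrapped in the nanoparticles rather than free in solution."""
    entrapped = total_drug_mg - free_drug_mg
    return 100.0 * entrapped / total_drug_mg

def drug_loading_pct(total_drug_mg, free_drug_mg, lipid_mg):
    """Drug loading (%): entrapped drug relative to the total mass of
    the drug-loaded carrier (entrapped drug + lipid)."""
    entrapped = total_drug_mg - free_drug_mg
    return 100.0 * entrapped / (entrapped + lipid_mg)
```

With these definitions, the reported 64.3% entrapment efficiency for SLN-6 corresponds to roughly a third of the added 5-FU remaining unentrapped.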

Keywords: 5-FU, topical formulation, solid lipid nanoparticles, non melanoma skin cancer

Procedia PDF Downloads 516
510 Isosorbide Bis-Methyl Carbonate: Opportunities for an Industrial Model Based on Biomass

Authors: Olga Gomez De Miranda, Jose R. Ochoa-Gomez, Stefaan De Wildeman, Luciano Monsegue, Soraya Prieto, Leire Lorenzo, Cristina Dineiro

Abstract:

The chemical industry is facing a new revolution. Just as processes based on the exploitation of fossil resources emerged forcefully in the 19th century, society now demands a radical change leading to the complete and irreversible implementation of a circular, sustainable economic model. The implementation of biorefineries will be essential for this. There, renewable raw materials such as sugars and other biomass resources are exploited for the development of new materials that will partially replace their petroleum-derived homologs in a safer and environmentally more benign approach. Isosorbide (1,4:3,6-dianhydro-d-glucidol) is a primary bio-based derivative obtained from plant (poly)saccharides and a very interesting example of a useful chemical produced in biorefineries. It can, in turn, be converted into secondary monomers such as isosorbide bis-methyl carbonate (IBMC), whose main field of application is as a key biodegradable intermediate substituting for bisphenol-A in the manufacture of polycarbonates, or as an alternative to toxic isocyanates in the synthesis of new (non-isocyanate) polyurethanes, both with a huge application market. The new products will present advantageous mechanical or optical properties, as well as improved non-toxicity and biodegradability, in comparison to their petro-derived alternatives. A robust production process for IBMC, a biomass-derived chemical, is presented here. It can be run with different raw material qualities, using dimethyl carbonate (DMC) as both co-reactant and solvent. It consists of the transesterification of isosorbide with DMC under mild operational conditions, using different basic catalysts that remain active across the range of isosorbide characteristics and purities considered. Appropriate isolation processes have also been developed to obtain crude IBMC yields higher than 90%, with oligomer production lower than 10%, independently of the quality of the isosorbide considered.
All of these crude products are suitable for use in polycondensation reactions to obtain polymers. If higher IBMC qualities are needed, a purification treatment based on nanofiltration membranes has also been developed. The IBMC reaction-isolation conditions established in the laboratory have been successfully modeled using appropriate software and transferred to pilot scale (production of 100 kg of IBMC). It has been demonstrated that a highly efficient IBMC production process, able to be scaled up under suitable market conditions, has been obtained. The operational conditions involved in IBMC production entail mild temperatures and low energy needs, no additional solvents, and high operational efficiency, all in accordance with green manufacturing principles.
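The yield figures quoted above (crude IBMC yield above 90%, oligomers below 10%) correspond to standard yield and mass-fraction calculations. The sketch below uses the conventional definitions, which the abstract does not spell out; the 1:1 isosorbide-to-IBMC stoichiometry is our assumption based on the described transesterification.

```python
def reaction_yield_pct(mol_product, mol_limiting_reactant, stoich_ratio=1.0):
    """Molar yield (%): moles of product obtained over the theoretical
    moles expected from the limiting reactant. For the isosorbide + DMC
    transesterification, one mole of isosorbide gives at most one mole
    of IBMC (stoich_ratio = 1)."""
    return 100.0 * mol_product / (mol_limiting_reactant * stoich_ratio)

def crude_composition_pct(ibmc_mass_g, oligomer_mass_g):
    """Mass fractions (%) of IBMC and oligomers in the crude product."""
    total = ibmc_mass_g + oligomer_mass_g
    return 100.0 * ibmc_mass_g / total, 100.0 * oligomer_mass_g / total
```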

Keywords: biomass, catalyst, isosorbide bis-methyl carbonate, polycarbonate, polyurethane, transesterification

Procedia PDF Downloads 132
509 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building

Authors: G. Wimmers

Abstract:

The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint, and because the Master of Engineering Program is actively involved in research on energy-efficient buildings, the decision was made to require the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city at the northern edge of climate zone 6 with average lows between -8 and -10.5 °C in the winter months. The footprint of the building is 30 m x 30 m, with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan on two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building’s gross volume is 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since in reality airflow through all leakages of the building happens simultaneously in both directions. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction due to the valve effect but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio.
The building had to be very airtight, and the details for the window and door installations, as well as all transitions from walls to roof and floor, the connections of the prefabricated wall panels, and all penetrations, had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A), as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors and suspended ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discuss the crucial steps throughout the project phases and the most challenging details.
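The relationship between the reported figures can be checked directly: airtightness in air changes per hour (ach) is the measured leakage airflow at 50 Pa divided by the net volume. A minimal sketch; the leakage flow value below is back-calculated for illustration, since the abstract reports only the resulting 0.07 ach@50Pa and the 7383 m³ net volume:

```python
def ach_at_50pa(leakage_flow_m3_per_h: float, net_volume_m3: float) -> float:
    """Air changes per hour at 50 Pa: leakage airflow divided by net volume."""
    return leakage_flow_m3_per_h / net_volume_m3

# Net volume per the Passive House guideline (walls, floors, and suspended
# ceiling volumes excluded), as reported in the abstract.
net_volume = 7383.0  # m³

# A leakage flow of ~517 m³/h at 50 Pa would yield the reported 0.07 ach.
# This flow value is inferred for illustration, not stated in the abstract.
print(round(ach_at_50pa(517.0, net_volume), 2))  # → 0.07
```

The same function shows how far below the Passive House limit the result is: the 0.6 ach@50Pa requirement would correspond to a leakage flow of roughly 4430 m³/h for this volume.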

Keywords: air changes, airtightness, envelope design, industrial building, passive house

Procedia PDF Downloads 148
508 Functional Analysis of Variants Implicated in Hearing Loss in a Cohort from Argentina: From Molecular Diagnosis to Pre-Clinical Research

Authors: Paula I. Buonfiglio, Carlos David Bruque, Lucia Salatino, Vanesa Lotersztein, Sebastián Menazzi, Paola Plazas, Ana Belén Elgoyhen, Viviana Dalamón

Abstract:

Hearing loss (HL) is the most prevalent sensorineural disorder, affecting about 10% of the global population, with more than half of cases due to genetic causes. About 1 in 500-1000 newborns presents congenital HL. Most patients are non-syndromic, with an autosomal recessive mode of inheritance. To date, more than 100 genes have been related to HL. Therefore, whole-exome sequencing (WES) has become a cost-effective alternative approach for molecular diagnosis. Nevertheless, new challenges arise from the detection of novel variants, in particular missense changes, whose genotype-to-phenotype correlation is not always straightforward. In this work, we aimed to identify the genetic causes of HL in isolated and familial cases by designing a multistep approach to analyze target genes related to hearing impairment. Moreover, we performed in silico and in vivo analyses to further study the effect of some of the novel variants identified on hair cell function using the zebrafish model. A total of 650 patients were studied by Sanger sequencing and Gap-PCR in the GJB2 and GJB6 genes, respectively, diagnosing 15.5% of sporadic cases and 36% of familial ones. Overall, 50 different sequence variants were detected. Fifty of the undiagnosed patients with moderate HL were tested for deletions in the STRC gene by the multiplex ligation-dependent probe amplification (MLPA) technique, yielding a diagnosis in 6%. After this initial screening, 50 families were selected to be analyzed by WES, achieving a diagnosis in 44% of them. Half of the identified variants were novel. A missense variant in the MYO6 gene, detected in a family with postlingual HL, was selected for further analysis. Protein modeling with AlphaFold2 was performed, supporting its pathogenic effect. In order to functionally validate this novel variant, a knockdown phenotype rescue assay in zebrafish was carried out.
Injection of wild-type MYO6 mRNA into embryos rescued the phenotype, whereas the mutant MYO6 mRNA (carrying the c.2782C>A variant) had no effect. These results strongly suggest a deleterious effect of this variant on the mobility of stereocilia in zebrafish neuromasts, and hence on the auditory system. In the present work, we demonstrated that our algorithm is suitable as a sequential multigenic approach to HL in our cohort. These results highlight the importance of a combined strategy for identifying candidate variants, as well as of in silico and in vivo studies to analyze and prove their pathogenicity and achieve a better understanding of the mechanisms underlying the pathophysiology of hearing impairment.
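The sequential multistep approach described above (GJB2/GJB6 screening, then STRC MLPA, then WES) can be pictured as a tiered diagnostic cascade in which each tier resolves a fraction of the still-undiagnosed cases. A schematic sketch; the case count and per-tier yields are illustrative placeholders loosely based on figures in the abstract, not the cohort's actual patient flow:

```python
def tiered_diagnosis(cases, tiers):
    """Run cases through an ordered diagnostic cascade; each tier resolves
    a fraction of the still-undiagnosed cases. Returns per-tier counts."""
    undiagnosed = cases
    resolved = {}
    for name, yield_rate in tiers:
        solved = round(undiagnosed * yield_rate)
        resolved[name] = solved
        undiagnosed -= solved
    resolved["undiagnosed"] = undiagnosed
    return resolved

# Tier order follows the abstract; the yield rates are illustrative.
tiers = [
    ("GJB2/GJB6 Sanger + Gap-PCR", 0.20),
    ("STRC MLPA",                  0.06),
    ("WES",                        0.44),
]
print(tiered_diagnosis(100, tiers))
```

The point of the cascade is cost ordering: cheap targeted tests first, with WES reserved for cases the earlier tiers leave unresolved.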

Keywords: diagnosis, genetics, hearing loss, in silico analysis, in vivo analysis, WES, zebrafish

Procedia PDF Downloads 94
507 Advancing Women's Participation in SIDS' Renewable Energy Sector: A Multicriteria Evaluation Framework

Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos

Abstract:

Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than that of their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" cover attributes of geographical locations or environments. This dimension includes walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist.
In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives.
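A multicriteria evaluation of this kind is commonly implemented as a weighted linear combination of normalized factor scores for each spatial unit. The sketch below assumes hypothetical factor names, scores, and equal weights; the framework's actual 24 factors and weighting scheme are not given in the abstract:

```python
def mce_score(factors: dict, weights: dict) -> float:
    """Weighted linear combination of normalized factor scores (0..1).

    Higher scores indicate a more enabling environment. Factor names and
    weights here are illustrative, not the framework's actual ones.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * score for name, score in factors.items())

# One spatial unit scored on one illustrative factor per dimension.
factors = {
    "education_parity":   0.8,  # Individual
    "legal_protection":   0.6,  # Contextual
    "transport_access":   0.4,  # Accessibility
    "electricity_access": 0.9,  # Place Characterization
}
weights = {name: 0.25 for name in factors}  # equal weights for illustration
print(round(mce_score(factors, weights), 3))  # → 0.675
```

Run per raster cell or administrative unit, such scores produce exactly the kind of map of enabling and restricting areas the abstract describes.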

Keywords: gender, women, spatial analysis, renewable energy, access

Procedia PDF Downloads 69
506 A Multicriteria Evaluation Framework for Enhancing Women's Participation in SIDS Renewable Energy Sector

Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos

Abstract:

Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than that of their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" cover attributes of geographical locations or environments. This dimension includes walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist.
In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives.

Keywords: gender, women, spatial analysis, renewable energy, access

Procedia PDF Downloads 83
505 Minding the Gap: Consumer Contracts in the Age of Online Information Flow

Authors: Samuel I. Becher, Tal Z. Zarsky

Abstract:

The digital world has become part of our DNA. The way e-commerce, human behavior, and law interact and affect one another is rapidly and significantly changing. Among other things, the internet equips consumers with a variety of platforms to share information in a volume we could not imagine before. As part of this development, online information flows allow consumers to learn about businesses and their contracts in an efficient and quick manner. Consumers can become informed by the impressions that other, experienced consumers share and spread. In other words, consumers may familiarize themselves with the contents of contracts through the experiences other consumers have had. Online and offline, the relationship between consumers and businesses is most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap (“the Gap”) between how firms draft their contracts on the one hand, and how firms actually treat consumers on the other. Interestingly, the Gap is frequently manifested by deviation from the written contract in favor of consumers. In other words, firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms’ leniency to the growing list of firms’ suspicious behaviors.
At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair contract terms in consumer contracts. Naturally, therefore, consumer law should enable, if not encourage, firms’ lenient practices. Firms’ willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach. Apparently, such behavior should not be second-guessed. However, at times online tools, firms’ behaviors and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due respect, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers’ understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers’ perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms’ leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they deliberately and consistently deviated from it.

Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior

Procedia PDF Downloads 198
504 Carbon Nanofibers as the Favorite Conducting Additive for Mn₃O₄ Catalysts for Oxygen Reactions in Rechargeable Zinc-Air Battery

Authors: Augustus K. Lebechi, Kenneth I. Ozoemena

Abstract:

Rechargeable zinc-air batteries (RZABs) have been described as one of the most viable next-generation ‘beyond-the-lithium-ion’ battery technologies, with great potential for renewable energy storage. They are safe, with a high specific energy density (1086 Wh/kg), environmentally benign, and low-cost, which matters especially in resource-limited African countries. For widespread commercialization, the sluggish oxygen reaction kinetics pose a major challenge that impedes the reversibility of the system. Hence, there is a need for low-cost and highly active bifunctional electrocatalysts. Manganese oxide catalysts on carbon conducting additives remain the best couple for the realization of such low-cost RZABs. In this work, hausmannite Mn₃O₄ nanoparticles were synthesized by annealing commercial electrolytic manganese dioxide (EMD), multi-walled carbon nanotubes (MWCNTs) were synthesized via the chemical vapor deposition (CVD) method, and carbon nanofibers (CNFs) were synthesized via electrospinning with subsequent carbonization. Both the Mn₃O₄ catalysts and the carbon conducting additives (MWCNT and CNF) were thoroughly characterized using powder X-ray diffraction (XRD), scanning electron microscopy (SEM), thermogravimetric analysis (TGA) and X-ray photoelectron spectroscopy (XPS). The composite electrocatalysts (Mn₃O₄/CNT and Mn₃O₄/CNF) were investigated for the oxygen evolution reaction (OER) and the oxygen reduction reaction (ORR) in an alkaline medium. Using established modalities for evaluating the electrocatalytic performance of materials (including double-layer capacitance, electrochemically active surface area, roughness factor, specific current density, and catalytic stability), the CNFs proved to be the more efficient conducting additive for the Mn₃O₄ catalyst.
From the DFT calculations, the higher performance of the CNFs over the MWCNTs is related to the ability of the CNFs to allow a more favorable distribution of the d-electrons of the manganese (Mn) and an enhanced synergistic effect with Mn₃O₄, giving weaker adsorption energies of the oxygen intermediates (O*, OH* and OOH*). In a proof-of-concept, Mn₃O₄/CNF was investigated as the air cathode for a rechargeable zinc-air battery (RZAB) in a micro-3D-printed cell configuration. The RZAB showed good performance in terms of open circuit voltage (1.77 V), maximum power density (177.5 mW cm⁻²), areal discharge energy and cycling stability, comparable to Pt/C (20 wt%) + IrO₂. The findings here provide fresh physicochemical perspectives on the future design and utility of CNFs for developing manganese-based RZABs.

Keywords: bifunctional electrocatalyst, oxygen evolution reaction, oxygen reduction reactions, rechargeable zinc-air batteries

Procedia PDF Downloads 64
503 Prevalence of Antibiotic Resistant Enterococci in Treated Wastewater Effluent in Durban, South Africa and Characterization of Vancomycin and High-Level Gentamicin-Resistant Strains

Authors: S. H. Gasa, L. Singh, B. Pillay, A. O. Olaniran

Abstract:

Wastewater treatment plants (WWTPs) have been implicated as the leading reservoir for antibiotic resistant bacteria (ARB), including Enterococcus spp., and antibiotic resistance genes (ARGs) worldwide. Enterococci are a group of clinically significant bacteria that have gained much attention as a result of their antibiotic resistance. They play a significant role as a principal cause of nosocomial infections and in the dissemination of antimicrobial resistance genes in the environment. The main objective of this study was to ascertain the role of WWTPs in Durban, South Africa as potential reservoirs for antibiotic resistant Enterococci (ARE) and their related ARGs. Furthermore, the antibiogram and resistance gene profiles of Enterococci recovered from treated wastewater effluent and receiving surface water in Durban were also investigated. Using the membrane filtration technique, Enterococcus selective agar and selected antibiotics, ARE were enumerated in samples (influent, activated sludge, before chlorination and final effluent) collected from two WWTPs, as well as from upstream and downstream of the receiving surface water. Two hundred Enterococcus isolates recovered from the treated effluent and receiving surface water were identified by biochemical and PCR-based methods, and their antibiotic resistance profiles were determined by the Kirby-Bauer disc diffusion assay, while PCR-based assays were used to detect the presence of resistance and virulence genes. A high prevalence of ARE was obtained at both WWTPs, with values reaching a maximum of 40%. The influent and activated sludge samples showed the greatest prevalence of ARE, with lower values observed in the before- and after-chlorination samples. Of the 44 vancomycin and high-level gentamicin-resistant isolates, 11 were identified as E. faecium, 18 as E. faecalis, and 4 as E. hirae, while 11 were classified as “other” Enterococcus species.
High-level resistance to gentamicin (39%) and resistance to vancomycin (61%) were recorded among the species tested. The most commonly detected virulence gene was gelE (44%), followed by asa1 (40%), while cylA and esp were detected in only 2% of the isolates. The most prevalent aminoglycoside resistance genes were aac(6')-Ie-aph(2''), aph(3')-IIIa, and ant(6')-Ia, detected in 43%, 45% and 41% of the isolates, respectively. A positive correlation was observed between resistance phenotypes to high levels of aminoglycosides and the presence of all aminoglycoside resistance genes. Resistance genes for glycopeptides, vanB (37%) and vanC-1 (25%), and macrolides, ermB (11%) and ermC (54%), were detected in the isolates. These results show the need for more efficient wastewater treatment and disposal in order to prevent the release of virulent and antibiotic resistant Enterococcus species and safeguard public health.

Keywords: antibiogram, enterococci, gentamicin, vancomycin, virulence signatures

Procedia PDF Downloads 219
502 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces

Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur

Abstract:

In order to assess water availability in the soil, it is crucial to have information about distributed soil moisture content; this parameter helps to understand the effect of humidity on the exchange between soil, plant cover and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. On the other hand, aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface's ability to absorb the momentum of the airflow. In numerous applications of surface hydrology and meteorology, aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. In this regard, it is important to consider the impact of atmospheric factors in general, and natural erosion in particular, on soil evolution and on the characterization and prediction of its physical parameters. The study of wind-induced movements over a vegetated soil surface, whether spaced plants or full plant cover, is motivated by significant research efforts in agronomy and biology. The major known problem in this area is crop damage by wind, a booming field of research. Obviously, most soil surface models require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of soil surface humidity to take into account a volume component in the backscattered radar signal problem.
As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation, due to the vegetation's backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the soil surface layer. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of both layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology relies on a microwave/optical model which has been used to calculate the scattering behavior of the aerodynamic vegetation-covered area by defining the scattering of the vegetation and the soil below.
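The multiscale description referred to above can be illustrated with one level of the Mallat pyramid algorithm using the Haar wavelet, which splits a height profile into a coarse approximation and a detail (roughness) band. This is a one-dimensional sketch for illustration only; the paper's actual wavelet and two-dimensional formulation are not specified in the abstract:

```python
import math

def haar_step(signal):
    """One level of Mallat's pyramid algorithm with the orthonormal Haar
    wavelet: returns (approximation, detail), each half the input length."""
    assert len(signal) % 2 == 0, "length must be even"
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

# A toy height profile: large-scale trend plus small-scale roughness.
profile = [0.0, 0.1, 0.5, 0.4, 1.0, 1.1, 1.5, 1.4]
approx, detail = haar_step(profile)

# Orthonormality preserves total energy across the two bands, which is what
# lets each scale's roughness be treated as an independent Gaussian process.
energy_in = sum(x * x for x in profile)
energy_out = sum(x * x for x in approx) + sum(x * x for x in detail)
print(abs(energy_in - energy_out) < 1e-9)  # → True
```

Applying `haar_step` recursively to the approximation band yields the cascade of spatial scales that the multiscale roughness description superposes.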

Keywords: aerodynamic, bi-dimensional, vegetation, synergistic

Procedia PDF Downloads 269
501 Suggestion of Methodology to Detect Building Damage Level Collectively with Flood Depth Utilizing Geographic Information System at Flood Disaster in Japan

Authors: Munenari Inoguchi, Keiko Tamura

Abstract:

In 2019, Japan suffered earthquake, typhoon, and flood disasters. In particular, 38 of 47 prefectures were affected by Typhoon #1919, which occurred in October 2019. In this disaster, 99 people died, three went missing, and 484 were injured. Furthermore, 3,081 buildings totally collapsed and 24,998 buildings were half-collapsed. Once a disaster occurs, local responders have to inspect the damage level of each building themselves in order to certify building damage for survivors, so that they can start their life reconstruction process. In this disaster, the total number of buildings to be inspected was extremely high. Based on this situation, the Cabinet Office of Japan approved an efficient way to detect building damage levels, namely collective detection. However, it proposed just a guideline, and local responders had to establish a concrete and reliable method by themselves. To address this issue, we decided to establish an effective and efficient methodology to detect building damage levels collectively from flood depth. Since flood depth depends on land elevation, we decided to utilize a GIS (Geographic Information System) to analyze the elevation spatially. We focused on spatial interpolation, an analysis tool usually used to survey groundwater levels. In establishing the methodology, we considered four key points: 1) how to satisfy the conditions defined in the guideline approved by the Cabinet Office for detecting building damage levels, 2) how to satisfy survivors with the resulting building damage levels, 3) how to maintain equitability and fairness, because the detection of building damage levels is executed by a public institution, and 4) how to reduce the cost in time and human resources, because responders do not have enough of either for disaster response. We then proposed a methodology for detecting building damage levels collectively from flood depth utilizing GIS, in five steps.
First is to obtain the boundary of the flooded area. Second is to collect actual flood depths as samples over the flooded area. Third is to execute spatial interpolation with the sampled flood depths to derive a two-dimensional flood depth extent. Fourth is to divide the area into blocks along road lines according to four categories of flood depth (non-flooded, over the floor to 100 cm, 100 cm to 180 cm, and over 180 cm), so that the result is acceptable to survivors. Fifth is to assign a flood depth level to each building. In Koriyama City, Fukushima Prefecture, we proposed the collective detection methodology for building damage levels described above, and local responders decided to adopt it for Typhoon #1919 in 2019. Together with local responders, we then collectively detected the building damage level of over 1,000 buildings. We have received positive feedback that the methodology was simple and that it reduced the cost in time and human resources.
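Steps three through five amount to interpolating point samples to a continuous depth surface and binning the interpolated depth at each building. The sketch below uses inverse-distance weighting as a stand-in for whichever interpolation tool the GIS actually provides; the coordinates and sample depths are illustrative, while the four depth categories follow the methodology:

```python
def idw_depth(x, y, samples, power=2.0):
    """Inverse-distance-weighted flood depth at (x, y) from point samples
    [(sx, sy, depth_cm), ...]. A stand-in for the GIS interpolation tool."""
    num = den = 0.0
    for sx, sy, depth in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return depth  # exactly at a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * depth
        den += w
    return num / den

def damage_category(depth_cm):
    """The four flood depth categories used in the methodology."""
    if depth_cm <= 0:
        return "non-flooded"
    if depth_cm < 100:
        return "over the floor to 100 cm"
    if depth_cm < 180:
        return "100 cm to 180 cm"
    return "over 180 cm"

# Sampled depths (cm) at three points; classify one building location.
samples = [(0.0, 0.0, 50.0), (10.0, 0.0, 150.0), (0.0, 10.0, 200.0)]
depth = idw_depth(4.0, 4.0, samples)
print(damage_category(depth))  # → 100 cm to 180 cm
```

In the actual workflow the interpolated surface would first be cut into road-bounded blocks; binning per block rather than per raw interpolated value is what gives survivors a result that reads as fair.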

Keywords: building damage inspection, flood, geographic information system, spatial interpolation

Procedia PDF Downloads 124
500 Lateral Retroperitoneal Transpsoas Approach: A Practical Minimal Invasive Surgery Option for Treating Pyogenic Spondylitis of the Lumbar Vertebra

Authors: Sundaresan Soundararajan, Chor Ngee Tan

Abstract:

Introduction: Pyogenic spondylitis, otherwise treated conservatively with long-term antibiotics, requires surgical debridement and reconstruction in about 10% to 20% of cases. The classical approach adopted by many surgeons has always been the anterior approach, which ensures thorough and complete debridement. This, however, comes with high rates of morbidity due to the nature of its access. The direct lateral retroperitoneal approach, which has been growing in usage for degenerative lumbar diseases, has potential for treating pyogenic spondylitis, given its ease of access and relatively low risk of complications. Aims/Objectives: The objective of this study was to evaluate the effectiveness and clinical outcome of lateral approach surgery in the surgical management of pyogenic spondylitis of the lumbar spine. Methods: A retrospective chart analysis was done on all patients who presented with pyogenic spondylitis (lumbar discitis/vertebral osteomyelitis) and underwent direct lateral retroperitoneal lumbar vertebral debridement and posterior instrumentation between 2014 and 2016. Data on blood loss, operating time, surgical complications, clinical outcomes and fusion rates were recorded. Results: A total of 6 patients (3 male and 3 female) underwent this procedure at a single institution by a single surgeon during the defined period. One patient presented with an infected implant (PLIF) and vertebral osteomyelitis, while the other five presented with single-level spondylodiscitis. All patients underwent lumbar debridement, iliac strut grafting and posterior instrumentation (revision of screws for the infected PLIF case). The mean operating time was 308.3 minutes for all 6 cases. Mean blood loss was 341 cc (range 200 cc to 600 cc). The presenting symptom of back pain resolved in all 6 cases, while the 2 cases that presented with lower limb weakness had improvement of neurological deficits.
One patient had a dislodged strut graft while posterior instrumentation was being performed and needed graft revision intraoperatively. Infective markers subsequently normalized in all patients. All subjects also showed radiological evidence of fusion at 6-month follow-up. Conclusions: The lateral approach for treating pyogenic spondylitis is a viable option, as it allows debridement and reconstruction without the risks that come with anterior approaches. It allows efficient debridement, short surgical time, moderate blood loss and a low risk of vascular injuries. The clinical outcomes and fusion rates of this approach also support its use as a practical MIS option for such infection cases.

Keywords: lateral approach, minimally invasive, pyogenic spondylitis, XLIF

Procedia PDF Downloads 177
499 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision-trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data.
The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
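Although the abstract does not give the exact WIGR formula, the idea of an information-gain splitting criterion adjusted for ordinal error severity can be sketched as follows; the severity term and its weight `alpha` are a plausible reconstruction for illustration, not the authors' definition:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def ordinal_severity(labels):
    """Mean absolute distance of each ordinal label (0, 1, 2, ...) from the
    subset's majority class: distant errors count more than adjacent ones."""
    if not labels:
        return 0.0
    majority = Counter(labels).most_common(1)[0][0]
    return sum(abs(y - majority) for y in labels) / len(labels)

def weighted_gain(parent, children, alpha=0.5):
    """Information gain discounted by the ordinal error severity of the
    children; a plausible stand-in for the paper's WIGR measure."""
    n = len(parent)
    gain = entropy(parent) - sum(len(c) / n * entropy(c) for c in children)
    severity = sum(len(c) / n * ordinal_severity(c) for c in children)
    return gain - alpha * severity

# Grades encoded ordinally (0 = low .. 2 = high). A split that keeps the
# extreme classes apart scores higher than one that mixes them.
parent = [0, 0, 1, 1, 2, 2]
clean = [[0, 0, 1], [1, 2, 2]]
mixed = [[0, 2, 1], [1, 0, 2]]
print(weighted_gain(parent, clean) > weighted_gain(parent, mixed))  # → True
```

The key design point mirrors the abstract: a confusion between adjacent classes (a grade one step off) is penalized less than one between distant classes, which a plain information-gain criterion treats identically.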

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 100
498 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context

Authors: Andrea Fiorista

Abstract:

The learning of prepositions is a notoriously problematic aspect of foreign language instruction, and Italian is certainly no exception. In their prototypical function, prepositions express schematic relations between two entities in a highly abstract, typically image-schematic way. In other words, prepositions encode concepts such as directionality, the location of objects in space and time and, in Cognitive Linguistics’ terms, the position of a trajector with respect to a landmark. Learners with different native languages may conceptualize them differently, which means they must recategorize (or create new categories) to fit the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not help learners carry out this task, as they tend to provide partial and idiosyncratic descriptions, leaving learners to memorize them, most of the time without success. In their prototypical meaning, prepositions specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extended uses of the Italian preposition a, in order to propose effective pedagogical solutions for the teaching/learning process. Image schemas, cognitive metaphors and embodiment are efficient cognitive tools for such a task. While learning the merely spatial use of the preposition a (e.g. Sono a Roma = I am in Rome; vado a Roma = I am going to Rome, …) is quite straightforward, it is more complex when a appears in constructions such as verbs of motion + a + infinitive (e.g. Vado a studiare = I am going to study), the inchoative periphrasis (e.g. Tra poco mi metto a leggere = In a moment I will start reading), and the causative construction (e.g. 
Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a Focus on Form teaching intervention, in which a basic cognitive schema is used to help teachers and students respectively explain and understand the extended uses of a. The educational material employed translates Cognitive Linguistics’ theoretical assumptions, such as image schemas and cognitive metaphors, into simple images or proto-scenes that are easily comprehensible for learners. Illustrative material, indeed, is meant to make metalinguistic content more accessible. Moreover, the concept of embodiment is applied pedagogically through activities involving motion and learners’ bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning makes the learning process more effective in both the short and long term.

Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL

Procedia PDF Downloads 87
497 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and is prone to observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists’ grading is usually coarse, while a finer, continuous differentiation grade may help to stratify CRC patients better. In this study, a deep-learning-based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists as high-, medium-, low-differentiation or normal tissue, corresponding to tumor with clear, unclear, or no gland structure and non-tumor, respectively. A differentiation prediction model was then developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models’ predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas COAD (TCGA-COAD) cohort was enrolled in this study. 
For the gland segmentation model, the area under the receiver operating characteristic curve (ROC) reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963, and 0.981 and accuracy reached 0.880, 0.923, 0.668, and 0.881 for the low-, medium-, high-differentiation and normal tissue groups in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the “maxstat” method, almost the same as the WHO system’s cut-off point of 50%. Both the WHO system’s cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both log-rank test p-values were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer, continuous differentiation grade can also be calculated automatically through the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, with an optimized cut-off point almost the same as that of the WHO tumor grading system. A tool that automatically calculates differentiation grade may show potential in therapy decision making and personalized treatment.
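The abstract does not give the exact formula for the continuous grade. One plausible reading, where the grade is the gland-forming (high-differentiation) fraction of the predicted tumor area and each patient takes the maximum over their WSIs, can be sketched as follows; the helper names and the 'high'/'medium'/'low' labels are hypothetical:

```python
import numpy as np

def differentiation_grade(region_labels, region_areas):
    """Continuous grade in [0, 1]: fraction of tumor area with clear
    gland structure, echoing the WHO gland-forming percentage.
    region_labels: 'high' | 'medium' | 'low' per predicted tumor region."""
    areas = np.asarray(region_areas, dtype=float)
    tumor_area = areas.sum()
    if tumor_area == 0:
        return None  # WSI without tumor regions is excluded
    gland_area = areas[[lab == 'high' for lab in region_labels]].sum()
    return gland_area / tumor_area

def patient_grade(wsi_grades, cutoff=0.50):
    """Take the highest grade over a patient's WSIs and dichotomize at a
    WHO-like cutoff (the study's optimized cutoff was 51%)."""
    grades = [g for g in wsi_grades if g is not None]
    if not grades:
        return None, None
    g = max(grades)
    return g, ('well/moderate' if g > cutoff else 'poor')
```

With per-patient continuous grades in hand, the survival step would dichotomize them at the cutoff and compare the two groups with Kaplan-Meier curves and a log-rank test.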

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 134
496 A Lightweight Interlock Block from Foamed Concrete with Construction and Agriculture Waste in Malaysia

Authors: Nor Azian Binti Aziz, Muhammad Afiq Bin Tambichik, Zamri Bin Hashim

Abstract:

The rapid development of the construction industry has contributed to increased construction waste, with concrete waste being among the most abundant. This waste is generated from ready-mix batching plants after the concrete cube testing process is completed and is disposed of in landfills, increasing solid waste management costs. This study aims to evaluate the engineering characteristics of foamed concrete mixed with construction and agricultural waste to determine the usability of recycled materials in the construction of non-load-bearing walls. The study involves the collection of construction waste, such as recycled aggregates (RCA) obtained from the remains of tested concrete cubes, which are then tested in the laboratory. Additionally, agricultural waste, such as rice husk ash, is mixed into foamed concrete interlock blocks to enhance their strength. The optimal density of foamed concrete for this study was determined by mixing mortar and foaming agent to achieve the minimum targeted compressive strength required for non-load-bearing walls. The tests conducted in this study involved two phases. In Phase 1, elemental analysis using an X-ray fluorescence spectrometer (XRF) was conducted on the materials used in the production of interlock blocks, such as sand, recycled aggregate/recycled concrete aggregate (RCA), and rice husk ash (RHA). Phase 2 involved physical and thermal tests, such as a compressive strength test, a heat conductivity test, and a fire resistance test, on the foamed concrete mixtures. The results showed that foamed concrete can produce lightweight interlock blocks. X-ray fluorescence spectrometry plays a crucial role in the characterization, quality control, and optimization of foamed concrete mixes containing construction and agricultural waste. 
Depending on the unique mix composition of the foamed concrete, the resulting chemical and physical properties, and the nature of the replacement (either as cement or fine aggregate replacement), each waste contributes differently to the performance of the foamed concrete. Interlocking blocks made from foamed concrete can be advantageous due to their reduced weight, which makes them easier to handle and transport than traditional concrete blocks. Additionally, foamed concrete typically offers good thermal and acoustic insulation properties, making it suitable for a variety of building projects. Using foamed concrete to produce lightweight interlock blocks could therefore contribute to more efficient and sustainable construction practices. Additionally, RCA derived from concrete cube waste can serve as a substitute for sand in producing lightweight interlock blocks.

Keywords: construction waste, recycled aggregates (RCA), sustainable concrete, structure material

Procedia PDF Downloads 54
495 Determinants of Profit Efficiency among Poultry Egg Farmers in Ondo State, Nigeria: A Stochastic Profit Function Approach

Authors: Olufunke Olufunmilayo Ilemobayo, Barakat O. Abdulazeez

Abstract:

Profit making among poultry egg farmers has been a challenge to the efficient distribution of scarce farm resources over the years, due mainly to a low capital base, inefficient management, technical inefficiency, and economic inefficiency; thus, poultry egg production has underperformed, characterised by low profit margins. Previous studies have focused mainly on broiler production and its efficiency; however, there is a paucity of information on profit efficiency in the study area. Hence, the determinants of profit efficiency among poultry egg farmers in Ondo State, Nigeria were investigated. A purposive sampling technique was used to obtain primary data from poultry egg farmers in the Owo and Akure local government areas of Ondo State through a well-structured questionnaire. Socio-economic characteristics such as age, gender, educational level, marital status, household size, access to credit, and extension contact, as well as input and output data such as flock size, cost of feeders and drinkers, cost of feed, cost of labour, cost of drugs and medications, cost of energy, price of a crate of table eggs, and price of spent layers, were the variables used in the study. Data were analysed using descriptive statistics, budgeting analysis, and a stochastic profit function/inefficiency model. Results of the descriptive statistics show that 52 per cent of the poultry farmers were between 31-40 years old, 62 per cent were male, 90 per cent had tertiary education, 66 per cent were primarily poultry farmers, 78 per cent were original poultry farm owners, and 55 per cent had more than 5 years’ work experience. Descriptive statistics on costs and returns indicated that 64 per cent of returns were from sales of eggs, while the remaining 36 per cent were from sales of spent layers. The cost of feeding took the highest proportion of the cost of production (69 per cent) and the cost of medication the lowest (7 per cent). 
A positive gross margin of ₦5,518,869.76, a net farm income of ₦5,500,446.82, and a net return on investment of 0.28 indicated that poultry egg production is profitable. Equipment cost (22.757), feeding cost (18.3437), labour cost (136.698), flock size (16.209), and drug and medication cost (4.509) were factors affecting profit efficiency, while education (-2.3143), household size (-18.4291), access to credit (-16.027), and experience (-7.277) were determinants of profit efficiency. Education, household size, access to credit and experience in poultry production were the main determinants of the profit efficiency of poultry egg production in Ondo State. Other factors affecting profit efficiency were cost of feeding, cost of labour, flock size, and cost of drugs and medication; these positively and significantly influenced profit efficiency in Ondo State, Nigeria.

Keywords: cost and returns, economic inefficiency, profit margin, technical inefficiency

Procedia PDF Downloads 129
494 Nitrate Photoremoval in Water Using Nanocatalysts Based on Ag / Pt over TiO2

Authors: Ana M. Antolín, Sandra Contreras, Francesc Medina, Didier Tichit

Abstract:

Introduction: High levels of nitrates (> 50 ppm NO3-) in drinking water are potentially risky to human health. In recent years, nitrate concentrations in groundwater have been rising in the EU and other countries. Conventional catalytic processes for reducing nitrate into N2 and H2O lead to toxic intermediates and by-products, such as NO2-, NH4+, and NOx gases. Alternatively, photocatalytic nitrate removal using solar irradiation and heterogeneous catalysts is a very promising and ecofriendly technique. It has scarcely been explored, and more research on highly efficient catalysts is still needed. In this work, different nanocatalysts supported on Aeroxide Titania P25 (P25) have been prepared, varying: the Ag loading (0.5-4 wt. %); the Pt loading (2, 4 wt. %); the Pt precursor (H2PtCl6/K2PtCl6); and the impregnation order of the two metals. Pt was chosen in order to increase the selectivity to N2 and decrease that to NO2-. The catalysts were characterized by nitrogen physisorption, X-ray diffraction, UV-visible spectroscopy, TEM, and X-ray photoelectron spectroscopy. The aim was to determine the influence of the composition and the preparation method of the catalysts on the conversion and selectivity of the nitrate reduction, as well as to reach an overall and better understanding of the process. Nanocatalyst synthesis: For the mono- and bimetallic catalyst preparation, drop-wise wetness impregnation of the precursors (AgNO3, H2PtCl6, K2PtCl6) followed by a reduction step (NaBH4) was used to obtain the metal colloids. Results and conclusions: Denitration experiments were performed in a 350 mL PTFE batch reactor under inert standard operational conditions, ultraviolet irradiation (λ=254 nm (UV-C); λ=365 nm (UV-A)), and in the presence/absence of hydrogen gas as a reducing agent, in contrast to most studies, which use oxalic or formic acid. Samples were analyzed by ion chromatography. 
Blank experiments using P25 (dark conditions), hydrogen only, and UV irradiation without hydrogen demonstrated a clear influence of the presence of hydrogen on nitrate reduction. They also demonstrated that UV irradiation increased the selectivity to N2. Interestingly, the best activity was obtained under ultraviolet lamps, especially at a wavelength closer to visible light (λ = 365 nm), and with H2. 2% Ag/P25 led to the highest NO3- conversion among the monometallic catalysts; however, the nitrite quantities have to be diminished. On the other hand, practically no nitrate conversion was observed with the monometallic Pt/P25 catalysts. Therefore, a loading of 2% Ag was chosen for the bimetallic catalysts. Regarding the bimetallic catalysts, the metal impregnation order, amount, and Pt precursor highly affect the results. Higher selectivity to the desirable N2 gas is obtained when Pt is added first, especially with K2PtCl6 as the Pt precursor. This suggests that when Pt is added second, it covers the Ag particles, which are the most active in this reaction. It can be concluded that Ag enables the nitrate reduction step to nitrite, and Pt the nitrite reduction step toward the desirable N2 gas.

Keywords: heterogeneous catalysis, hydrogenation, nanocatalyst, nitrate removal, photocatalysis

Procedia PDF Downloads 272
493 The Background of Ornamental Design Practice: Theory and Practice Based Research on Ornamental Traditions

Authors: Jenna Pyorala

Abstract:

This research looks at the principles and purposes ornamental design has served in the field of textile design. Ornamental designs are characterized by richness of detail, abundance of elements, vegetative motifs and organic forms that flow harmoniously in complex compositions. Research on ornamental design is significant because ornaments have been overlooked and considered less meaningful and aesthetically pleasing than minimalistic, modern designs. This is despite the fact that in many parts of the world ornaments have been an important part of cultural identification and expression for centuries. Ornament has been claimed to be superficial and merely a decorative way to hide the faults of designs. Such a generalization is an incorrect interpretation of the real purposes of ornament. Many ornamental patterns tell stories, present mythological scenes or convey symbolic meanings. Historically, ornamental decorations have represented ideas and characteristics such as abundance, wealth, power and personal magnificence. The production of fine ornaments required refined skill, an eye for intricate detail and perseverance in compiling complex elements into harmonious compositions. For this reason, ornaments have played an important role in the advancement of craftsmanship. Even though it has been claimed that people in the western design world have lost their relationship to ornament, that relationship has merely shifted from the craftsman’s practice to the designer’s conceptualisation. With the help of new technological tools, the production of ornaments has become faster and more efficient, demanding less manual labour. Designers who commit to this style of organic forms and vegetative motifs embrace and respect nature by representing its organically growing forms and by following its principles. 
The complexity of the designs is used as a way to evoke a sense of extraordinary beauty and to stimulate the intellect by freeing the mind from predetermined interpretations. Through the study of these purposes, it can be demonstrated that complex and richer design styles are as valuable a part of the world of design as more modern design approaches. The study highlights the meaning of ornaments by presenting visual examples and literature research findings. The practice-based part of the project is a visual analysis of historical and cultural ornamental traditions such as Indian chikan embroidery, Persian carpets, Art Nouveau, and Rococo, according to a rubric created for the purpose. The next step is the creation of ornamental designs based on the key elements of the different styles. The theoretical and practical parts are woven together in this study, which respects the long traditions of ornament and highlights the importance of these design approaches to the field, in contrast to the more commonly preferred styles.

Keywords: cultural design traditions, ornamental design, organic forms from nature, textile design

Procedia PDF Downloads 226
492 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and land surface emissivity (LSE) measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times (UTC 00:00, 06:00, 12:00, 18:00) per day for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a group of brightness temperatures for the surface-leaving radiance (Tg) is acquired. 
Subsequently, a group of the six parameters of the DTC model is fitted to these Tg by a Levenberg-Marquardt least squares algorithm (denoted as DTC model 1). Although the retrieval error of the WVC and the approximate relationships between WVC and the atmospheric parameters induce some uncertainty, this does not significantly affect the determination of the three parameters td, ts and β in the DTC model (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation). Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With td, ts, and β known, a new DTC model (denoted as DTC model 2) is accurately fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six DTC parameters is thereby generated, and subsequently the Tg at any given time are acquired. Finally, this method was successfully applied to SEVIRI data in channel 9. The results show that the proposed method can be performed reasonably without additional assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
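The Levenberg-Marquardt fit of a two-part DTC model can be sketched with SciPy. The parameterization below follows a common Göttsche-Olesen-style form (daytime cosine, nighttime exponential decay from the attenuation start time ts) and is an assumption, since the abstract's exact six-parameter model and its (td, ts, β) notation are not fully specified:

```python
import numpy as np
from scipy.optimize import curve_fit

def dtc(t, t0, ta, omega, tm, ts, k):
    """Two-part semi-empirical DTC: cosine during the day, exponential
    decay toward t0 after the attenuation start time ts."""
    day = t0 + ta * np.cos(np.pi / omega * (t - tm))
    t_at_ts = t0 + ta * np.cos(np.pi / omega * (ts - tm))  # continuity at ts
    night = t0 + (t_at_ts - t0) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

# synthetic surface-leaving brightness temperatures at 15-min intervals
t = np.arange(0, 24, 0.25)
true_params = (290.0, 12.0, 12.0, 13.0, 17.0, 4.0)
rng = np.random.default_rng(0)
tg = dtc(t, *true_params) + rng.normal(0, 0.2, t.size)

# Levenberg-Marquardt least-squares fit, as the study describes
popt, _ = curve_fit(dtc, t, tg, p0=(288, 10, 11, 12, 16, 3), method='lm')
```

In the study's two-stage scheme, a first fit of this kind would pin down the shape parameters, which are then held while the model is re-fitted to the four atmospherically corrected ECMWF-time observations.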

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 268
491 Evaluation of Yield and Yield Components of Malaysian Palm Oil Board-Senegal Oil Palm Germplasm Using Multivariate Tools

Authors: Khin Aye Myint, Mohd Rafii Yusop, Mohd Yusoff Abd Samad, Shairul Izan Ramlee, Mohd Din Amiruddin, Zulkifli Yaakub

Abstract:

The narrow genetic base is the main obstacle to breeding and genetic improvement in the oil palm industry. In order to broaden the genetic base, the Malaysian Palm Oil Board (MPOB) has extensively collected wild germplasm from 11 African countries: Nigeria, Senegal, Gambia, Guinea, Sierra Leone, Ghana, Cameroon, Zaire, Angola, Madagascar, and Tanzania. The germplasm collections were established and maintained as a field gene bank at the MPOB Research Station in Kluang, Johor, Malaysia to conserve a wide range of oil palm genetic resources for the genetic improvement of the Malaysian oil palm industry. Therefore, assessing the performance and genetic diversity of the wild materials is very important for understanding the genetic structure of natural oil palm populations and for exploring genetic resources. Principal component analysis (PCA) and cluster analysis are very efficient multivariate tools for evaluating the genetic variation of germplasm and have been applied to many crops. In this study, eight populations of the MPOB-Senegal oil palm germplasm were studied to explore the pattern of genetic variation using PCA and cluster analysis. A total of 20 yield and yield-component traits were analyzed with PCA and Ward’s clustering using SAS software version 9.4. The first four principal components, which have eigenvalues >1, accounted for 93% of the total variation, with values of 44%, 19%, 18% and 12%, respectively. PC1 showed the highest positive correlations with fresh fruit bunch (0.315), bunch number (0.321), oil yield (0.317), kernel yield (0.326), total economic product (0.324), and total oil (0.324), while PC2 had the largest positive associations with oil to wet mesocarp (0.397) and oil to fruit (0.458). The oil palm populations were grouped into four distinct clusters based on the 20 evaluated traits, implying that high genetic variation exists among the germplasm. 
Cluster 1 contains two populations, SEN 12 and SEN 10, while cluster 2 has only one population, SEN 3. Cluster 3 consists of three populations, SEN 4, SEN 6, and SEN 7, while SEN 2 and SEN 5 were grouped in cluster 4. Cluster 4 showed the highest mean values of fresh fruit bunch, bunch number, oil yield, kernel yield, total economic product, and total oil, and cluster 1 was characterized by high oil to wet mesocarp and oil to fruit. The desired traits with the largest positive correlations with the extracted PCs could be utilized for the improvement of the oil palm breeding program. Populations from different clusters with the highest cluster means could be used for hybridization. The information from this study can be utilized for the effective conservation and selection of the MPOB-Senegal oil palm germplasm for future breeding programs.
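The multivariate workflow described above (standardize the traits, run PCA and keep components with eigenvalues >1, then group the populations with Ward's hierarchical clustering) can be sketched in Python with scikit-learn and SciPy instead of SAS. The 8 × 20 trait matrix below is random placeholder data standing in for the germplasm measurements:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# hypothetical trait matrix: 8 populations x 20 yield-related traits
rng = np.random.default_rng(42)
X = rng.normal(size=(8, 20))

Xs = StandardScaler().fit_transform(X)  # put all traits on one scale

pca = PCA()
scores = pca.fit_transform(Xs)
# retain components with eigenvalue > 1 (Kaiser criterion, as in the study)
keep = pca.explained_variance_ > 1.0

# Ward's hierarchical clustering, cut into four groups
Z = linkage(Xs, method='ward')
clusters = fcluster(Z, t=4, criterion='maxclust')
```

The retained component loadings (`pca.components_`) play the role of the trait-PC correlations reported in the abstract, and the cluster labels identify which populations fall together, analogous to the SEN groupings above.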

Keywords: cluster analysis, genetic variability, germplasm, oil palm, principal component analysis

Procedia PDF Downloads 164
490 Combat Plastic Entering in Kanpur City, Uttar Pradesh, India Marine Environment

Authors: Arvind Kumar

Abstract:

The city of Kanpur is located in the terrestrial plain area on the bank of the river Ganges and is the second largest city in the state of Uttar Pradesh. The city generates approximately 1400-1600 tons per day of municipal solid waste (MSW). Kanpur has been known as a major point- and non-point-source pollution hotspot for the river Ganges. The city has a major industrial hub, probably the largest in the state, catering to the manufacturing and recycling of plastic and other dry waste streams. There are 4 to 5 major drains flowing across the city, which receive a significant quantity of leaked waste that subsequently joins the Ganges flow and is carried to the Bay of Bengal. A river-to-sea flow approach has been established to account for waste leaked into urban drains, leading to the build-up of marine litter. Throughout its journey, the river accumulates plastic (macro, meso, and micro) from various sources and transports it towards the sea. The Ganges network forms the second-largest plastic-polluting catchment in the world, with over 0.12 million tonnes of plastic discharged into marine ecosystems per year, and is among 14 continental rivers into which over a quarter of global waste is discarded. 3.150 kilotons of plastic waste are generated in Kanpur, out of which 10%-13% of the plastic leaks into the local drains and water flow systems. With the support of the Kanpur Municipal Corporation, a 1 TPD capacity materials recovery facility (MRF) for drain waste management was established at Krishna Nagar, Kanpur, and a German startup, Plastic Fisher, was identified to provide a solution for capturing the drain waste and recycling it sustainably with a circular economy approach. The team at Plastic Fisher conducted joint surveys and identified locations on 3 drains in Kanpur using GIS maps developed during the survey. It suggested putting floating 'boom barriers' across the drains made of a low-cost material, which reduced their cost to only 2000 INR per barrier. 
The project was built on a self-sustaining financial model: a cost-efficient model was developed and adopted for a socially inclusive, self-sustaining operation. The project recommended the use of low-cost floating boom barriers for capturing waste from drains. These involve a one-time cost and have no operational cost. The workers engaged in fishing out the captured, immobilized waste are paid by Plastic Fisher. The captured material is sun-dried and transported to a designated place, where the shed and power connection that serve as the MRF are provided by the city municipal corporation. Material aggregation, baling, and transportation costs to end-users are borne by Plastic Fisher as well.

Keywords: Kanpur, marine environment, drain waste management, plastic fisher

Procedia PDF Downloads 71