Search results for: operational feasibility

470 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives

Authors: Chen Guo, Heng Tang, Ben Niu

Abstract:

Clustering splits objects into different groups based on similarity, making the objects have higher similarity in the same group and lower similarity in different groups. Thus, clustering can be treated as an optimization problem to maximize the intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, the datasets often have some complex characteristics: sparse, overlap, high dimensionality, etc. When facing these datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing one objective. However, except for objective-weighting methods, traditional clustering approaches have difficulty in solving multi-objective data clustering problems. Due to this, evolutionary multi-objective optimization algorithms are investigated by researchers to optimize multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and is then extended into a multi-objective form. Second, two learning strategies are proposed based on the two learning archives to guide the bacterial swarm to move in a better direction. On the one hand, the global best is selected from the global learning archive according to the convergence index and diversity index. On the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. According to the aforementioned learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh impetus to the objects in the two learning archives. When the objects in these two archives do not change for two consecutive iterations, randomly reinitializing one dimension of the objects prevents the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on several evaluation indexes across benchmark datasets. To further verify the effectiveness and feasibility of the designed strategies in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors regarding all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
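As a rough illustration of the two competing objectives such an algorithm optimizes, the sketch below (hypothetical code, not the authors' implementation) computes intra-cluster compactness and inter-cluster separation for a candidate partition; a multi-objective optimizer like DC-MPBFOLA would minimize the first while maximizing the second.

```python
import numpy as np

def clustering_objectives(X, labels, centers):
    """Two competing clustering objectives for a candidate partition.

    Compactness: mean distance of points to their cluster center (minimize).
    Separation: minimum pairwise distance between centers (maximize).
    """
    compactness = np.mean(
        [np.linalg.norm(X[labels == k] - c, axis=1).mean()
         for k, c in enumerate(centers)]
    )
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(centers) for b in centers[i + 1:]]
    separation = min(dists)
    return compactness, separation

# Toy usage: two obvious clusters in 2-D.
X = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], dtype=float)
labels = np.array([0, 0, 1, 1])
centers = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
print(clustering_objectives(X, labels, centers))
```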

Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives

Procedia PDF Downloads 139
469 Sweden’s SARS-CoV-2 Mitigation Failure as a Science and Solutions Principle Case Study

Authors: Dany I. Doughan, Nizam S. Najd

Abstract:

Different governments in today's global pandemic are approaching the challenging and complex issue of mitigating the spread of the SARS-CoV-2 virus differently while simultaneously considering their national economic and operational bottom lines. One of the most notable successes has been Taiwan's multifaceted virus containment approach, which resulted in a substantially lower incidence rate compared to Sweden's chief mitigation tactic of herd immunity. From a classic Swiss Cheese Model perspective, integrating more fail-safe layers of defense against the virus in Taiwan's approach compared to Sweden's meant that in Taiwan, the government did not have to resort to extreme measures like the national lockdown Sweden is currently contemplating. From an optimized virus spread mitigation solution development standpoint using the Solutions Principle, the Taiwanese and Swedish solutions were desirable economically by businesses that remained open and non-economically or socially by individuals who enjoyed fewer disruptions from what they considered normal before the pandemic. Of the two, the Taiwanese approach was more feasible long-term from a workforce management and quality control perspective for healthcare facilities and their professionals, who were able to provide better, longer, more attentive care to the fewer new positive COVID-19 cases. Furthermore, the Taiwanese approach was more applicable as an overall model to emulate, thanks in part to its short-term and long-term multilayered approach, which allows for the kind of flexibility needed by other governments to fully or partially adapt or adopt said model. The Swedish approach, on the other hand, ignored the biochemical nature of the virus and relied heavily on short-term personal behavioral adjustments and conduct modifications, which are not as reliable as establishing required societal norms and awareness programs. The available international data on COVID-19 cases and the published governmental approaches to control the spread of the coronavirus support a better fit into the Solutions Principle of Taiwan's Swiss Cheese Model success story compared to Sweden's.

Keywords: coronavirus containment and mitigation, solutions principle, Swiss Cheese Model, viral mutation

Procedia PDF Downloads 135
468 Charcoal Production from Invasive Species: Suggested Shift for Increased Household Income and Forest Plant Diversity in Nepal

Authors: Kishor Prasad Bhatta, Suman Ghimire, Durga Prasad Joshi

Abstract:

Invasive Alien Species (IAS) are considered waste forest resources in Nepal. The rapid expansion of IAS is one of the nine main drivers of forest degradation, though the extent and distribution of these species are not well known. Further, knowledge of the impact of IAS removal on forest plant diversity is scarce, and the possibilities of income generation from IAS at the grass-roots level are rarely documented. Systematic sampling of 1% with nested circular plots of 500 square meters was performed in IAS-removed and non-removed areas, each of 30 hectares, in Udayapur Community Forest User Group (CFUG), Chitwan, central Nepal, to observe whether the removal of IAS contributed to an increase in plant diversity. In addition, ten entrepreneurs of Udayapur CFUG involved in charcoal production, briquette making, and marketing were interviewed, and their record-keeping booklets were reviewed, to understand whether charcoal production contributed to their income and employment. The average annual precipitation and temperature of the study area are 2,100 mm and 34 °C, respectively, with Shorea robusta as the main tree species and Eupatorium odoratum as the dominant IAS. All the interviewed households were from the 'below-poverty-line' category as per the Community Forestry Guidelines. A higher Shannon-Weiner plant diversity index at the regeneration level was observed in IAS-removed areas (2.43) than in the control site (1.95). Furthermore, the numbers of tree seedlings and saplings in the IAS-harvested blocks were significantly higher (p < 0.005) compared to the unharvested one. The sale of charcoal produced through the pyrolysis of IAS in 'bio-energy kilns' contributed an average income increase of 30.95% (31,000 Nepalese rupees) for the involved households. Despite these benefits, some operational policy hurdles related to charcoal transport and taxation existed at the field level. This study suggests that plant diversity could be increased through the removal of IAS, and considerable economic benefits could be achieved if charcoal is substantially produced and utilized.
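For reference, the Shannon-Weiner index reported above is computed from the proportional abundance of each recorded species; a minimal sketch with made-up regeneration counts follows.

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln(p_i))."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

# Hypothetical regeneration counts per species in a sample plot.
ias_removed = [30, 25, 20, 15, 10, 8, 5]
control     = [60, 20, 10, 5, 3]
print(round(shannon_wiener(ias_removed), 2))  # more even -> higher diversity
print(round(shannon_wiener(control), 2))      # dominated -> lower diversity
```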

Keywords: briquette, economic benefits, pyrolysis, regeneration

Procedia PDF Downloads 278
467 Statistical Correlation between Logging-While-Drilling Measurements and Wireline Caliper Logs

Authors: Rima T. Alfaraj, Murtadha J. Al Tammar, Khaqan Khan, Khalid M. Alruwaili

Abstract:

OBJECTIVE/SCOPE: Caliper logging data provides critical information about wellbore shape and deformations, such as stress-induced borehole breakouts or washouts. Multiarm mechanical caliper logs are often run using wireline, which can be time-consuming, costly, and/or challenging to run in certain formations. To minimize rig time and improve operational safety, it is valuable to develop analytical solutions that can estimate caliper logs using available Logging-While-Drilling (LWD) data without the need to run wireline caliper logs. As a first step, the objective of this paper is to perform statistical analysis using an extensive dataset to identify important physical parameters that should be considered in developing such analytical solutions. METHODS, PROCEDURES, PROCESS: Caliper logs and LWD data of eleven wells, with a total of more than 80,000 data points, were obtained and imported into a data analytics software for analysis. Several parameters were selected to test their relationship with the measured maximum and minimum caliper logs. These parameters include gamma ray, porosity, shear and compressional sonic velocities, bulk density, and azimuthal density. The data of the eleven wells were first visualized and cleaned. Using the analytics software, several analyses were then performed, including the computation of Pearson's correlation coefficients to show the statistical relationship between the selected parameters and the caliper logs. RESULTS, OBSERVATIONS, CONCLUSIONS: The results of this statistical analysis showed that some parameters correlate well with the caliper log data. For instance, the bulk density and azimuthal directional densities showed Pearson's correlation coefficients in the range of 0.39 to 0.57, which were relatively high compared to the correlation coefficients of the caliper data with other parameters. Other parameters, such as porosity, exhibited extremely low correlation coefficients to the caliper data. Various crossplots and visualizations of the data were also produced to gain further insights from the field data. NOVEL/ADDITIVE INFORMATION: This study offers a unique and novel look into the relative importance and correlation between different LWD measurements and wireline caliper logs via an extensive dataset. The results pave the way for a more informed development of new analytical solutions for estimating the size and shape of the wellbore in real time while drilling using LWD data.
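A minimal sketch of the core computation, assuming the merged LWD/caliper data sits in a pandas DataFrame (the column names and synthetic values below are illustrative, not the study's data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-in for the merged LWD/caliper dataset (column names assumed).
bulk_density = rng.normal(2.45, 0.1, n)
caliper_max = 8.5 - 1.2 * bulk_density + rng.normal(0, 0.15, n)  # correlated
porosity = rng.normal(0.18, 0.04, n)                             # uncorrelated

logs = pd.DataFrame({
    "CALIPER_MAX": caliper_max,
    "BULK_DENSITY": bulk_density,
    "POROSITY": porosity,
    "GAMMA_RAY": rng.normal(75, 20, n),
})

# Pearson's correlation of each LWD curve against the caliper log.
print(logs.corr(method="pearson")["CALIPER_MAX"].drop("CALIPER_MAX"))
```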

Keywords: LWD measurements, caliper log, correlations, analysis

Procedia PDF Downloads 121
466 Long-Term Resilience Performance Assessment of Dual and Singular Water Distribution Infrastructures Using a Complex Systems Approach

Authors: Kambiz Rasoulkhani, Jeanne Cole, Sybil Sharvelle, Ali Mostafavi

Abstract:

Dual water distribution systems have been proposed as solutions to enhance the sustainability and resilience of urban water systems by improving performance and decreasing energy consumption. The objective of this study was to evaluate the long-term resilience and robustness of dual water distribution systems versus singular water distribution systems under various stressors such as demand fluctuation, aging infrastructure, and funding constraints. To this end, the long-term dynamics of these infrastructure systems were captured using a simulation model that integrates institutional agency decision-making processes with physical infrastructure degradation to evaluate the long-term transformation of water infrastructure. A set of model parameters that vary between dual and singular distribution infrastructure based on system attributes, such as pipe length and material, energy intensity, water demand, water price, average pressure and flow rate, as well as operational expenditures, was considered and input into the simulation model. Accordingly, the model was used to simulate various scenarios of demand changes, funding levels, water price growth, and renewal strategies. The long-term resilience and robustness of each distribution infrastructure were evaluated based on various performance measures, including network average condition, break frequency, network leakage, and energy use. An ecologically-based resilience approach was used to examine regime shifts and tipping points in the long-term performance of the systems under different stressors. Also, Classification and Regression Tree (CART) analysis was adopted to assess the robustness of each system under various scenarios. Using data from the City of Fort Collins, the long-term resilience and robustness of the dual and singular water distribution systems were evaluated over a 100-year analysis horizon for various scenarios. The results of the analysis enabled: (i) comparison between dual and singular water distribution systems in terms of long-term performance, resilience, and robustness; and (ii) identification of renewal strategies and decision factors that enhance the long-term resilience and robustness of dual and singular water distribution systems under different stressors.
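A minimal sketch of the robustness step, assuming simulation scenarios are labeled robust/non-robust and a classification tree is used to find the decision factors that separate them (the variable names and thresholds below are invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 500
# Hypothetical scenario inputs and a robustness label from the simulation.
demand_growth = rng.uniform(-0.01, 0.03, n)   # fraction per year
funding_level = rng.uniform(0.5, 1.5, n)      # relative to baseline
renewal_rate  = rng.uniform(0.005, 0.03, n)   # share of pipes renewed per year
robust = ((funding_level > 0.9) & (renewal_rate > 0.012)).astype(int)

X = np.column_stack([demand_growth, funding_level, renewal_rate])
tree = DecisionTreeClassifier(max_depth=3).fit(X, robust)

# The printed rules expose which scenario variables drive robustness.
print(export_text(tree, feature_names=[
    "demand_growth", "funding_level", "renewal_rate"]))
```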

Keywords: complex systems, dual water distribution systems, long-term resilience performance, multi-agent modeling, sustainable and resilient water systems

Procedia PDF Downloads 292
465 The Role of Strategic Metals in Cr-Al-Pt-V Composition of Protective Bond Coats

Authors: A. M. Pashayev, A. S. Samedov, T. B. Usubaliyev, N. Sh. Yusifov

Abstract:

Different types of coating technologies are widely used for gas turbine blades. Thermal barrier coatings, consisting of a ceramic top coat, a thermally grown oxide, and a metallic bond coat, are used in applications for thermal protection of hot-section components in gas turbine engines. The operational characteristics and longevity of high-temperature turbine blades substantially depend on the right choice of composition of the protective thermal barrier coating. When selecting the composition of a coating and the content of its basic elements, the following factors must be considered: minimal differences between the coefficients of thermal expansion of the elements; the level of working temperatures and the composition of the oxidizing environment, which define the conditions for the formation of protective layers; the intensity of diffusion processes and the rate of degradation of the protective properties of the elements; the extent of influence on the fatigue durability of components during operation; and the use of elements with high thermal stability and satisfactory resistance to gas corrosion, as well as density, hardness, thermal conductivity, and other physical characteristics. When forecasting and choosing a thermal barrier coating composition, not all of the above factors can be considered simultaneously, as some of these characteristics can only be determined experimentally. The studies and investigations carried out show that one of the main failure modes of coatings used on gas turbine blades is related to not fully taking the physical-chemical features of the elements into consideration when determining the composition of the alloys. This leads to the formation of a more complex spatial structure whose composition also changes chaotically within certain concentration intervals, which does not promote the thermal and structural stability of the coating. For the purpose of increasing the thermal and structural resistance of gas turbine blade coatings, a new approach to forecasting the composition is offered, based on the analysis of the physical-chemical characteristics of the alloys, taking into account the size factor, electron configuration, type of crystal lattice, and the Darken-Gurry method. As a result of calculations and experimental investigations, a new four-component chromium-based metallic bond coat is offered for gas turbine blades.
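A minimal sketch of a Darken-Gurry-style screen for a chromium-based bond coat, assuming the common rule of thumb that solid solubility is favorable when atomic radii differ by less than about 15% and Pauling electronegativities by less than about 0.4 (the element values are approximate literature figures; the authors' actual criteria may differ):

```python
# Darken-Gurry screening: a solute is considered likely soluble in a solvent
# if its atomic radius is within ~15% and its Pauling electronegativity
# within ~0.4 units of the solvent's. Values are approximate literature data.
elements = {            # (metallic radius in pm, Pauling electronegativity)
    "Cr": (128, 1.66),
    "Al": (143, 1.61),
    "Pt": (139, 2.28),
    "V":  (134, 1.63),
}

solvent_r, solvent_en = elements["Cr"]  # chromium-based bond coat
for name, (r, en) in elements.items():
    size_ok = abs(r - solvent_r) / solvent_r <= 0.15
    en_ok = abs(en - solvent_en) <= 0.4
    print(f"{name}: size {'ok' if size_ok else 'unfavorable'}, "
          f"electronegativity {'ok' if en_ok else 'unfavorable'}")
```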

Keywords: gas turbine blades, thermal barrier coating, metallic bond coat, strategic metals, physical-chemical features

Procedia PDF Downloads 315
464 An Effective Preventive Program of HIV/AIDS among Hill Tribe Youth, Thailand

Authors: Tawatchai Apidechkul

Abstract:

This operational research was divided into two phases: the first phase, which aimed to determine risk behaviors, used a cross-sectional study design, followed by a community participatory research design to develop the HIV/AIDS preventive model among Akha youths. The instruments comprised questionnaires and assessment forms that were tested for validity and reliability before use. The study setting was the Jor Pa Ka and Saen Suk Akha villages, Mae Chan District, Chiang Rai, Thailand. The study sample comprised Akha youths living in the villages. Means and the chi-square test were used for statistical testing. Results: Akha youths in the population mobilization villages live in low-income agricultural families amid the circulation of narcotic drugs. The average age was 16 (50.00%), 51.52% were Christian, 48.80% had completed secondary school, and 43.94% had an annual family income of 30,000-40,000 baht. Among males, 54.54% drank, 39.39% smoked, 7.57% used amphetamine, first sexual intercourse was reported at 14 years old, 50.00% had 2-5 partners, and 62.50% had unprotected sex (no condom). Reasons for unprotected sex included not being able to find a condom, unawareness of the need to use condoms, and dislike of condoms. 28.79% had never received STI-related information, and 6.06% had an STI. Among females, 15.15% drank, 28.79% had had sexual intercourse with first sexual intercourse before 15 years of age, 40.00% had unprotected sex (no condom), 10.61% had never received STI-related information, and 4.54% had an STI. The HIV/AIDS preventive model contained two components. Peer groups among the youths were built around interests in sports. Improving knowledge would empower their capability and lead to choices that would result in HIV/AIDS prevention. The empowering model consisted of 4 courses: a. the human reproductive system and its hygiene; b. risk-avoidance skills, family planning, and counseling techniques; c. HIV/AIDS and other STIs; d. drugs and related laws and regulations. The results of the activities showed that youths had greater knowledge and attitude levels for HIV/AIDS prevention, with statistical significance (χ²-test = 12.87, p-value = 0.032 and χ²-test = 9.31, p-value < 0.001, respectively). A continuous and innovative youth capability development program is the appropriate process to reduce the spread of HIV/AIDS among youths, particularly in populations with a distinct language and culture.
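A minimal sketch of the pre/post comparison reported above, assuming knowledge levels are tabulated in a contingency table (the counts below are invented; only the test procedure mirrors the abstract):

```python
from scipy.stats import chi2_contingency

# Hypothetical pre/post contingency table of knowledge levels
# (rows: before/after the program; columns: low/medium/high).
table = [[30, 25, 11],
         [12, 24, 30]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```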

Keywords: HIV/AIDS, preventive program, effective, hill tribe

Procedia PDF Downloads 370
463 Library Outreach After COVID: Making the Case for In-Person Library Visits

Authors: Lucas Berrini

Abstract:

Academic libraries have always struggled to engage with students and faculty. Striking the balance between what the community needs and what the library can afford has also been a point of contention for libraries. As academia begins to return to a new normal after COVID, library staff are rethinking how to remind patrons that the library is open and ready for business. NC Wesleyan, a small liberal arts school in eastern North Carolina, decided to be proactive and reach out to the academic community. After shutting down in 2020 for COVID, the campus library saw a marked decrease in in-person attendance. For a small school whose operational budget was tied directly to tuition payments, it was imperative for the library to remind faculty and staff that it was open for business. At the beginning of the Summer 2022 term and continuing into the fall, the reference team created a marketing plan using email, physical meetings, and virtual events targeted at students and faculty, as well as community members who utilized the facilities prior to COVID. The email blasts were gentle reminders that the building was open and available for use. The target audience was the community at large. Several of the emails contained reminders of previous events in the library that were student-centered. The next phase of the email campaign centers on reminding the community about the library's physical and electronic resources, including the makerspace lab. The language indicates that student voices are needed, and a QR code is included for students to leave feedback on what they want to see in the library. The final phase of the email blasts was faculty-focused and invited faculty to connect with library reference staff for an in-person consultation on their research needs. While this phase is ongoing, the response has been positive, and staff are compiling data in hopes of working with administration to implement some of the requested services and materials. These email blasts will be followed up by in-person meetings with faculty and students who responded to the QR codes. This research is ongoing. This type of targeted outreach is new for Wesleyan. It is the hope of the library that by the end of Fall 2022, there will be a plan in place to address the needs and concerns of the students and faculty. Furthermore, the staff hope to create a new sense of community for the students and staff of the university.

Keywords: academic, education, libraries, outreach

Procedia PDF Downloads 94
462 Glycerol-Based Bio-Solvents for Organic Synthesis

Authors: Dorith Tavor, Adi Wolfson

Abstract:

In the past two decades a variety of green solvents have been proposed, including water, ionic liquids, fluorous solvents, and supercritical fluids. However, their implementation in industrial processes is still limited due to their tedious and non-sustainable synthesis, lack of experimental data and familiarity, as well as operational restrictions and high cost. Several years ago we presented, for the first time, the use of glycerol-based solvents as alternative sustainable reaction mediums in both catalytic and non-catalytic organic synthesis. Glycerol is the main by-product from the conversion of oils and fats in oleochemical production. Moreover, in the past decade, its price has substantially decreased due to an increase in supply from the production and use of fatty acid derivatives in the food, cosmetics, and drugs industries and in biofuel synthesis, i.e., biodiesel. The renewable origin, beneficial physicochemical properties and reusability of glycerol-based solvents, enabled improved product yield and selectivity as well as easy product separation and catalyst recycling. Furthermore, their high boiling point and polarity make them perfect candidates for non-conventional heating and mixing techniques such as ultrasound- and microwave-assisted reactions. Finally, in some reactions, such as catalytic transfer-hydrogenation or transesterification, they can also be used simultaneously as both solvent and reactant. In our ongoing efforts to design a viable protocol that will facilitate the acceptance of glycerol and its derivatives as sustainable solvents, pure glycerol and glycerol triacetate (triacetin) as well as various glycerol-triacetin mixtures were tested as sustainable solvents in several representative organic reactions, such as nucleophilic substitution of benzyl chloride to benzyl acetate, Suzuki-Miyaura cross-coupling of iodobenzene and phenylboronic acid, baker’s yeast reduction of ketones, and transfer hydrogenation of olefins. It was found that reaction performance was affected by the glycerol to triacetin ratio, as the solubility of the substrates in the solvent determined product yield. Thereby, employing optimal glycerol to triacetin ratio resulted in maximum product yield. In addition, using glycerol-based solvents enabled easy and successful separation of the products and recycling of the catalysts.

Keywords: glycerol, green chemistry, sustainability, catalysis

Procedia PDF Downloads 624
461 Poultry in Motion: Text Mining Social Media Data for Avian Influenza Surveillance in the UK

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Background: Avian influenza, more commonly known as bird flu, is a viral zoonotic respiratory disease stemming from various species of poultry, including pets and migratory birds. Researchers have purported that the accessibility of health information online, in addition to the low-cost data collection methods the internet provides, has revolutionized the methods in which epidemiological and disease surveillance data are utilized. This paper examines the feasibility of using internet data sources, such as Twitter and livestock forums, for the early detection of avian flu outbreaks, through the use of text mining algorithms and social network analysis. Methods: Social media mining was conducted on Twitter over the period 01/01/2021 to 31/12/2021 via the Twitter API in Python. The results were filtered first by hashtags (#avianflu, #birdflu), then by word occurrences (avian flu, bird flu, H5N1), and refined further by location to include only those results from within the UK. Analysis was conducted on this text in a time-series manner to determine keyword frequencies, and topic modeling was applied to uncover insights in the text prior to a confirmed outbreak. Further analysis was performed by examining clinical signs (e.g., swollen head, blue comb, dullness) within the time series prior to the confirmed avian flu outbreak by the Animal and Plant Health Agency (APHA). Results: The increase in Google search results and avian flu-related tweets showed a correlation in time with the confirmed cases. Topic modeling uncovered clusters of word occurrences relating to livestock biosecurity, disposal of dead birds, and prevention measures. Conclusions: Text mining social media data can prove useful for analysing discussed topics for epidemiological surveillance purposes, especially given the lack of applied research in the veterinary domain. However, the small sample size of tweets for certain weekly time periods makes it difficult to provide statistically plausible results, and there is a great amount of textual noise in the data.
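A minimal sketch of the keyword-filtering and weekly-frequency step, applied to already-collected tweets (the rows, column names, and keyword list below are illustrative, not the study's corpus):

```python
import pandas as pd

# Hypothetical pre-collected tweets (the study used the Twitter API).
tweets = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2021-10-26", "2021-10-27", "2021-11-02", "2021-11-03", "2021-11-03"]),
    "text": [
        "Worried about bird flu near the coast",
        "Another H5N1 case reported in Scotland",
        "Swollen head and dullness in two hens #avianflu",
        "Biosecurity reminder for poultry keepers",
        "Blue comb spotted, suspecting avian flu",
    ],
})

keywords = ["avian flu", "bird flu", "h5n1", "#avianflu", "#birdflu"]
mask = tweets["text"].str.lower().str.contains("|".join(keywords), regex=True)

# Weekly keyword-frequency series, the signal compared against APHA dates.
weekly = tweets[mask].resample("W", on="created_at").size()
print(weekly)
```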

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, avian influenza, social media

Procedia PDF Downloads 105
460 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical Discharge Machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals, between an electrode tool and the part to be machined, immersed in dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time, and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. RSM involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions. RSM is not free from problems when it is applied to multi-factor and multi-response situations. The design of experiments (DOE) technique was used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining (EDM) process and to investigate the feasibility of design of experiments techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of the optimized settings of key machining factors, such as pulse-on time, gap voltage, flushing pressure, input current, and duty cycle, on the material removal rate and surface roughness was carried out using central composite design. The objective is to maximize the material removal rate (MRR). Central composite design data are used to develop second-order polynomial models with interaction terms. Insignificant coefficients are eliminated from these models using Student's t-test, and the F-test is used to check goodness of fit. CCD is first used to determine the optimal factors of the electro-discharge machining (EDM) process for maximizing the MRR. The responses are further treated through an objective function to establish the set of key machining factors that satisfies the optimization problem of the electro-discharge machining (EDM) process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the electro-discharge machining (EDM) process.
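A minimal sketch of the RSM step, fitting a second-order polynomial with interaction terms to coded CCD factors (the factor names, coefficients, and data below are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)
# Hypothetical coded CCD factors: pulse-on time, gap voltage, current.
X = rng.uniform(-1, 1, size=(30, 3))
# Synthetic MRR response with curvature and an interaction term.
mrr = (5 + 2.0 * X[:, 0] + 1.5 * X[:, 2] - 1.2 * X[:, 0] ** 2
       + 0.8 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 30))

poly = PolynomialFeatures(degree=2, include_bias=False)
Xq = poly.fit_transform(X)  # linear, interaction, and squared terms
model = LinearRegression().fit(Xq, mrr)

# Each fitted coefficient corresponds to one term of the quadratic model;
# in a full RSM workflow, insignificant terms would then be dropped by t-test.
for name, coef in zip(poly.get_feature_names_out(
        ["t_on", "voltage", "current"]), model.coef_):
    print(f"{name:18s} {coef:+.3f}")
```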

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 341
459 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal

Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader

Abstract:

DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending the range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform harnessed with DigiMesh wireless network technology characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of Electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as Raspberry Pi, E-Health Sensor Shield, and Xbee DigiMesh modules. The platform is composed of multiple ECG acquisition devices labeled as Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (Sink Node). Two communication approaches are proposed: Single-hop and multi-hop. In the Single-hop approach, ECG signals are directly transmitted from a sensor node to the sink node through the XBee3 DigiMesh RF Module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSN). In the multi-hop approach, two sensor nodes communicate with the server (Sink Node) in a star configuration. This setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both Single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in Single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.
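A minimal sketch of what the sink node's acquisition loop might look like, assuming the XBee modules run in transparent mode and each sensor node sends one comma-separated sample per line (the port name, baud rate, ADC scaling, and frame format are all assumptions, not the authors' protocol):

```python
import serial  # pyserial

# Assumptions: the XBee DigiMesh module is in transparent mode on /dev/ttyUSB0
# and each sensor node sends lines like "node1,512" (node id, ADC sample).
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        node_id, raw = line.split(",")
        # Scale a 10-bit ADC reading to millivolts (3.3 V reference assumed).
        mv = int(raw) * 3300 / 1023
        print(f"{node_id}: {mv:.1f} mV")
```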

Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform

Procedia PDF Downloads 79
458 Synthesis, Characterization and Photocatalytic Applications of Ag-Doped-SnO₂ Nanoparticles by Sol-Gel Method

Authors: M. S. Abd El-Sadek, M. A. Omar, Gharib M. Taha

Abstract:

In recent years, photocatalytic degradation of various kinds of organic and inorganic pollutants using semiconductor powders as photocatalysts has been extensively studied. Owing to their relatively high photocatalytic activity, biological and chemical stability, low cost, non-toxicity, and long stable life, tin oxide materials have been widely used as catalysts in chemical reactions, including the synthesis of vinyl ketone, oxidation of methanol, and so on. Tin oxide (SnO₂), with a rutile-type crystalline structure, is an n-type wide-band-gap (3.6 eV) semiconductor that presents a proper combination of chemical, electronic, and optical properties that make it advantageous in several applications. In the present work, SnO₂ nanoparticles were synthesized at room temperature by the sol-gel process and thermohydrolysis of SnCl₂ in isopropanol, with the crystallite size controlled through calcination. The synthesized nanoparticles were characterized using XRD, TEM, FT-IR, and UV-visible spectroscopic techniques. The crystalline structure and grain size of the synthesized samples were analyzed by X-ray diffraction (XRD), and the XRD patterns confirmed the presence of the tetragonal SnO₂ phase. In this study, methylene blue degradation was tested using SnO₂ nanoparticles (calcined at different temperatures) as a photocatalyst under sunlight as the source of irradiation. The results showed that the highest percentage of degradation of methylene blue dye was obtained using the SnO₂ photocatalyst calcined at 800 °C. The operational parameters were investigated and optimized to determine the best conditions for complete removal of organic pollutants from aqueous solution. It was found that the degradation of dyes depends on several parameters, such as irradiation time, initial dye concentration, the dose of the catalyst, and the presence of metals such as silver as a dopant and its concentration. Percent degradation increased with irradiation time. The degradation efficiency decreased as the initial concentration of the dye increased. The degradation efficiency increased as the dose of the catalyst increased up to a certain level; on further increasing the SnO₂ photocatalyst dose, the degradation efficiency decreased. The degradation efficiency obtained from pure SnO₂ was compared with that of SnO₂ doped with different percentages of Ag.
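A minimal sketch of the degradation-efficiency calculation implied above, using the Beer-Lambert proportionality between absorbance and dye concentration (the absorbance readings are invented):

```python
# Photocatalytic degradation efficiency from absorbance readings, using
# the Beer-Lambert proportionality between absorbance and concentration.
def degradation_efficiency(a0, at):
    """Percent degradation = (C0 - Ct) / C0 * 100 ~ (A0 - At) / A0 * 100."""
    return (a0 - at) / a0 * 100.0

# Hypothetical methylene blue absorbance at 664 nm over irradiation time.
a0 = 1.85
for minutes, at in [(30, 1.21), (60, 0.74), (120, 0.22)]:
    print(f"{minutes:4d} min: {degradation_efficiency(a0, at):5.1f} % degraded")
```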

Keywords: SnO₂ nanoparticles, sol-gel method, photocatalytic applications, methylene blue, degradation efficiency

Procedia PDF Downloads 152
457 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit

Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira

Abstract:

Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, even when thorough homogenization is carried out before feeding the processing plants, operation with high variability in quality and performance is expected. These results could be improved and standardized if the blend composition parameters that most influence the processing route are determined and the types of raw materials are then grouped by those parameters, finally producing a reference set of operational settings for each group. Associating the physical and chemical parameters of a unit operation with a benchmark, or even an optimal reference, of metallurgical recovery and product quality is reflected in reduced production costs, optimized use of the mineral resource, and greater stability in the subsequent processes of the production chain that use the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with Machine Learning algorithms for grouping the raw material (ore) and associating the groups with reference variables in the process benchmark, is a reasonable alternative for the standardization and improvement of mineral processing units. Clustering methods based on Decision Trees and K-Means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to obtain the outputs of the created algorithm. The results were measured through the average time of adjustment and stabilization of the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured. The results were promising, with a reduction in the adjustment and stabilization time when starting to process a new ore pile, as well as in the time to reach the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and life optimization of the mineral deposit.
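A minimal sketch of the grouping-plus-benchmark idea, assuming blend characterization data per ore pile and a historical recovery figure (the column names and values below are invented; the study also used Decision Trees and team-defined criteria):

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Hypothetical blend characterization: grade, P2O5, BaO, particle size index.
ore = pd.DataFrame(rng.normal(size=(300, 4)),
                   columns=["nb_grade", "p2o5", "bao", "psi"])
recovery = rng.uniform(55, 80, 300)  # metallurgical recovery per pile (%)

X = StandardScaler().fit_transform(ore)
ore["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
ore["recovery"] = recovery

# Benchmark per cluster: the best historical recovery defines the reference
# operating settings for future piles assigned to that cluster.
print(ore.groupby("cluster")["recovery"].agg(["mean", "max"]))
```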

Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing

Procedia PDF Downloads 143
456 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, the existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects and imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of the other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
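A minimal sketch of a distance-based fuzzy membership assignment of the kind FSVM-style models use to down-weight noisy samples (the exact membership function in PFRSVM is kernel-based; this simplified Euclidean version is for illustration only):

```python
import numpy as np

def fuzzy_memberships(X, y, delta=1e-3):
    """Distance-based fuzzy membership for FSVM-style training.

    Samples far from their class center (likely noise/outliers) receive
    low membership and thus contribute less to the margin penalty.
    """
    m = np.empty(len(y), dtype=float)
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        m[idx] = 1.0 - d / (d.max() + delta)
    return m

# Toy check: the outlier at (9, 9) in class 0 gets the smallest weight.
X = np.array([[0, 0], [0.5, 0.2], [9, 9], [5, 5], [5.2, 4.9]], dtype=float)
y = np.array([0, 0, 0, 1, 1])
print(fuzzy_memberships(X, y).round(3))
```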

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 490
455 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, which is the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression is the problem to solve, while in the second dataset, it is variety classification. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
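As one concrete example of the chemometric preprocessing mentioned above, a minimal sketch of the standard normal variate (SNV) transform follows (synthetic spectra; the wavelength grid is omitted):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: per-spectrum centering and scaling.

    spectra: (n_samples, n_wavelengths) array of NIR reflectance values.
    Removes additive baseline shifts and multiplicative scatter effects.
    """
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Toy spectra with different baselines collapse to the same shape after SNV.
raw = np.array([[0.2, 0.4, 0.6, 0.8],
                [1.2, 1.4, 1.6, 1.8]])
print(snv(raw))  # both rows identical after SNV
```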

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 99
454 Analysis and Modeling of Graphene-Based Percolative Strain Sensor

Authors: Heming Yao

Abstract:

Graphene-based percolative strain gauges could find applications in many places, such as touch panels, artificial skins, or human motion detection, because of their advantages over conventional strain gauges, such as flexibility and transparency. These strain gauges rely on a novel sensing mechanism that depends on strain-induced morphology changes. Once a compression or tension strain is applied to graphene-based percolative strain gauges, the overlap area between neighboring flakes becomes smaller or larger, which is reflected by a considerable change in resistance. A tiny strain change on a graphene-based percolative strain sensor can thus act as an important lever that tremendously increases the resistance of the sensor, equipping graphene-based percolative strain gauges with a higher gauge factor. Despite ongoing research into the underlying sensing mechanism and the limits of sensitivity, no suitable understanding has been obtained of which intrinsic factors play the key role in adjusting the gauge factor, nor of how the strain gauge sensitivity can be enhanced; such an understanding would be considerably meaningful and would provide a guideline for designing novel, easily produced strain sensors with a high gauge factor. We here simulated the strain process by modeling graphene flakes and their percolative networks. We constructed a 3D resistance network by simulating the overlapping process of graphene flakes and interconnecting a tremendous number of resistance elements, which were obtained by fractionizing each piece of graphene. As strain increased, the overlapping flakes were dislocated on the newly stretched simulated film, and a new simulated resistance network was formed with a smaller flake number density. By solving the resistance network, we obtained the resistance of the simulated film under different strains. Furthermore, by simulating the possible variable parameters, such as out-of-plane resistance, in-plane resistance, and flake size, we obtained the changing tendency of the gauge factor with all these variable parameters. By comparison with the experimental data, we verified the feasibility of our model and analysis. Increasing the out-of-plane resistance of the graphene flakes and the initial resistance of the sensor, based on the flake network, both improved the gauge factor of the sensor, while a smaller graphene flake size gave a greater gauge factor. This work can not only serve as a guideline to improve the sensitivity and applicability of graphene-based strain sensors in the future, but also provides a method to find the limit of the gauge factor for strain sensors based on graphene flakes. Besides, our method can easily be transferred to predict the gauge factor of strain sensors based on other nano-structured transparent optical conductors, such as nanowires and carbon nanotubes, or their hybrids with graphene flakes.
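A minimal sketch of the network-solving step, computing the equivalent resistance of a small resistor network by nodal analysis (the toy topology stands in for the far larger flake-junction networks described above):

```python
import numpy as np

def network_resistance(n_nodes, edges, src, dst):
    """Equivalent resistance of a resistor network via nodal analysis.

    edges: list of (node_i, node_j, resistance_ohms).
    Injects 1 A at src, grounds dst, and solves G v = i for node voltages.
    """
    G = np.zeros((n_nodes, n_nodes))
    for i, j, r in edges:
        g = 1.0 / r
        G[i, i] += g
        G[j, j] += g
        G[i, j] -= g
        G[j, i] -= g
    # Ground the destination node by deleting its row/column.
    keep = [k for k in range(n_nodes) if k != dst]
    current = np.zeros(n_nodes)
    current[src] = 1.0
    v = np.linalg.solve(G[np.ix_(keep, keep)], current[keep])
    return v[keep.index(src)]  # V = R for a 1 A injection

# Toy percolative patch: two overlapping-flake paths between electrodes.
edges = [(0, 1, 100.0), (1, 3, 100.0),   # path A (two flake junctions)
         (0, 2, 150.0), (2, 3, 150.0)]   # path B
print(network_resistance(4, edges, src=0, dst=3))  # 200 || 300 = 120 ohms
```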

Keywords: graphene, gauge factor, percolative transport, strain sensor

Procedia PDF Downloads 416
453 Rapid Plasmonic Colorimetric Glucose Biosensor via Biocatalytic Enlargement of Gold Nanostars

Authors: Masauso Moses Phiri

Abstract:

Frequent glucose monitoring is essential to the management of diabetes. Plasmonic enzyme-based glucose biosensors have the advantages of greater specificity, simplicity, and rapidity. The aim of this study was to develop a rapid plasmonic colorimetric glucose biosensor based on the biocatalytic enlargement of gold nanostars (AuNS) guided by glucose oxidase (GOx). Gold nanoparticles of 18 nm in diameter were synthesized using the citrate method. Using these as seeds, a modified seeded-growth method for the synthesis of monodispersed gold nanostars was followed. Both the spherical and star-shaped nanoparticles were characterized using ultraviolet-visible spectroscopy, agarose gel electrophoresis, dynamic light scattering, high-resolution transmission electron microscopy, and energy-dispersive X-ray spectroscopy. The feasibility of a plasmonic colorimetric assay based on the growth of AuNS by silver coating in the presence of hydrogen peroxide was investigated through several control and optimization experiments. Conditions for excellent sensing, such as the concentration of the detection solution in the presence of 20 µL AuNS, 10 mM 2-(N-morpholino)ethanesulfonic acid (MES), ammonia, and hydrogen peroxide, were optimized. Using the optimized conditions, the glucose assay was developed by adding 5 mM GOx to the solution together with varying concentrations of glucose. Kinetic readings as well as color changes were observed. The results showed that the absorption of the AuNS blue-shifted and the absorbance values increased as the concentration of glucose was elevated. Control experiments indicated no growth of AuNS in the absence of GOx, glucose, or molecular O₂. Increased glucose concentration led to enhanced growth of AuNS. The detection of glucose was also possible by naked eye, with the color development near complete in approximately 10 minutes. The kinetic readings, which were monitored at 450 and 560 nm, showed that the assay could discriminate between different concentrations of glucose by approximately 50 seconds and was near complete at approximately 120 seconds. A calibration curve for the quantitative measurement of glucose was derived. The magnitude of the wavelength shifts and the absorbance values increased concomitantly with glucose concentration until 90 µg/mL, beyond which they leveled off. Glucose concentrations in the range of 10-90 µg/mL produced a blue shift in the localized surface plasmon resonance (LSPR) absorption maxima. The limit of detection was 0.12 µg/mL. This enabled the construction of a direct, sensitive plasmonic colorimetric glucose assay using AuNS that was rapid, sensitive, and cost-effective, with naked-eye detection. It has great potential for technology transfer to point-of-care devices.
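A minimal sketch of how a calibration slope and a limit of detection could be extracted from such data, using the common 3.3*sigma/slope estimate (all numbers below are invented, not the study's measurements):

```python
import numpy as np

# Hypothetical calibration data: LSPR peak shift (nm) vs glucose (ug/mL).
conc = np.array([10, 20, 40, 60, 80, 90], dtype=float)
shift = np.array([1.1, 2.3, 4.2, 6.4, 8.3, 9.2])

slope, intercept = np.polyfit(conc, shift, 1)
sigma_blank = 0.04  # assumed standard deviation of blank measurements (nm)

# Common LOD estimate: 3.3 * sigma_blank / slope.
lod = 3.3 * sigma_blank / slope
print(f"slope = {slope:.3f} nm per ug/mL, LOD ~ {lod:.2f} ug/mL")
```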

Keywords: colorimetric, gold nanostars, glucose, glucose oxidase, plasmonic

Procedia PDF Downloads 152
452 Internet-Delivered Cognitive Behaviour Therapy for Depression Comorbid with Diabetes: Preliminary Findings

Authors: Lisa Robins, Jill Newby, Kay Wilhelm, Therese Fletcher, Jessica Smith, Trevor Ma, Adam Finch, Lesley Campbell, Jerry Greenfield, Gavin Andrews

Abstract:

Background: Depression treatment for people living with depression comorbid with diabetes is of critical importance for improving quality of life and diabetes self-management; however, depression remains under-recognised and under-treated in this population. Cost-effective and accessible forms of depression treatment that can enhance the delivery of mental health services in routine diabetes care are needed. Provision of internet-delivered Cognitive Behaviour Therapy (iCBT) provides a promising way to deliver effective depression treatment to people with diabetes. Aims: To explore the outcomes of a clinician-assisted iCBT program for people with comorbid Major Depressive Disorder (MDD) and diabetes compared to those who remain under usual care. The main hypotheses are that: (1) participants in the treatment group will show a significant improvement on disorder-specific measures (Patient Health Questionnaire; PHQ-9) relative to those in the control group; (2) participants in the treatment group will show a decrease in diabetes-related distress relative to those in the control group. This study will also examine: (1) the effect of iCBT for MDD on disability (as measured by the SF-12 and SDS) and general distress (as measured by the K10), and (2) the feasibility of these treatments in terms of acceptability to diabetes patients and practicality for clinicians (as measured by the Credibility/Expectancy Questionnaire; CEQ). We hypothesise that associated disability and general distress will reduce, and that patients with comorbid MDD and diabetes will rate the program as acceptable. Method: Recruit 100 people with MDD comorbid with diabetes (either Type 1 or Type 2), and randomly allocate them to iCBT (over 10 weeks) or treatment as usual (TAU) for 10 weeks, followed by iCBT. Measure pre- and post-intervention MDD severity, anxiety, diabetes-related distress, distress, disability, HbA1c, lifestyle, adherence, and satisfaction with the clinician's input and the treatment. Results: Preliminary results comparing MDD symptom levels, anxiety, diabetes-specific distress, distress, disability, HbA1c levels, and lifestyle factors from baseline to the conclusion of treatment will be presented, as well as data on adherence to the lessons, homework downloads, satisfaction with the clinician's input, and satisfaction with the mode of treatment generally.

Keywords: cognitive behaviour therapy, depression, diabetes, internet

Procedia PDF Downloads 489
451 The Influence of Infiltration and Exfiltration Processes on Maximum Wave Run-Up: A Field Study on Trinidad Beaches

Authors: Shani Brathwaite, Deborah Villarroel-Lamb

Abstract:

Wave run-up may be defined as the time-varying position of the landward extent of the water's edge, measured vertically from the mean water level position. The hydrodynamics of the swash zone and the accurate prediction of maximum wave run-up play a critical role in the study of coastal engineering. An understanding of these processes is necessary for the modeling of sediment transport, beach recovery, and the design and maintenance of coastal engineering structures. However, due to the complex nature of the swash zone, there remains a lack of detailed knowledge in this area. In particular, insufficient consideration has been given to bed porosity, and ultimately infiltration/exfiltration processes, in the development of wave run-up models. Theoretically, there should be an inverse relationship between maximum wave run-up and beach porosity: the greater the rate of infiltration during an event, associated with a larger bed porosity, the lower the magnitude of the maximum wave run-up. Additionally, most models have been developed using data collected on North American or Australian beaches and may have limitations when used for operational forecasting in Trinidad. This paper aims to assess the influence and significance of infiltration and exfiltration processes on wave run-up magnitudes within the swash zone. It also pays particular attention to how well various empirical formulae can predict maximum run-up on contrasting beaches in Trinidad. Traditional surveying techniques will be used to collect wave run-up and cross-sectional data on various beaches. Wave data from wave gauges and wave models will be used, as well as porosity measurements collected using a double-ring infiltrometer. The relationship between maximum wave run-up and differing physical parameters will be investigated using correlation analyses. These physical parameters comprise wave and beach characteristics such as wave height, wave direction, period, beach slope, the magnitude of wave setup, and beach porosity. Most parameterizations of maximum wave run-up are described using differing parameters and do not always have good predictive capability. This study seeks to improve the prediction of wave run-up by using the aforementioned parameters to generate a formulation with a special focus on the influence of infiltration/exfiltration processes. This will further contribute to the improvement of the prediction of sediment transport, beach recovery, and the design of coastal engineering structures in Trinidad.
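As an example of the empirical formulae to be evaluated, a sketch of the widely used Stockdon et al. (2006) run-up parameterization follows (a standard published formula, included here for illustration; it contains no porosity term, which is precisely the gap this study targets):

```python
import math

def stockdon_runup(h0, t0, beta):
    """R2% wave run-up (m) via the Stockdon et al. (2006) parameterization.

    h0: deep-water significant wave height (m); t0: peak period (s);
    beta: foreshore beach slope. Note: no infiltration/exfiltration term.
    """
    l0 = 9.81 * t0 ** 2 / (2 * math.pi)        # deep-water wavelength (m)
    setup = 0.35 * beta * math.sqrt(h0 * l0)
    swash = math.sqrt(h0 * l0 * (0.563 * beta ** 2 + 0.004)) / 2
    return 1.1 * (setup + swash)

# Example: 1.5 m, 8 s waves on a 1:10 foreshore.
print(f"R2% ~ {stockdon_runup(1.5, 8.0, 0.1):.2f} m")
```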

Keywords: beach porosity, empirical models, infiltration, swash, wave run-up

Procedia PDF Downloads 357
450 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait

Authors: Obaid AlOtaibi, Salman Hussain

Abstract:

Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography, and environmental impacts, and with several economic factors, most notably the advancement of wind technology and energy prices. It is the fastest-growing and least expensive method for generating electricity. Wind energy generation is directly related to the characteristics of spatial wind. Therefore, the feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used in groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects away from conventional electricity services, to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait, with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied a suitability model in the ArcGIS program, a spatial analysis model that compares multiple locations based on graded weights to choose the most suitable one. The study criteria are: average annual wind speed, land use, land topography, distance from the main road networks, and urban areas. According to these criteria, four proposed locations to establish wind farm projects were selected based on the weights of the degree of suitability (excellent, good, average, and poor). The most suitable locations, with an excellent rank (4), cover 8% of Kuwait's area, distributed as follows: Al-Shqaya, Al-Dabdaba, Al-Salmi (5.22%), Al-Abdali (1.22%), Umm al-Hayman (0.70%), and North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider the proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for the future development of wind farms in Kuwait, as this location is economically feasible.
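A minimal sketch of the weighted-overlay suitability scoring that such an ArcGIS analysis performs, shown on tiny synthetic rasters (the weights and scores below are invented; the study's actual weighting scheme is not given in the abstract):

```python
import numpy as np

# Hypothetical raster layers (0-4 suitability scores per cell) and weights
# mirroring the study's criteria; the actual weights were set by the authors.
wind_speed = np.array([[4, 3], [2, 4]])
land_use   = np.array([[4, 4], [1, 3]])
topography = np.array([[3, 4], [2, 4]])
road_dist  = np.array([[4, 2], [3, 4]])

weights = {"wind": 0.4, "land": 0.2, "topo": 0.2, "road": 0.2}
suitability = (weights["wind"] * wind_speed + weights["land"] * land_use
               + weights["topo"] * topography + weights["road"] * road_dist)

# Rank 4 = excellent, down to 1 = poor (simple rounding of the weighted sum).
print(np.rint(suitability))
```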

Keywords: Kuwait, renewable energy, spatial analysis, wind energy

Procedia PDF Downloads 146
449 Bionaut™: A Breakthrough Robotic Microdevice to Treat Non-Communicating Hydrocephalus in Both Adult and Pediatric Patients

Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher

Abstract:

Bionaut Labs, LLC is developing a minimally invasive robotic microdevice designed to treat non-communicating hydrocephalus in both adult and pediatric patients. The device utilizes biocompatible microsurgical particles (Bionaut™) that are specifically designed to safely and reliably perform accurate fenestration(s) in the 3rd ventricle, aqueduct of Sylvius, and/or trapped intraventricular cysts of the brain in order to re-establish normal cerebrospinal fluid flow dynamics and thereby balance and/or normalize intra/intercompartmental pressure. The Bionaut™ is navigated to the target via CSF or brain tissue in a minimally invasive fashion with precise control using real-time imaging. Upon reaching the pre-defined anatomical target, the external driver allows for directing the specific microsurgical action defined to achieve the surgical goal. Notable features of the proposed protocol are i) Bionaut™ access to the intraventricular target follows a clinically validated endoscopy trajectory which may not be feasible via ‘traditional’ rigid endoscopy: ii) the treatment is microsurgical, there are no foreign materials left behind post-procedure; iii) Bionaut™ is an untethered device that is navigated through the subarachnoid and intraventricular compartments of the brain, following pre-designated non-linear trajectories as determined by the safest anatomical and physiological path; iv) Overall protocol involves minimally invasive delivery and post-operational retrieval of the surgical Bionaut™. The approach is expected to be suitable to treat pediatric patients 0-12 months old as well as adult patients with obstructive hydrocephalus who fail traditional shunts or are eligible for endoscopy. Current progress, including platform optimization, Bionaut™ control, and real-time imaging and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of ovine models, will be discussed.

Keywords: Bionaut™, cerebrospinal fluid, CSF, fenestration, hydrocephalus, micro-robot, microsurgery

Procedia PDF Downloads 169
448 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico

Authors: Gustavo Cruz-Bello

Abstract:

Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was conducted to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local authorities in risk management to facilitate the process. Two workshops were conducted. In the first, a poster-sized print of a high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees, and fishermen, with ages ranging between 23 and 58 years. For the first task, participants were asked to locate emblematic places and mark them on the image to familiarize themselves with it. Then, they were asked to locate areas that get flooded and the buildings they use as refuges, to list actions they usually take to reduce vulnerability, and to collectively come up with others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk management experts, who validated the feasibility of the proposed actions. For the second workshop, we brought the information back to the community for feedback. Additionally, a survey was applied in one household per block in the community to obtain socioeconomic, prevention, and adaptation data. The information generated from the workshops was contrasted, through t-tests and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among areas with a higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it. However, there was no consistent relationship between regularly flooded areas and residents' average years of education, household services, or house modifications made against heavy rains. We can say that the participatory cartography intervention made participants aware of their vulnerability and made them collectively reflect on actions that can reduce flood disasters. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
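The statistical comparison described above can be sketched as follows; the data, variable choices, and counts are hypothetical stand-ins, not the study's survey values:

```python
# Illustrative sketch of the reported tests (hypothetical data): a
# Welch t-test on years of education between households in regularly
# flooded vs. non-flooded blocks, and a chi-squared test on
# preparedness counts.
import numpy as np
from scipy import stats

edu_flooded = np.array([6, 9, 9, 12, 6, 9, 12, 6])
edu_not_flooded = np.array([9, 12, 12, 9, 16, 12, 9, 12])
t_stat, t_p = stats.ttest_ind(edu_flooded, edu_not_flooded, equal_var=False)

# Contingency table: rows flooded / not flooded, cols prepared / not
prepared = np.array([[18, 7],
                     [22, 5]])
chi2, chi_p, dof, expected = stats.chi2_contingency(prepared)

print(f"t-test p = {t_p:.3f}, chi-squared p = {chi_p:.3f}")
# Large p-values would be consistent with the study's finding of no
# consistent relationship between flood exposure and education or
# preparedness.
```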

Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability

Procedia PDF Downloads 113
447 The Effect of Air Filter Performance on Gas Turbine Operation

Authors: Iyad Al-Attar

Abstract:

Air filters are widely used in gas turbine applications to ensure that the large mass flow (on the order of 500 kg/s) of clean air reaches the compressor. The continuous demand for high availability and reliability has highlighted the critical role of air filter performance in providing enhanced air quality. In addition to being challenged by different environments (tropical, coastal, hot), gas turbines confront a wide array of atmospheric contaminants with various concentrations and particle size distributions that can lead to performance degradation and component deterioration. The role of air filters is therefore of paramount importance, since a fouled compressor can reduce the power output and availability of the gas turbine by over 70% throughout operation. Consequently, accurate filter performance prediction is a critical tool in filter selection, considering its role in minimizing the economic impact of outages. In fact, the actual performance of Efficient Particulate Air (EPA) filters used in gas turbines tends to deviate from the performance predicted by laboratory results. This experimental work investigates the initial pressure drop and fractional efficiency curves of full-scale pleated V-shaped EPA filters used globally in gas turbines. The investigation involved examining the effect of different operational conditions, such as flow rates (500 to 5000 m3/h), and design parameters, such as pleat count (28, 30, 32, and 34 pleats per 100 mm). This experimental work has highlighted the underlying reasons for the reduction in filter permeability with increasing flow rate and pleat density. The surface area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or filtration medium compression. This paper also demonstrates that increasing the flow rate has a more pronounced effect on filter performance than pleating density. This experimental work suggests that a valid comparison of pleat densities should be based on the effective surface area, namely, the area that participates in the filtration process, and not the total surface area the pleat density provides. Throughout this study, an optimal pleat count that satisfies both initial pressure drop and efficiency requirements did not necessarily exist.
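For context, the initial (clean-filter) pressure drop across the medium is commonly modelled with Darcy's law; the relation below is standard filtration theory rather than the paper's specific model, and it makes explicit why any loss of effective area raises the pressure drop:

```latex
\Delta p = \frac{\mu\, t}{k}\, U_m,
\qquad
U_m = \frac{Q}{A_{\mathrm{eff}}}
```

where $\mu$ is the air viscosity, $t$ the media thickness, $k$ the media permeability, $Q$ the volumetric flow rate, and $A_{\mathrm{eff}}$ the effective filtration area. Pleat crowding, panel deflection, pleat distortion, and medium compression all shrink $A_{\mathrm{eff}}$, raising the media velocity $U_m$ and hence $\Delta p$ at a given $Q$.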

Keywords: filter efficiency, EPA filters, pressure drop, permeability

Procedia PDF Downloads 239
446 Energy Efficient Refrigerator

Authors: Jagannath Koravadi, Archith Gupta

Abstract:

In a world with constantly growing energy prices and growing concerns about the global climate changes caused by increased energy consumption, it is becoming more and more essential to save energy wherever possible. Refrigeration systems are nowadays among the major bulk energy-consuming systems in the industrial, residential, and household sectors. Refrigeration systems with considerable cooling requirements consume a large amount of electricity and thereby contribute greatly to running costs. Therefore, a great deal of attention is being paid throughout the world to improving the performance of refrigeration systems. The Coefficient of Performance (COP) of a refrigeration system is used to determine the system's overall efficiency. The operating cost to the consumer and the overall environmental impact of a refrigeration system in turn depend on the COP, or efficiency, of the system. The COP of a refrigeration system should therefore be as high as possible. Slight modifications to the technical elements of modern refrigeration systems have the potential to reduce energy consumption, and improvements in simple operational practices with minimal expense can have a beneficial impact on the COP of the system. Thus, the challenge is to determine the changes that can be made in a refrigeration system in order to improve its performance, reduce operating costs and power requirements, improve environmental outcomes, and achieve a higher COP. A better solution to this challenge is to incorporate modifications in conventional refrigeration systems for saving energy. Energy efficiency, in addition to improving the COP, can deliver a range of savings, such as reduced operation and maintenance costs, improved system reliability, improved safety, increased productivity, better matching of refrigeration load and equipment capacity, reduced resource consumption and greenhouse gas emissions, a better working environment, and reduced energy costs. The present work aims at fabricating a working model of a refrigerator that provides effective heat recovery from the superheated refrigerant with the help of an efficient de-superheater. The temperatures of the refrigerant and the water in the de-superheater are measured at different intervals of time to determine the quantity of waste heat recovered. It is found that, with the de-superheater, the COP of the system improves by about 6%, the power input to the compressor decreases by 4%, and the refrigeration capacity increases by 4%.
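For reference, the COP in question is the standard refrigeration definition (textbook thermodynamics rather than anything specific to this work):

```latex
\mathrm{COP} = \frac{\dot{Q}_{\mathrm{evap}}}{\dot{W}_{\mathrm{comp}}}
```

where $\dot{Q}_{\mathrm{evap}}$ is the refrigeration capacity and $\dot{W}_{\mathrm{comp}}$ the compressor power input. The reported changes act on both terms at once: the capacity (numerator) rises while the compressor power (denominator) falls, and both effects push the COP upward.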

Keywords: coefficient of performance, de-superheater, refrigerant, refrigeration capacity, heat recovery

Procedia PDF Downloads 320
445 Intersections and Cultural Landscape Interpretation in the Case of Ancient Messene in the Peloponnese

Authors: E. Maistrou, P. Themelis, D. Kosmopoulos, K. Boulougoura, A. M. Konidi, K. Moretti

Abstract:

InterArch is an ongoing research project, running since September 2020, that aims to propose a digital application for the enhancement of the cultural landscape, one which emphasizes the contribution of physical space and time to digital data organization. The research case study refers to Ancient Messene in the Peloponnese, one of the most important archaeological sites in Greece. The project integrates an interactive approach to the natural environment, aiming at a manifold sensory experience. It combines the physical space of the archaeological site with the digital space of archaeological and cultural data while, at the same time, embracing storytelling processes through an interdisciplinary approach that familiarizes the user with multiple semantic interpretations. The research project is co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH - CREATE - INNOVATE (project code: Τ2ΕΔΚ-01659). It involves mutual collaboration between academic and cultural institutions and the contribution of an IT applications development company. New technologies and the integration of digital data enable the implementation of non-linear narratives related to the representational characteristics of the art of collage. Various images (photographs, drawings, etc.) and sounds (narrations, music, soundscapes, audio signs, etc.) could be presented, according to our proposal, through new semiotics of augmented and virtual reality technologies applied to touch screens and smartphones. Despite the fragmentation of tangible or intangible references, material landscape formations, including archaeological remains, constitute the common ground that can inspire cultural narratives in a process that unfolds personal perceptions and collective imaginaries. It is in this context that the cultural landscape may be considered an indication of spatial and historical continuity, and in this context that history could emerge, according to our proposal, not solely as a previous inscription but also as an actual happening: a rhythm of occurrences suggesting mnemonic references, an evolving history projected onto the contemporary, ongoing cultural landscape.

Keywords: cultural heritage, digital data, landscape, archaeological sites, visitors’ itineraries

Procedia PDF Downloads 80
444 An Exploratory Case Study of the Transference of Skills and Dispositions Used by a Newly Qualified Teacher

Authors: Lynn Machin

Abstract:

Using the lens of a theoretical framework relating to learning to learn, the intention of the case study was to explore how transferable the teaching and learning skills of a newly qualified teacher (post-compulsory education) were when used in an overseas, unfamiliar, and challenging post-compulsory educational environment. In particular, the research sought to explore how this newly qualified teacher made use of the skills developed during her teacher training and to ascertain if, and what, other skills were necessary for her to have a positive influence on her learners and to thrive in a different country and learning milieu. This case study looks at the experience of a trainee teacher who recently qualified in the UK to teach in post-compulsory education (i.e., post-16 education). Rather than gaining employment in a UK-based academy or college of further education, this newly qualified teacher secured her first employment as a teacher in a province in China. Moreover, she had limited travel experience and had never travelled to Asia. She was one of the quieter and more reserved members of the one-year teacher training course and was the least likely of the group to have made the decision to work abroad. How transferable the pedagogical skills she had gained during her training would be when used in a culturally different and therefore (to her) challenging environment was a key focus of the study. Another key focus was to explore the dispositions used by the newly qualified teacher in order to teach and to thrive in an overseas educational environment. The methodological approach used for this study was both interpretative and qualitative. The associated methods were: observation, observing the wider and operational practice of the newly qualified teacher over a five-day period, and her need, ability, and willingness to be reflective, resilient, reciprocal, and resourceful; and interview, a semi-structured interview with the newly qualified teacher following the observation of her practice. Findings from this case study illuminate the modifications the newly qualified teacher made to her bank of teaching and learning strategies, as well as the essential dispositions she used to know how to learn and also, crucially, to be ready and willing to do so. Such dispositions include being resilient, resourceful, reciprocal, and reflective, which were necessary in order to adapt to the emerging challenges encountered by the teacher during her first months of employment in China. It is concluded that developing the skills to teach is essential for good teaching and learning practices. Having dispositions that enable teachers to work in ever-changing conditions and surroundings is, this paper argues, essential for the transferability and longevity of use of these skills.

Keywords: learning, post-compulsory, resilience, transferable

Procedia PDF Downloads 292
443 Additive Manufacturing of Microstructured Optical Waveguides Using Two-Photon Polymerization

Authors: Leonnel Mhuka

Abstract:

Background: The field of photonics has witnessed substantial growth, with an increasing demand for miniaturized and high-performance optical components. Microstructured optical waveguides have gained significant attention due to their ability to confine and manipulate light at the subwavelength scale. Conventional fabrication methods, however, face limitations in achieving intricate and customizable waveguide structures. Two-photon polymerization (TPP) emerges as a promising additive manufacturing technique, enabling the fabrication of complex 3D microstructures with submicron resolution. Objectives: This experiment aimed to utilize two-photon polymerization to fabricate microstructured optical waveguides with precise control over geometry and dimensions. The objective was to demonstrate the feasibility of TPP as an additive manufacturing method for producing functional waveguide devices with enhanced performance. Methods: A femtosecond laser system operating at a wavelength of 800 nm was employed for two-photon polymerization. A custom-designed CAD model of the microstructured waveguide was converted into G-code, which guided the laser focus through a photosensitive polymer material. The waveguide structures were fabricated using a layer-by-layer approach, with each layer formed by localized polymerization induced by non-linear absorption of the laser light. Characterization of the fabricated waveguides included optical microscopy, scanning electron microscopy, and optical transmission measurements. The optical properties, such as mode confinement and propagation losses, were evaluated to assess the performance of the additively manufactured waveguides. Results: The experiment successfully demonstrated the additive manufacturing of microstructured optical waveguides using two-photon polymerization. Optical microscopy and scanning electron microscopy revealed intricate 3D structures with submicron resolution. The measured optical transmission indicated efficient light propagation through the fabricated waveguides. The waveguides exhibited well-defined mode confinement and relatively low propagation losses, showcasing the potential of TPP-based additive manufacturing for photonics applications. The experiment highlighted the advantages of TPP in achieving high-resolution, customized, and functional microstructured optical waveguides. Conclusion: This experiment substantiates the viability of two-photon polymerization as an innovative additive manufacturing technique for producing complex microstructured optical waveguides. The successful fabrication and characterization of these waveguides open doors to further advancements in the field of photonics, enabling the development of high-performance integrated optical devices for various applications.
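As a rough illustration of the CAD-to-G-code step described above, the sketch below rasterizes a simple rectangular waveguide core into layer-by-layer scan lines and emits G-code-style moves; every dimension, hatch spacing, and feed value is hypothetical, and a real TPP toolchain would add exposure control and galvo/stage handling far beyond this:

```python
# Hypothetical layer-by-layer slicing sketch (not the authors'
# toolchain): serpentine scan lines over a rectangular waveguide core,
# emitted as G-code-style moves. All parameters are illustrative.
def slice_waveguide(length_um=100.0, width_um=2.0, height_um=2.0,
                    layer_um=0.2, hatch_um=0.2, feed=100.0):
    lines = ["G90 ; absolute positioning"]
    z = 0.0
    for _ in range(int(height_um / layer_um)):
        y, direction = 0.0, 1
        while y <= width_um:
            x0, x1 = (0.0, length_um) if direction > 0 else (length_um, 0.0)
            lines.append(f"G0 X{x0:.3f} Y{y:.3f} Z{z:.3f}")          # reposition
            lines.append(f"G1 X{x1:.3f} Y{y:.3f} F{feed:.1f} ; polymerize")
            y += hatch_um
            direction = -direction  # serpentine: reverse scan direction
        z += layer_um  # next layer
    return "\n".join(lines)

print(slice_waveguide()[:300])  # preview the first few moves
```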

Keywords: additive manufacturing, microstructured optical waveguides, two-photon polymerization, photonics applications

Procedia PDF Downloads 100
442 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups ("Good Earth" and "Soft Clay") based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive; thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this non-destructive and near-instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model, coupled with the apparent electrical resistivity of the soil (ρ), can classify soils into "Good Earth" and "Soft Clay" in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three measurements were targeted for addition to the computer vision scheme: ρ, measured using a set of four probes arranged in a Wenner array; the soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: "Good Earth", "Soft Clay", and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that the ρ, w, and CPT measurements can be collectively analyzed to classify soils into "Good Earth" or "Soft Clay" and are feasible as complementary methods to the computer vision system.
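The GLCM-plus-ANN pipeline described above can be sketched as follows; this is an illustrative reconstruction, not the authors' implementation, and the file names, feature choices, and network size are all hypothetical:

```python
# Illustrative sketch: GLCM texture features from a soil image,
# concatenated with resistivity and water content, feeding a small
# neural network classifier.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19
from skimage.io import imread
from sklearn.neural_network import MLPClassifier

def glcm_features(gray_img, levels=64):
    """Quantize a [0, 1] grayscale image and extract GLCM texture parameters."""
    img = (gray_img * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])

def soil_feature_vector(image_path, resistivity_ohm_m, water_content):
    gray = rgb2gray(imread(image_path))
    return np.concatenate([glcm_features(gray),
                           [resistivity_ohm_m, water_content]])

# Training on labelled samples (0 = "Good Earth", 1 = "Soft Clay"):
# X = np.stack([soil_feature_vector(p, r, w) for p, r, w in samples])
# clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
```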

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 239
441 Introduction to Two Artificial Boundary Conditions for Transient Seepage Problems and Their Application in Geotechnical Engineering

Authors: Shuang Luo, Er-Xiang Song

Abstract:

Many problems in geotechnical engineering, such as foundation deformation, groundwater seepage, seismic wave propagation, and geothermal transfer, may involve analysis of ground that can be seen as extending to infinity. To that end, consideration has to be given to how to treat the unbounded domain when analyzing it with numerical methods such as the finite element method (FEM), the finite difference method (FDM), or the finite volume method (FVM). A simple artificial boundary approach, derived from the analytical solutions for transient radial seepage problems, is introduced first. It should be noted, however, that the analytical solutions used to derive this artificial boundary are particular solutions under certain boundary conditions, such as constant hydraulic head at the origin or constant pumping rate of the well. For unbounded domains with unsteady boundary conditions, a more sophisticated artificial boundary approach is presented. By applying Laplace transforms and introducing specially defined auxiliary variables, the global artificial boundary conditions (ABCs) are simplified to local ones, so that the computational efficiency is enhanced significantly. The two local ABCs are implemented in a finite element computer program so that various seepage problems can be calculated. The two approaches are first verified by the computation of a one-dimensional radial flow problem and then applied to more general two-dimensional cylindrical problems and plane problems. Numerical calculations show that the local ABCs not only give good results for one-dimensional axisymmetric transient flow, but are also applicable to more general problems, such as axisymmetric two-dimensional cylindrical problems and even planar two-dimensional flow problems for well doublets and well groups. An important advantage of the latter local boundary is its applicability to seepage under rapidly changing unsteady boundary conditions; even then, the computational results on the truncated boundary are usually quite satisfactory. In this respect, it is superior to the former local boundary. Simulation of relatively long operational times demonstrates, to a certain extent, the numerical stability of the local boundary. The solutions of the two local ABCs are compared with each other and with those obtained using a large element mesh, which proves their satisfactory performance and obvious superiority over the large-mesh model.
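For orientation, the setting behind such radial artificial boundaries is the standard transient radial seepage (diffusion) equation; the sketch below, under the usual assumptions of homogeneous, confined, axisymmetric flow with hydraulic diffusivity $a$, shows the form from which Laplace-transform ABCs of this kind are typically derived (it is not the authors' exact formulation):

```latex
\frac{\partial h}{\partial t}
  = a\left(\frac{\partial^2 h}{\partial r^2}
          + \frac{1}{r}\frac{\partial h}{\partial r}\right),
\qquad
\left.\frac{\partial \bar{h}}{\partial r}\right|_{r=R}
  = -\sqrt{\frac{s}{a}}\,
    \frac{K_1\!\left(R\sqrt{s/a}\right)}{K_0\!\left(R\sqrt{s/a}\right)}\,
    \bar{h}(R,s)
```

where $\bar{h}(r,s)$ is the Laplace transform of the hydraulic head and $K_0$, $K_1$ are modified Bessel functions of the second kind. The second relation is the exact non-reflecting condition on a truncation circle $r = R$, obtained from the decaying solution $K_0(r\sqrt{s/a})$; it is global in time after inversion, which is precisely what the paper's auxiliary variables serve to localize.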

Keywords: transient seepage, unbounded domain, artificial boundary condition, numerical simulation

Procedia PDF Downloads 294