Search results for: reconfigurable filters
63 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery
Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas
Abstract:
The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model captures the dynamics of the cell and thereby supports energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). In real-time applications, however, the model parameters are time-varying: they change with current, temperature, state of charge (SOC), and battery aging, and this strongly affects the performance of the model. Therefore, to improve the performance of the equivalent circuit model, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system involving various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. Increasing the model order improves the model accuracy, but a higher-order model tends toward over-parameterization and poor prediction capability, while the model complexity grows enormously. In the time domain, it also becomes difficult to solve the higher-order differential equations as the model order increases. This problem can be resolved by frequency-domain analysis, where the computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. The selective frequency-domain estimation has been carried out by first estimating the frequencies of the input and output by subspace decomposition, then choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares, and Kalman-filter-based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition
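As an illustration of the frequency-domain estimation idea, here is a minimal Python sketch that fits the impedance of a second-order ECM (a series resistor plus two parallel RC branches, matching the three-resistor/two-capacitor model above) to complex impedance samples at a few dominant frequencies by nonlinear least squares. The frequency band, initial guesses, and synthetic data are hypothetical, not values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def ecm_impedance(params, omega):
    """Second-order ECM impedance: series R0 plus two parallel RC branches."""
    r0, r1, c1, r2, c2 = params
    return (r0
            + r1 / (1 + 1j * omega * r1 * c1)
            + r2 / (1 + 1j * omega * r2 * c2))

def residuals(params, omega, z_meas):
    # Stack real and imaginary parts so the solver sees a real-valued problem.
    diff = ecm_impedance(params, omega) - z_meas
    return np.concatenate([diff.real, diff.imag])

# omega: dominant angular frequencies, e.g. picked by subspace decomposition
# z_meas: measured impedance U(jw)/I(jw) at those frequencies (complex array)
omega = 2 * np.pi * np.array([0.01, 0.1, 1.0, 10.0])               # placeholder band
z_meas = ecm_impedance([0.05, 0.02, 500.0, 0.03, 3000.0], omega)   # synthetic data

fit = least_squares(residuals, x0=[0.1, 0.1, 100.0, 0.1, 1000.0],
                    args=(omega, z_meas), bounds=(0, np.inf))
print(dict(zip(["R0", "R1", "C1", "R2", "C2"], fit.x)))
```

A recursive least-squares or Kalman-filter variant, as used in the paper, would update these parameters sample by sample instead of in one batch fit.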
Procedia PDF Downloads 150
62 A Comparative Study of Simple and Pre-polymerized Fe Coagulants for Surface Water Treatment
Authors: Petros Gkotsis, Giorgos Stratidis, Manassis Mitrakas, Anastasios Zouboulis
Abstract:
This study investigates the use of original and pre-polymerized iron (Fe) reagents compared to the commonly applied polyaluminum chloride (PACl) coagulant for surface water treatment. The applied coagulants included ferric chloride (FeCl₃) and ferric sulfate (Fe₂(SO₄)₃) as well as the pre-polymerized Fe reagents polyferric sulfate (PFS) and polyferric chloride (PFCl). The efficiency of the coagulants was evaluated by the removal of natural organic matter (NOM) and suspended solids (SS), determined in terms of the reduction of UV absorption at 254 nm and of turbidity, respectively. The residual metal concentration (Fe and Al) was also measured. The coagulants were added at five concentrations (1, 2, 3, 4, and 5 mg/L) and three pH values (7.0, 7.3, and 7.6). Experiments were conducted in a jar-test device with two types of synthetic surface water (of high and low organic strength), which consisted of humic acid (HA) and kaolin at different concentrations (5 mg/L and 50 mg/L). After the coagulation/flocculation process, the clarified water was separated with filters of 0.45 μm pore size. Filtration was also conducted before the addition of coagulants in order to isolate the 'net' effect of the coagulation/flocculation process on the examined parameters (UV absorption at 254 nm, turbidity, and residual metal concentration). Results showed that the use of PACl resulted in the highest removal of humics for both types of surface water. For the surface water of high organic strength (humic acid-kaolin, 50 mg/L-50 mg/L), the highest removal of humics was observed at the highest coagulant dosage of 5 mg/L and at pH 7. In contrast, turbidity was not significantly affected by the coagulant dosage. However, the use of PACl decreased turbidity the most, especially when the surface water of high organic strength was employed. As expected, applying coagulation/flocculation prior to filtration improved NOM removal but only slightly affected turbidity. Finally, the residual Fe concentration (0.01-0.1 mg/L) was much lower than the residual Al concentration (0.1-0.25 mg/L).
Keywords: coagulation/flocculation, iron and aluminum coagulants, metal salts, pre-polymerized coagulants, surface water treatment
Procedia PDF Downloads 154
61 The Comparative Electroencephalogram Study: Children with Autistic Spectrum Disorder and Healthy Children Evaluate Classical Music in Different Ways
Authors: Galina Portnova, Kseniya Gladun
Abstract:
Twenty-seven children with ASD (average age 6.13 years, average CARS score 32.41) and 25 healthy children (average age 6.35 years) participated in our EEG experiment. Six types of musical stimulation were presented, including compositions by Gluck, Javier-Naida, Kenny G, Chopin, and other classical pieces. Children with autism showed an orientation reaction to the music and gave behavioral responses to the different types of music; some of them were able to rate the stimuli on scales. The participants were instructed to remain calm. Brain electrical activity was recorded using a 19-channel EEG recording device, 'Encephalan' (Taganrog, Russia). EEG epochs lasting 150 s were analyzed using the EEGLab plugin for MATLAB (MathWorks Inc.). For the EEG analysis, we used the Fast Fourier Transform (FFT) and analyzed the peak alpha frequency (PAF), the correlation dimension D2, and the stability of rhythms. To express the dynamics of desynchronization of the different rhythms, we calculated the envelope of the EEG signal, using the whole frequency range and a set of small narrowband filters, by means of the Hilbert transform. Our data showed that healthy children exhibited similar EEG spectral changes during musical stimulation and described the feelings induced by the musical fragments in similar ways. The exception was the 'Chopin. Prelude' fragment (no. 6): this musical fragment induced different subjective feelings, behavioral reactions, and EEG spectral changes in children with ASD and in healthy children. The correlation dimension D2 was significantly lower in children with ASD than in healthy children during musical stimulation. The Hilbert envelope frequency was reduced in all groups of subjects during musical compositions 1, 3, 5, and 6 compared to the background. During musical fragments 2 and 4 (terrible), a lower Hilbert envelope frequency was observed only in children with ASD and correlated with the severity of the disease. The peak alpha frequency was lower than in the background during these musical compositions in healthy children and, conversely, higher in children with ASD.
Keywords: electroencephalogram (EEG), emotional perception, ASD, musical perception, Childhood Autism Rating Scale (CARS)
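A minimal sketch of the Hilbert-envelope computation described above: band-pass filter one EEG channel, take the analytic signal, and estimate the dominant frequency of its amplitude envelope. The sampling rate, filter order, and test signal are assumptions for illustration, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_frequency(eeg, fs, band):
    """Dominant frequency of the Hilbert envelope of a narrowband-filtered signal."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    narrow = filtfilt(b, a, eeg)
    env = np.abs(hilbert(narrow))            # instantaneous amplitude envelope
    # Dominant envelope frequency from the envelope's own spectrum
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    return freqs[np.argmax(spec)]

fs = 250.0                                    # assumed sampling rate, Hz
t = np.arange(0, 150, 1 / fs)                 # 150 s epoch, as in the study
eeg = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t))
print(envelope_frequency(eeg, fs, (8.0, 12.0)))   # alpha-band example
```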
Procedia PDF Downloads 284
60 Evidence of Microplastics Ingestion in Two Commercial Cephalopod Species: Octopus Vulgaris and Sepia Officinalis
Authors: Federica Laface, Cristina Pedà, Francesco Longo, Francesca de Domenico, Riccardo Minichino, Pierpaolo Consoli, Pietro Battaglia, Silvestro Greco, Teresa Romeo
Abstract:
Plastic pollution represents one of the most important threats to marine biodiversity. In recent decades, different species have been investigated to evaluate the extent of the plastic ingestion phenomenon. Even though cephalopods play an important role in the food chain, they are still poorly studied. The aim of this research was to investigate plastic ingestion in two commercial cephalopod species from the southern Tyrrhenian Sea: the common octopus, Octopus vulgaris (n=6; mean mantle length ML 10.7 ± 1.8), and the common cuttlefish, Sepia officinalis (n=13; mean ML 13.2 ± 1.7). Plastics were extracted from the filters obtained by the chemical digestion of the cephalopods' gastrointestinal tracts (GITs), using a 10% potassium hydroxide (KOH) solution in a 1:5 (w/v) ratio. Once isolated, the particles were photographed and measured, and their size class, shape, and color were recorded. A total of 81 items was isolated from 16 of the 19 examined GITs, representing a total occurrence (%O) of 84.2% with a mean value of 4.3 ± 8.6 particles per individual. In particular, 62 plastic items were found in the 6 specimens of O. vulgaris (%O=100) and 19 particles in 10 S. officinalis (%O=94.7). In both species, the microplastic size class was the most abundant (93.8%). Plastic items found in O. vulgaris were mainly fibers (61%), while fragments were the most frequent in S. officinalis (53%). Transparent was the most common color in both species. The analysis will be completed by Fourier transform infrared (FT-IR) spectroscopy in order to identify the nature of the polymers. This study reports preliminary data on plastic ingestion events in two cephalopod species and represents the first record of plastic ingestion by the common octopus. The microplastic items detected in both the common octopus and the common cuttlefish could derive from secondary and/or accidental ingestion events, probably due to their behavior, feeding habits, and anatomical features. Further studies will be required to assess the effect of marine litter pollution on these ecologically and commercially important species.
Keywords: cephalopods, GIT analysis, marine pollution, Mediterranean Sea, microplastics
Procedia PDF Downloads 254
59 Simulation of Antimicrobial Resistance Gene Fate in Narrow Grass Hedges
Authors: Marzieh Khedmati, Shannon L. Bartelt-Hunt
Abstract:
Vegetative filter strips (VFS) are used to control runoff volume and to decrease contaminant concentrations in runoff before it enters water bodies. Many studies have investigated the role of VFS in sediment and nutrient removal, but little is known about their efficiency in removing emerging contaminants such as antimicrobial resistance genes (ARGs). The Vegetative Filter Strip Modeling System (VFSMOD) was used to simulate the efficiency of VFS in this regard. Several studies have demonstrated the ability of VFSMOD to predict reductions in runoff volume and sediment concentration moving through the filters. The objectives of this study were to calibrate VFSMOD with experimental data and to assess the efficiency of the model in simulating the filter behavior in removing ARGs (ermB) and tylosin. The experimental data were obtained from a prior study conducted at the University of Nebraska-Lincoln (UNL) Rogers Memorial Farm. Three treatment factors were tested in the experiments: manure amendment, narrow grass hedges, and rainfall events. The sediment delivery ratio (SDR) was defined as the filter efficiency, and the corresponding experimental and model values were compared to each other. The model generally agreed with the experimental results and, as a result, was used to predict filter efficiencies when runoff data are not available. Narrow grass hedges (NGH) were shown to be effective in reducing tylosin and ARG concentrations. The simulation showed that the filter efficiency in removing ARGs differs with soil type and filter length. There is an optimum length for the filter strip that produces the minimum runoff volume. Based on the model results, increasing the length of the filter by 1 meter leads to higher efficiency, but lengthening beyond that decreases the efficiency. VFSMOD, which has been shown to work well in estimating VFS trapping efficiency, produced consistent results for ARG removal.
Keywords: antimicrobial resistance genes, emerging contaminants, narrow grass hedges, vegetative filter strips, vegetative filter strip modeling system
Procedia PDF Downloads 132
58 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where accurate electromagnetic near- and far-field manipulations have been achieved. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor's invention: holography. However, the major difficulty here is the lack of a suitable recording medium, so some enhancements were essential. Here, the 2D version of bulk metamaterials, the so-called metasurface, has been introduced. This new class of interfaces simplifies the problem of the recording medium, with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important approach to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation becomes more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuit method offers the most scalable way to develop an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of generalised equivalent circuits was proposed to transfer the problem from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity-plane paradigm while taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy. The environmental effects are included through an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene's chemical potential on the unit-cell input impedance. It was found that the variation of the complex conductivity of graphene allows controlling the phase and amplitude of the reflection coefficient at each element of the array. From the results obtained here, we determined that phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection-phase control.
Keywords: graphene, method of moments combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
Procedia PDF Downloads 176
57 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering
Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott
Abstract:
Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single-cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. An in-depth analysis of state-of-the-art filtering methods for single-cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single-cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a shared-nearest-neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with a weak clustering belonging. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a peripheral blood mononuclear cell dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, taken before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
Keywords: cancer research, graph theory, machine learning, single cell analysis
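The following sketch illustrates the core mechanics described above: build a shared-nearest-neighbor graph, find communities by modularity optimization, and flag vertices whose edge weight mostly leaves their own community. It is a simplified stand-in for the authors' algorithm; the parameter values, the greedy modularity routine, and the belonging threshold are assumptions.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

def snn_filter(X, k=10, min_belonging=0.5):
    """Cluster cells on a shared-nearest-neighbor graph and flag weak members."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    neigh = [set(row[1:]) for row in idx]              # drop the self-neighbor
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i in range(len(X)):
        for j in idx[i][1:]:
            shared = len(neigh[i] & neigh[int(j)])
            if shared > 0:
                G.add_edge(i, int(j), weight=shared / k)   # SNN similarity
    communities = greedy_modularity_communities(G, weight="weight")
    label = {v: c for c, com in enumerate(communities) for v in com}
    keep = []
    for v in G.nodes:
        w_in = sum(d["weight"] for _, u, d in G.edges(v, data=True)
                   if label[u] == label[v])
        w_all = sum(d["weight"] for _, u, d in G.edges(v, data=True))
        if w_all > 0 and w_in / w_all >= min_belonging:
            keep.append(v)                              # strong clustering belonging
    return label, keep

X = np.random.rand(200, 20)                             # placeholder expression matrix
labels, kept = snn_filter(X)
print(len(kept), "cells kept of", len(X))
```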
Procedia PDF Downloads 112
56 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images
Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir
Abstract:
The landing phase of a UAV is very critical, as there are many uncertainties in this phase that can easily entail a hard landing or even a crash. In this paper, the estimation of the relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive for sensors like LIDAR, or limited in operational range for sensors like ultrasonic sensors. Additionally, absolute positioning systems like GPS or an IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement; the second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the variation of the projected point as the process model, to estimate both the relative distance and the relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed was used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
Keywords: altitude estimation, drone, image processing, trajectory planning
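A heavily simplified 1-D sketch of the second approach: an EKF whose state is [distance, vertical velocity] and whose measurement is the pixel position of a tracked ground feature under a pinhole projection. The focal length, feature offset, noise covariances, and synthetic descent are all hypothetical values, not the paper's setup.

```python
import numpy as np

f_px, r_m, dt = 800.0, 0.5, 0.05   # focal length [px], feature offset [m], step [s] (assumed)

def ekf_step(x, P, z, Q, R):
    """One EKF iteration for state x = [distance, vertical velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity kinematics
    x = F @ x                                    # predict
    P = F @ P @ F.T + Q
    d = x[0]
    h = f_px * r_m / d                           # pinhole projection of the feature
    H = np.array([[-f_px * r_m / d**2, 0.0]])    # measurement Jacobian
    S = H @ P @ H.T + R
    K = P @ H.T / S                              # Kalman gain (scalar innovation)
    x = x + (K * (z - h)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([10.0, 0.0]), np.eye(2)          # initial guess: 10 m, at rest
Q, R = np.diag([1e-4, 1e-2]), np.array([[4.0]])  # process / pixel-noise covariances
for d_true in np.linspace(9.5, 5.0, 20):         # synthetic descent
    z = f_px * r_m / d_true + np.random.randn() * 2.0
    x, P = ekf_step(x, P, z, Q, R)
print("estimated distance %.2f m, velocity %.2f m/s" % (x[0], x[1]))
```

The first approach would replace the pixel-position measurement model with an optical-flow one; the filter skeleton stays the same.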
Procedia PDF Downloads 113
55 Synchronous Reference Frame and Instantaneous P-Q Theory Based Control of Unified Power Quality Conditioner for Power Quality Improvement of Distribution System
Authors: Ambachew Simreteab Gebremedhn
Abstract:
Context: This paper explores the use of synchronous reference frame theory (SRFT) and instantaneous reactive power theory (IRPT) based control of a Unified Power Quality Conditioner (UPQC) for improving power quality in distribution systems. Research Aim: To investigate the performance of different control configurations of the UPQC using SRFT and IRPT for mitigating power quality issues in distribution systems. Methodology: The study compares three control techniques (SRFT-IRPT, SRFT-SRFT, IRPT-IRPT) implemented in the series and shunt active filters of the UPQC. Data are collected under the various control algorithms to analyze UPQC performance. Findings: The results indicate the effectiveness of SRFT- and IRPT-based control techniques in addressing power quality problems such as voltage sags, swells, unbalance, and voltage and current harmonics in distribution systems. Theoretical Importance: The study provides insights into the application of SRFT and IRPT in improving power quality, specifically in mitigating unbalanced voltage sags, where conventional methods fall short. Data Collection: Data are collected under various control algorithms using simulation in MATLAB Simulink, with real-time operation executed and experimental results obtained using RT-LAB. Analysis Procedures: A performance analysis of the UPQC under different control algorithms is conducted to evaluate the effectiveness of SRFT- and IRPT-based control techniques in mitigating power quality issues. Questions Addressed: How do SRFT- and IRPT-based control techniques compare in improving power quality in distribution systems? What is the impact of using different control configurations on the performance of the UPQC? Conclusion: The study demonstrates the efficacy of SRFT- and IRPT-based control of the UPQC in mitigating power quality issues in distribution systems, highlighting its potential for enhancing voltage and current quality.
Keywords: power quality, UPQC, shunt active filter, series active filter, non-linear load, RT-LAB, MATLAB
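At the core of SRFT-based reference generation is the Park (abc to dq0) transform: in the synchronous frame the fundamental becomes a DC component, while harmonics appear as ripple that the compensator targets. Below is a minimal sketch of that transform on a synthetic distorted current; the sampling rate, harmonic content, and ideal PLL angle are assumptions for illustration.

```python
import numpy as np

def abc_to_dq0(ia, ib, ic, theta):
    """Park transform: map three-phase quantities to the synchronous dq0 frame."""
    a = 2 * np.pi / 3
    d = (2 / 3) * (ia * np.cos(theta) + ib * np.cos(theta - a) + ic * np.cos(theta + a))
    q = -(2 / 3) * (ia * np.sin(theta) + ib * np.sin(theta - a) + ic * np.sin(theta + a))
    z = (1 / 3) * (ia + ib + ic)
    return d, q, z

# In SRFT control the dq components are low-pass filtered; the DC part is the
# fundamental, and the remaining ripple maps to the harmonics to be compensated.
fs, f1 = 10_000, 50
t = np.arange(0, 0.1, 1 / fs)
theta = 2 * np.pi * f1 * t                           # PLL angle (ideal here)
a = 2 * np.pi / 3
ia = np.cos(theta) + 0.2 * np.cos(5 * theta)         # distorted load current
ib = np.cos(theta - a) + 0.2 * np.cos(5 * (theta - a))
ic = np.cos(theta + a) + 0.2 * np.cos(5 * (theta + a))
d, q, _ = abc_to_dq0(ia, ib, ic, theta)
print("mean i_d (fundamental active component): %.3f" % d.mean())
```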
Procedia PDF Downloads 8
54 Investigating the Editing's Effect of Advertising Photos on the Virtual Purchase Decision Based on the Quantitative Electroencephalogram (EEG) Parameters
Authors: Parya Tabei, Maryam Habibifar
Abstract:
Decision-making is an important cognitive function that can be defined as the process of choosing an option among the available options to achieve a specific goal. Consumer need is the main driver of purchasing decisions. Human decision-making while buying products online is subject to various factors, one of which is the quality and effect of the advertising photos. Advertising photo editing can have a significant impact on people's virtual purchase decisions. This technique helps improve the quality and overall appearance of photos by adjusting aspects such as brightness, contrast, colors, cropping, resizing, and the addition of filters. By examining the effect of editing advertising photos on the virtual purchase decision using EEG data, this study investigates the effect of edited images on customers' decision-making. A group of 30 participants was asked to react to 24 edited and unedited images while their EEG was recorded. Analysis of the EEG data revealed increased alpha-wave activity in the occipital regions (O1, O2) for both edited and unedited images, which is related to visual processing and attention. Additionally, there was an increase in beta-wave activity in the frontal regions (FP1, FP2, F4, F8) when participants viewed edited images, suggesting the involvement of cognitive processes such as decision-making and the evaluation of advertising content. Gamma-wave activity also increased in various regions, especially the frontal and parietal regions, which are associated with higher cognitive functions such as attention, memory, and perception, when viewing the edited images. While the visual processing reflected by alpha waves remained consistent across the different visual conditions, editing advertising photos appeared to boost neural activity in the frontal and parietal regions associated with decision-making processes. These findings suggest that photo editing could potentially influence consumer perceptions during virtual shopping experiences by modulating brain activity related to product assessment and purchase decisions.
Keywords: virtual purchase decision, advertising photo, EEG parameters, decision making
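The band-activity comparisons above reduce to computing average spectral power per frequency band and channel. A minimal sketch with Welch's method follows; the band edges, sampling rate, and stand-in signal are assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}   # assumed edges

def band_powers(eeg, fs):
    """Average power per frequency band from the Welch spectrum of one channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

fs = 256.0                                        # placeholder sampling rate
eeg_edited = np.random.randn(int(30 * fs))        # stand-in for an O1 recording
print(band_powers(eeg_edited, fs))
```

Comparing these per-band values between the edited and unedited viewing conditions, channel by channel, is the kind of contrast the abstract reports.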
Procedia PDF Downloads 50
53 The Concentration of Selected Cosmogenic and Anthropogenic Radionuclides in the Ground Layer of the Atmosphere (Polar and Mid-Latitudes Regions)
Authors: A. Burakowska, M. Piotrowski, M. Kubicki, H. Trzaskowska, R. Sosnowiec, B. Myslek-Laurikainen
Abstract:
The most important sources of atmospheric radioactivity are the radionuclides generated by the interaction of primary and secondary cosmic radiation with the nuclei of nitrogen, oxygen, and carbon in the upper troposphere and lower stratosphere. This produces about thirty radioisotopes of more than twenty elements. For organisms, four of them are most important: ³H, ⁷Be, ²²Na, and ¹⁴C. The natural radionuclides present in the Earth's crust also settle on dust and on particles of water vapor. By this means, the derivatives of uranium and thorium, and the long-lived ⁴⁰K, get into the air. ¹³⁷Cs is the most widespread isotope introduced into the environment by humans. To determine the concentration of radionuclides in the atmosphere, high-volume air samplers were used, in which the aerosols were collected on a special filter fabric (Petrianov filter tissue FPP-15-1.5). In 2002, the high-volume air sampler AZA-1000, designed to operate in all weather conditions of the cold polar region, was installed at the Polish Polar Observatory of the Polish Academy of Sciences in Hornsund, Spitsbergen (77°00'N, 15°33'E). Since 1991 (with short breaks), the ASS-500 air sampler has been operating in Swider at the Kalinowski Geophysical Observatory of the Institute of Geophysics of the Polish Academy of Sciences (52°07'N, 21°15'E). The following radionuclide concentrations were obtained from both stations using gamma spectroscopy: ⁷Be, ¹³⁷Cs, ¹³⁴Cs, ²¹⁰Pb, and ⁴⁰K. For the gamma spectroscopy analysis, HPGe (high-purity germanium) detectors were used. These data were compared with each other. The preliminary results gave evidence that the radioactivity measured in aerosols is not proportional to the amount of dust for either of the studied regions. Furthermore, the results indicate annual (seasonal) variability as well as a decrease in the average activity of ⁷Be with increasing latitude. The content of ⁷Be in surface air also shows a relationship with solar activity cycles.
Keywords: aerosols, air filters, atmospheric beryllium, environmental radionuclides, gamma spectroscopy, mid-latitude regions radionuclides, polar regions radionuclides, solar cycles
Procedia PDF Downloads 140
52 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT
Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar
Abstract:
The X-ray attenuation coefficient μ(E) of any substance, at energy E, is the sum of the contributions from Compton scattering, μCom(E), and the photoelectric effect, μPh(E). In terms of the electron density (ρe) and the effective atomic number (Zeff), μCom(E) is proportional to ρe·fKN(E), while μPh(E) is proportional to ρe·Zeff^x/E^y, where fKN(E) is the Klein-Nishina formula and x and y are the exponents for the photoelectric effect. By taking a sample's HU at two different excitation voltages (V=V1, V2) of the CT machine, we can solve for X=ρe and Y=ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since μCom(E) and μPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper, we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <μ(V)> = <μw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <…> accounts for the source spectrum S(E,V) and the detector efficiency D(E). The linearity of μ(E) with respect to X and Y implies that (a) <μ(V)> is a linear combination of X and Y and (b) for the inversion, X and Y can be written as linear combinations of two independent observations <μ(V1)> and <μ(V2)> with V1≠V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used in cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thickness lAl, with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state detector and of a GdOS scintillator detector. In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose, and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It was also found that the difference between the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm for finding the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor when calculating the coefficients of inversion.
Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum
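Once the inversion coefficients are tabulated, the DECT step itself is a 2x2 linear solve: <μ(V)> = a(V)·X + b(V)·Y at two voltages. The sketch below shows that mechanics; the coefficient values and water references are placeholders, not the tabulated numbers from the paper.

```python
import numpy as np

# <mu(V)> = a(V) * X + b(V) * Y, with X = rho_e and Y = rho_e * Zeff**x.
# The coefficients a, b fold in the source spectrum S(E, V) and the detector
# efficiency D(E); the numbers below are placeholders, not tabulated values.
A = np.array([[0.182, 0.031],      # a(100 kVp), b(100 kVp)
              [0.176, 0.018]])     # a(140 kVp), b(140 kVp)

def invert_dect(mu_100, mu_140):
    """Recover (X, Y) from mean attenuation measured at two tube voltages."""
    X, Y = np.linalg.solve(A, np.array([mu_100, mu_140]))
    return X, Y

mu_w = np.array([0.192, 0.188])                   # assumed water references per kVp
hu = np.array([35.0, 22.0])                       # HU at 100 and 140 kVp
mu = mu_w * (1 + hu / 1000)                       # <mu(V)> from the HU definition
X, Y = invert_dect(*mu)
print("rho_e-like X = %.4f, rho_e*Zeff^x-like Y = %.4f" % (X, Y))
```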
Procedia PDF Downloads 400
51 Enhanced Cytotoxic Effect of Expanded NK Cells with IL12 and IL15 from Leukoreduction Filter on K562 Cell Line Exhibits Comparable Cytotoxicity to Whole Blood
Authors: Abdulbaset Mazarzaei
Abstract:
Natural killer (NK) cells are innate immune effectors that play a pivotal role in combating tumor and infected cells. In recent years, the therapeutic potential of NK cells has gained significant attention due to their remarkable cytotoxic ability. This study investigates the cytotoxic effect of expanded NK cells enriched with interleukin 12 (IL-12) and interleukin 15 (IL-15), derived from leukoreduction filters, on the K562 cell line. First, NK cells were isolated from whole blood samples obtained from healthy volunteers. These cells were subsequently expanded ex vivo using a combination of feeder cells, IL-12, and IL-15. The expanded NK cells were then harvested and assessed for their cytotoxicity against K562, a well-established human chronic myelogenous leukemia cell line. The cytotoxicity was evaluated using a flow cytometry assay. The results demonstrate that the expanded NK cells exhibited significantly enhanced cytotoxicity against K562 cells compared to non-expanded NK cells. Interestingly, the expanded NK cells derived specifically from IL-12- and IL-15-enriched leukoreduction filters showed a robust cytotoxic effect similar to that of whole-blood-derived NK cells. These findings suggest that IL-12 and IL-15 in the leukoreduction filter are crucial in promoting NK cell cytotoxicity. Furthermore, the expanded NK cells displayed cytotoxicity profiles similar to those of whole-blood-derived NK cells, indicating a comparable capability to target and eliminate tumor cells. This observation is of significant relevance, as expanded NK cells from leukoreduction filters could potentially serve as a readily accessible and efficient source for adoptive immunotherapy. In conclusion, this study highlights the significant cytotoxic effect of expanded NK cells enriched with IL-12 and IL-15 obtained from leukoreduction filters on the K562 cell line, and it emphasizes that these expanded NK cells exhibit cytotoxicity comparable to that of whole-blood-derived NK cells. These findings reinforce the potential clinical utility of expanded NK cells from leukoreduction filters as an effective strategy in adoptive immunotherapy for the treatment of cancer. Further studies are warranted to explore the broader implications of this approach in clinical settings.
Keywords: natural killer (NK) cells, cytotoxicity, leukoreduction filter, IL-12 and IL-15 cytokines
Procedia PDF Downloads 64
50 Pilot Scale Investigation on the Removal of Pollutants from Secondary Effluent to Meet Botswana Irrigation Standards Using Roughing and Slow Sand Filters
Authors: Moatlhodi Wise Letshwenyo, Lesedi Lebogang
Abstract:
Botswana is an arid country that needs to start reusing wastewater as part of its water security plan. Pilot-scale slow sand filtration in combination with a roughing filter was investigated for the treatment of effluent from the Botswana International University of Science and Technology to meet Botswana irrigation standards. The system was operated at hydraulic loading rates of 0.04 m/hr and 0.12 m/hr. The results show that the system was able to reduce turbidity from 262 Nephelometric Turbidity Units to between 18 and 0 Nephelometric Turbidity Units, below the threshold limit of 30 Nephelometric Turbidity Units. The overall efficacy ranged between 61% and 100%. The removal efficiencies for suspended solids, biochemical oxygen demand, and chemical oxygen demand averaged 42.6%, 45.5%, and 77%, respectively, all within irrigation standards. The other physico-chemical parameters were within irrigation standards except for the bicarbonate ion, which averaged 297.7±44 mg/L in the influent and 196.22±50 mg/L in the effluent, above the limit of 92 mg/L; the system thus achieved an average reduction of 34.1%. Total coliforms, fecal coliforms, and Escherichia coli in the effluent initially averaged 1.1, 0.5, and 1.3 log counts, respectively, compared to corresponding influent log counts of 3.4, 2.7, and 4.1. As time passed, it was observed that the roughing filter alone achieved reductions of 97.5%, 86%, and 100% for fecal coliforms, Escherichia coli, and total coliforms, respectively. These organism numbers were observed to have increased in the slow sand filter effluent, suggesting multiplication in the tank. A water quality index value of 22.79 for the physico-chemical parameters suggests that the effluent is of excellent quality and can be used for irrigation purposes. However, the water quality index value for the microbial parameters (1820) renders the quality unsuitable for irrigation. It is concluded that slow sand filtration in combination with a roughing filter is a viable option for the treatment of secondary effluent for reuse purposes. However, further studies should be conducted, especially on the removal of microbial parameters, using the system.
Keywords: irrigation, slow sand filter, turbidity, wastewater reuse
Procedia PDF Downloads 153
49 CFD-DEM Modelling of Liquid Fluidizations of Ellipsoidal Particles
Authors: Esmaeil Abbaszadeh Molaei, Zongyan Zhou, Aibing Yu
Abstract:
The applications of liquid fluidization have increased in many industries, including particle classification, backwashing of granular filters, crystal growth, leaching and washing, and bioreactors, owing to its highly efficient liquid-solid contact, favorable mass and heat transfer, high operating flexibility, and reduced back-mixing of phases. In most of these multiphase operations, the particle properties, i.e., size, density, and shape, may change during the process because of attrition, coalescence, or chemical reactions. Previous studies, whether experimental or numerical, have mainly focused on liquid-solid fluidized beds containing spherical particles; the role of particle shape in the hydrodynamics of liquid fluidized beds is still not well known. A three-dimensional Discrete Element Model (DEM) is coupled with Computational Fluid Dynamics (CFD) to study the influence of particle shape on the particle and liquid flow patterns in liquid-solid fluidized beds. In the simulations, ellipsoidal particles are used to study the shape factor, since they can represent a wide range of particle shapes, from oblate through spherical to prolate. Different particle shapes, from oblate (disk-shaped) to elongated (rod-shaped), are selected to investigate the effect of the aspect ratio on flow characteristics such as the general particle and liquid flow pattern, the pressure drop, and the particle orientation. The model is first verified against experimental observations, and further detailed analyses are then made. It was found that spherical particles showed a uniform distribution in the bed, which resulted in a uniform pressure drop along the bed height. For particles with aspect ratios less than one (disk-shaped), however, some particles were carried into the freeboard region, the interface between the bed and the freeboard was not easy to determine, and a few particles tended to leave the bed. Prolate particles, on the other hand, showed different behaviour in the bed: they caused an unstable interface, and some flow channeling was observed at low liquid velocities. Because of the non-uniform particle flow pattern for particles with aspect ratios lower (oblate) and higher (prolate) than one, the pressure drop distribution in the bed was not as uniform as that found for spherical particles.
Keywords: CFD, DEM, ellipsoid, fluidization, multiphase flow, non-spherical, simulation
Procedia PDF Downloads 310
48 Real-Time Radiological Monitoring of the Atmosphere Using an Autonomous Aerosol Sampler
Authors: Miroslav Hyza, Petr Rulik, Vojtech Bednar, Jan Sury
Abstract:
An early and reliable detection of an increased radioactivity level in the atmosphere is one of the key aspects of atmospheric radiological monitoring. Although standard laboratory procedures provide detection limits as low as a few µBq/m³, their major drawback is the delayed reporting of results: typically a few days. This issue is the main objective of the HAMRAD project, which gave rise to a prototype of an autonomous monitoring device. It is based on the idea of sequential aerosol sampling using a carousel sample changer combined with a gamma-ray spectrometer. In our hardware configuration, the air is drawn through a filter positioned on the carousel so that it can be rotated into the measuring position after a preset sampling interval. Filter analysis is performed via a 50% relative-efficiency HPGe detector inside 8.5 cm of lead shielding. The spectrometer output signal is then analyzed using DSP electronics and Gamwin software with preset nuclide libraries and other analysis parameters. After the counting, the filter is placed into a storage bin with a capacity of 250 filters, so that the device can run autonomously for several months, depending on the preset sampling frequency. The device is connected to a central server via GPRS/GSM, where the user can view the monitoring data, including raw spectra and technological data describing the state of the device. All operating parameters can be remotely adjusted through a simple GUI. The flow rate is continuously adjustable up to 10 m³/h. The main challenge in the spectrum analysis is the subtraction of the natural background. As the detection limits are heavily influenced by the deposited activity of radon decay products and the measurement time is fixed, there must exist an optimal sample decay time (delayed spectrum acquisition). To solve this problem, we adopted a simple procedure based on sequential spectrum acquisition and an optimal partial spectral sum with respect to the detection limits for a particular radionuclide. The prototype device proved able to detect atmospheric contamination at the level of mBq/m³ per 8 h of sampling.
Keywords: aerosols, atmosphere, atmospheric radioactivity monitoring, autonomous sampler
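One plausible reading of the "optimal partial spectral sum" is sketched below: skip a varying number of early spectra (where short-lived radon progeny dominate the background) before summing, and pick the choice that minimizes a simplified Currie detection limit per unit counting time. The decay model, window channels, and live time are hypothetical, and the real procedure may weight the spectra differently.

```python
import numpy as np

def optimal_delayed_sum(spectra, bg_window, live_time):
    """Choose how many early spectra to skip before summing, minimising a
    simplified Currie detection limit per unit counting time,
    L_D ~ (2.71 + 4.65*sqrt(B)) / t. Early spectra carry most of the
    radon-progeny background, so skipping them can lower L_D."""
    best = (np.inf, 0)
    for skip in range(len(spectra)):
        summed = np.sum(spectra[skip:], axis=0)
        B = summed[bg_window].sum()                  # background under the peak
        t = live_time * (len(spectra) - skip)
        ld = (2.71 + 4.65 * np.sqrt(B)) / t          # counts -> rate-like limit
        if ld < best[0]:
            best = (ld, skip)
    return best

rng = np.random.default_rng(1)
# Stand-in sequential spectra: radon-progeny background decaying over time
spectra = [rng.poisson(50 * np.exp(-0.3 * k), 4096) for k in range(8)]
bg_window = np.arange(655, 675)                      # channels flanking a peak
print(optimal_delayed_sum(spectra, bg_window, live_time=3600.0))
```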
Procedia PDF Downloads 148
47 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method
Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota
Abstract:
The Schlieren method, which has conventionally been used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of light refraction. The Background Oriented Schlieren (BOS) method proposed by Meier is one of the measurement methods that solves these problems. The BOS method exploits the refraction of light in the same way as the Schlieren method, but it is characterized by using a digital camera to capture images of a background placed behind the observation area. The images are later analyzed by a computer to quantitatively detect the shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters that are necessary in the conventional Schlieren method, thus simplifying the setup. However, the BOS method suffers from defocusing of the observation results: since the camera is focused on the background image, the observed object is defocused, and the defocusing grows with the distance between the background and the object. On the other hand, a larger distance yields higher sensitivity. It is therefore necessary to set the distance between the background and the object appropriately for the experiment, considering the trade-off between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, visualization experiments on an underexpanded jet were performed using the BOS measurement system we constructed, with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was varied as the parameter. The images were later analyzed on a personal computer to quantitatively detect the shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The measured shifts were reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred, and the noise decreases, as the distance between the background and the axis of the underexpanded jet increases. Consequently, it is clarified that, at least in this experimental setup, the sensitivity constant should be greater than 20 and the circle-of-confusion diameter should be less than 2.7 mm.
Keywords: BOS method, underexpanded jet, Abel transformation, density field visualization
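The reconstruction chain named above (pixel shifts to a path-integrated refractive-index profile, inverse Abel transform to a radial profile, Gladstone-Dale relation to density) can be sketched with a simple onion-peeling Abel inversion. The projection data, grid spacing, and ambient constants below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

K_GD = 2.26e-4               # Gladstone-Dale constant for air [m^3/kg] (approx.)
n0, rho0 = 1.000277, 1.204   # assumed ambient refractive index and density

def abel_invert_onion(projection, dr):
    """Onion-peeling inverse Abel transform: recover the radial profile f(r)
    from its line-of-sight projection P(y), assuming axisymmetry."""
    n = len(projection)
    r = np.arange(n + 1) * dr                    # shell boundaries
    f = np.zeros(n)
    for i in range(n - 1, -1, -1):               # peel from the outside in
        s = projection[i]
        for j in range(i + 1, n):
            w = 2 * (np.sqrt(r[j + 1]**2 - r[i]**2) - np.sqrt(r[j]**2 - r[i]**2))
            s -= w * f[j]
        f[i] = s / (2 * np.sqrt(r[i + 1]**2 - r[i]**2))
    return f

# projection: path-integrated (n - n0) profile derived from the BOS pixel
# shifts of the striped background (values below are synthetic placeholders)
dr = 1e-4
proj = 1e-6 * (1 - (np.arange(200) / 200.0) ** 2)
delta_n = abel_invert_onion(proj, dr)            # radial n(r) - n0
rho = rho0 + delta_n / K_GD                      # Gladstone-Dale: n - 1 = K * rho
print(rho[:5])
```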
Procedia PDF Downloads 78
46 Nonlinear Evolution of the Pulses of Elastic Waves in Geological Materials
Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas
Abstract:
The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the 'GEOSCAN-02M' apparatus. The ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ; this energy can be reduced to 20 mJ by light filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of the ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression phase and a rarefaction phase, with the amplitude of the rarefaction phase five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cube-shaped specimens of Karelian gabbro with a rib length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through a region of the specimen without cracks, the compression phase decreases and the rarefaction phase increases due to the diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structural defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks. As the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through a specimen with horizontal cracks results in a 2.5-fold decrease in the amplitude of the rarefaction phase and a 2.1-fold increase in its duration. As the reference pulse amplitude is increased from 5 MPa to 10 MPa, a time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.
Keywords: cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock
Procedia PDF Downloads 350
45 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, their uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for the data quality filtering of opportunistic species occurrence data used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and the results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or on post-entry data validation, and the changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering the species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. The findings show that model performance is affected by (i) the quality of the filtered data, (ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and (iii) a species 'quality profile' resulting from a classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
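The filtered-versus-unfiltered comparison reduces to fitting the same model on each record set and contrasting AUC, sensitivity, and specificity. A minimal sketch follows, with a logistic regression standing in for Maxent (the comparison logic is identical); the covariates, quality scores, and filter threshold are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(X, y):
    """AUC, sensitivity and specificity for one presence/background model."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    p = model.predict_proba(X)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y, (p > 0.5).astype(int)).ravel()
    return {"AUC": roc_auc_score(y, p),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 6))                    # environmental covariates
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000)) > 0.8).astype(int)
quality = rng.random(2000)                        # stand-in record-quality score
keep = quality > 0.3                              # one hypothetical quality filter
print("unfiltered:", evaluate(X, y))
print("filtered:  ", evaluate(X[keep], y[keep]))
```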
Procedia PDF Downloads 202
44 Delineation of Subsurface Tectonic Structures Using Gravity, Magnetic and Geological Data, in the Sarir-Hameimat Arm of the Sirt Basin, NE Libya
Authors: Mohamed Abdalla Saleem, Hana Ellafi
Abstract:
The study area is located in the eastern part of the Sirt Basin, in the Sarir-Hameimat arm of the basin, south of the Amal High. The area covers the northern part of the Hameimat Trough and the Rakb High. All of these tectonic elements are part of the major structures that were created when the old Sirt Arch collapsed, and most of them trend NW-SE. This study was conducted to investigate the subsurface structures and the sedimentological characteristics of the area and to trace its tectonic and stratigraphic development. About 7,600 land gravity measurements, 22,500 gridded magnetic data points, and petrographic core data from several wells were used to investigate the subsurface structural features both vertically and laterally. A third-order polynomial separation of the regional trend from the original Bouguer gravity data was chosen. The residual gravity map reveals a significant number of highs distributed across the area, separated by a group of depocenters with thick sediments. The reduction-to-the-pole magnetic map shows nearly the same major trends and anomalies. Applying further interpretation filters reveals that these highs are sourced from different depth levels: some are deep-rooted, and others are igneous bodies intruded within the sedimentary layers. A petrographic sedimentological study of some wells in the area confirmed the presence of these igneous bodies and identified their composition as most likely gabbro hosted by marine shale layers. Depth investigation of these anomalies using the average depth spectrum shows that the average basement depth is about 7.7 km, while the top of the intrusions lies at about 2.65 km, and some near-surface magnetic sources lie at about 1.86 km. The depth values of the magnetic anomalies and their locations were estimated specifically using the 3D Euler deconvolution technique. The obtained results suggest that the maximum depth of the sources is about 4,938 m. The total horizontal gradient of the magnetic data shows that the trends mostly extend NW-SE, others NE-SW, and a third group N-S. This variety in trend direction shows that the area experienced different tectonic regimes throughout its geological history.
Keywords: Sirt Basin, tectonics, gravity, magnetic
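The third-order regional-residual separation mentioned above amounts to fitting a degree-3 polynomial trend surface to the Bouguer anomaly by least squares and subtracting it. A minimal sketch follows; the station coordinates and anomaly values are synthetic, not the survey data.

```python
import numpy as np

def residual_gravity(x, y, g, order=3):
    """Regional-residual separation: fit an order-n polynomial trend surface
    to the Bouguer anomaly by least squares and subtract it."""
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1 - i)]        # x^i * y^j, i + j <= n
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    regional = A @ coef
    return g - regional                                   # residual anomaly

rng = np.random.default_rng(3)
x, y = rng.uniform(0, 50, 500), rng.uniform(0, 50, 500)   # station coords [km]
g = 0.02 * x - 0.01 * y + 1e-4 * x * y + rng.normal(0, 0.05, 500)  # mGal, synthetic
print(residual_gravity(x, y, g)[:5])
```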
Procedia PDF Downloads 66
43 Atmospheric Polycyclic Aromatic Hydrocarbons (PAHs) in Rural and Urban of Central Taiwan
Authors: Shih Yu Pan, Pao Chen Hung, Chuan Yao Lin, Charles C.-K. Chou, Yu Chi Lin, Kai Hsien Chi
Abstract:
This study analyzed 16 atmospheric PAH species that are regulated by the USEPA and IARC. To measure the PAH concentrations, four rural sampling sites and two urban sampling sites were selected in central Taiwan during spring and summer. The rural sampling stations were located downstream of the Da-An River, the Da-Jang River, the Wu River, and the Chuo-shui River, while the urban sampling sites were located in the Taichung district, close to the roadside. Ambient air samples of both the vapor-phase and particle-phase PAH compounds were collected using high-volume sampling trains (Analitica). The sampling media were polyurethane foam (PUF) with XAD-2 and quartz fiber filters. Diagnostic ratios, principal component analysis (PCA), and Positive Matrix Factorization (PMF) models were used to evaluate the apportionment of PAHs in the atmosphere and to infer the relative contributions of the various emission sources. Because of the high temperature and low wind speed, high PAH concentrations were observed in the atmosphere. The total PAH concentration, especially in the vapor phase, changed significantly in summer. During the sampling periods, the total atmospheric PAH concentrations at the four rural and two urban sampling sites were 3.70±0.40, 3.40±0.63, 5.22±1.24, 7.23±0.37, 7.46±2.36, and 6.21±0.55 ng/m³ in spring, and 15.0±0.14, 18.8±8.05, 20.2±8.58, 16.1±3.75, 29.8±10.4, and 35.3±11.8 ng/m³ in summer, respectively. In order to identify the PAH sources, diagnostic ratios were used to classify the emission sources: the potential dominant sources were diesel combustion in spring and gasoline combustion in summer. According to the principal component analysis (PCA), PC1 and PC2 explained 23.8% and 20.4% of the variance in spring and 21.3% and 17.1% in summer, respectively. High-molecular-weight PAHs (BaP, IND, BghiP, Flu, Phe, Flt, Pyr) dominated in spring, whereas low-molecular-weight PAHs (AcPy, Ant, Acp, Flu) dominated in summer because of the prevailing high temperatures. Analysis using the PMF model found that the sources of PAHs in spring were stationary sources (34%), vehicle emissions (24%), coal combustion (23%), and petrochemical fuel gas (19%), while in summer the emission sources were petrochemical fuel gas (34%), volatile organic compounds from the natural environment (29%), coal combustion (19%), and stationary sources (18%).
Keywords: PAHs, source identification, diagnostic ratio, principal component analysis, positive matrix factorization
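Two of the source-identification steps named above can be sketched briefly: diagnostic ratios between isomer pairs, and PCA on standardized concentrations. The ratio cutoffs below are indicative values from the general literature, and the concentration data are synthetic, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def diagnostic_ratios(c):
    """Common PAH diagnostic ratios (thresholds vary across the literature)."""
    r1 = c["Flt"] / (c["Flt"] + c["Pyr"])   # > 0.5 often read as coal/biomass burning
    r2 = c["IND"] / (c["IND"] + c["BghiP"]) # > 0.5 combustion, < 0.2 petrogenic
    return r1, r2

sample = {"Flt": 0.42, "Pyr": 0.51, "IND": 0.30, "BghiP": 0.35}   # ng/m3, synthetic
print(diagnostic_ratios(sample))

# Source-apportionment step: PCA on standardised concentrations of the 16 PAHs
X = np.random.rand(24, 16)                       # 24 samples x 16 PAH species
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
print("explained variance:", pca.explained_variance_ratio_)
```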
Procedia PDF Downloads 267
42 Evaluation of the Phenolic Composition of Curcumin from Different Turmeric (Curcuma longa L.) Extracts: A Comprehensive Study Based on Chemical Turmeric Extract, Turmeric Tea and Fresh Turmeric Juice
Authors: Beyza Sukran Isik, Gokce Altin, Ipek Yalcinkaya, Evren Demircan, Asli Can Karaca, Beraat Ozcelik
Abstract:
Turmeric (Curcuma longa L.) is used as a food additive (spice), preservative, and coloring agent in Asian countries, including China and Southeast Asia, and is also considered a medicinal plant. Traditional Indian medicine values turmeric powder for the treatment of biliary disorders, rheumatism, and sinusitis. It has a rich polyphenol content. Turmeric owes its yellow color mainly to the presence of three major pigments: curcumin (1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione), demethoxycurcumin, and bisdemethoxycurcumin. These curcuminoids are recognized to have high antioxidant activities, and curcumin is the major constituent of Curcuma species. Method: To prepare turmeric tea, 0.5 g of turmeric powder was brewed with 250 mL of water at 90°C for 10 minutes. For the juice, 500 g of fresh turmeric was washed and peeled prior to squeezing. Both the turmeric tea and the turmeric juice were passed through 45 μm filters and stored at -20°C in the dark for further analyses. Curcumin was extracted from 20 g of turmeric powder with 70 mL of ethanol solution (95:5 ethanol/water v/v) in a water bath at 80°C for 6 hours; the extraction was continued for a further 2 hours after the addition of 30 mL of ethanol. The ethanol was removed on a rotary evaporator, and the remaining extract was stored at -20°C in the dark. The total phenolic content and the phenolic profile were determined by spectrophotometric analysis and ultra-fast liquid chromatography (UFLC), respectively. Results: The total phenolic contents of the ethanolic extract of turmeric, the turmeric juice, and the turmeric tea were 50.72, 31.76, and 29.68 ppt, respectively. The three preparations were then injected into the UFLC and analyzed for their curcumin contents, which were 4067.4 ppm, 156.7 ppm, and 1.1 ppm, respectively. Significance: Turmeric is known as a good source of curcumin. According to the results, turmeric tea is not a sufficient vehicle for curcumin consumption, and turmeric juice should be preferred to turmeric tea for its higher curcumin content. The ethanolic extract of turmeric showed the highest curcumin content in both the spectrophotometric and the chromatographic analyses. Nonpolar solvents and carriers that have polar binding sites should be considered for curcumin delivery because of its nonpolar nature.
Keywords: phenolic compounds, spectrophotometry, turmeric, UFLC
Procedia PDF Downloads 200
41 Limbic Involvement in Visual Processing
Authors: Deborah Zelinsky
Abstract:
The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals serve eyesight (called "image-forming" signals). However, there are other, faster signals that travel "elsewhere" and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the Where am I (orientation), Where is It (localization), and What is It (identification) pathways. Now, among others, there is a How am I (animation) and a Who am I (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. These signals arise from the ipRGCs, which were discovered only 20+ years ago; current prescribing also does not account for the campana retinal interneurons, discovered only 2 years ago. As eyecare providers, we are unknowingly altering such factors as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing
Procedia PDF Downloads 85
40 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops
Authors: Simon Komesker, Achim Wagner, Martin Ruskowski
Abstract:
In times of volatile markets with fluctuating demand and uncertainty in global supply chains, flexible production systems are the key to the efficient implementation of a desired production program. In this publication, the authors present a holistic information concept that takes into account various influencing factors for operating towards the global optimum. To this end, a strategy for the implementation of multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry is developed. The main contribution of this work is a system structure mixing central and decentral planning and control, evaluated in a simulation framework. The information system structure of current production systems in the automotive industry is rigidly and hierarchically organized in monolithic systems. The production program is created rule-based, with the premise of achieving a uniform cycle time, and then provides the information basis for execution in subsystems at the station and process execution levels. In today's era of mixed-(car-)model factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feeding results back from the process execution level (resources) and the process-supporting (quality and logistics) systems for reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure, which results, for example, in material-intensive processes (buffers and safety stocks, following the two-container principle even for different variants). The limited degrees of freedom of line production have produced the principle of progress figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems with more degrees of freedom, such as modular production according to known approaches, are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is applied, and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of the quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent system. The result is an alternative system structure that executes centralized process planning and decentralized processing. Agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems that lead to a near-global optimum for hybrid planning. This allows robust, near-to-plan execution with integrated quality control and intralogistics.Keywords: holonic manufacturing system, modular production system, planning and control, system structure
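To make the mixed central/decentral loop concrete, here is a minimal, self-contained sketch. It is not the authors' implementation: all class names, key figures, and the bidding scheme are invented for illustration. A central planner releases orders, and semi-autonomous resource agents score them against local load plus quality and logistics penalties, with the order dispatched to the best composite score.

```python
# Minimal sketch of a hybrid (central planning / decentral execution)
# agent loop. All names and the scoring scheme are illustrative
# assumptions, not the system structure evaluated in the paper.
from dataclasses import dataclass
import random

@dataclass
class Order:
    order_id: int
    variant: str

class ResourceAgent:
    """Semi-autonomous station agent bidding on released orders."""
    def __init__(self, name: str):
        self.name = name
        self.queue = []

    def bid(self, order: Order) -> float:
        # Composite key figure: local load plus penalties that the
        # quality and logistics "holons" would report (randomized
        # placeholders here).
        load = len(self.queue)
        quality_penalty = random.uniform(0.0, 1.0)
        logistics_penalty = random.uniform(0.0, 1.0)
        return load + quality_penalty + logistics_penalty

    def accept(self, order: Order) -> None:
        self.queue.append(order)

class CentralPlanner:
    """Central process planning: releases orders, delegates execution."""
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, order: Order) -> str:
        # Decentral decision: lowest composite key figure wins the order.
        best = min(self.agents, key=lambda a: a.bid(order))
        best.accept(order)
        return best.name

if __name__ == "__main__":
    agents = [ResourceAgent(f"cell_{i}") for i in range(3)]
    planner = CentralPlanner(agents)
    for i, variant in enumerate(["sedan", "coupe", "sedan", "wagon"]):
        station = planner.dispatch(Order(i, variant))
        print(f"order {i} ({variant}) -> {station}")
```

In this toy version the feedback loop the abstract calls for is the bid itself: execution-level state flows back into each dispatch decision instead of being fixed once by a rigid production program.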
Procedia PDF Downloads 169
39 Development of a Fire Analysis Drone for Smoke Toxicity Measurement for Fire Prediction and Management
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
This research presents the design and creation of a drone gas analyser aimed at addressing the need for independent data collection and analysis of gas emissions during large-scale fires, particularly wasteland fires. The analyser drone, comprising a lightweight gas analysis system attached to a remote-controlled drone, enables the real-time assessment of smoke toxicity and the monitoring of gases released into the atmosphere during such incidents. The key components of the analyser unit include two gas line inlets connected to glass wool filters, a pump with flow regulated by a mass flow controller, and electrochemical cells for detecting nitrogen oxides, hydrogen cyanide, and oxygen levels. Additionally, a non-dispersive infrared (NDIR) analyser is employed to monitor carbon monoxide (CO), carbon dioxide (CO₂), and hydrocarbon concentrations. Thermocouples can be attached to the analyser to monitor temperature, and McCaffrey probes combined with pressure transducers can be added to monitor air velocity and wind direction. These additions allow the large fire itself to be monitored and can be used to predict fire spread. The innovative system not only provides crucial data for assessing smoke toxicity but also contributes to fire prediction and management. The remote-controlled drone's mobility allows for safe and efficient data collection in proximity to the fire source, reducing the need for human exposure to hazardous conditions. The data obtained from the gas analyser unit facilitates informed decision-making by emergency responders, aiding in the protection of both human health and the environment. This abstract highlights the successful development of a drone gas analyser, illustrating its potential for enhancing smoke toxicity analysis and fire prediction capabilities. The integration of this technology into fire management strategies offers a promising solution for addressing the challenges associated with wildfires and other large-scale fire incidents. The project's methodology and results contribute to the growing body of knowledge in the field of environmental monitoring and safety, emphasizing the practical utility of drones for critical applications.Keywords: fire prediction, drone, smoke toxicity, analyser, fire management
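As a worked example of the velocity measurement mentioned above: a bidirectional (McCaffrey-type) probe yields a differential pressure Δp from which gas velocity follows as v = (1/K)·sqrt(2Δp/ρ), with the gas density ρ obtained from the ideal gas law using the co-located thermocouple reading. The probe constant K ≈ 1.08 is a typical literature value, and all numbers below are illustrative assumptions, not values from the study.

```python
# Sketch: gas velocity from a bidirectional (McCaffrey-type) probe.
# Assumptions: probe constant K ~ 1.08 (typical literature value),
# ambient pressure, air treated as an ideal gas; example numbers are
# illustrative, not measurements from the paper.
import math

R_SPECIFIC_AIR = 287.05   # J/(kg*K), specific gas constant of air
K_PROBE = 1.08            # bidirectional probe calibration constant

def gas_density(pressure_pa: float, temp_k: float) -> float:
    """Ideal-gas density of air, kg/m^3."""
    return pressure_pa / (R_SPECIFIC_AIR * temp_k)

def probe_velocity(dp_pa: float, temp_k: float,
                   pressure_pa: float = 101325.0) -> float:
    """Velocity (m/s) from differential pressure dp (Pa); the sign
    of dp gives the flow direction along the probe axis."""
    rho = gas_density(pressure_pa, temp_k)
    return math.copysign(
        math.sqrt(2.0 * abs(dp_pa) / rho) / K_PROBE, dp_pa)

# Example: 5 Pa differential pressure in a 600 K plume.
print(f"{probe_velocity(5.0, 600.0):.2f} m/s")
```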
Procedia PDF Downloads 89
38 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; our goal is to find groups of frames based on visual similarity within the video. Applying K-means clustering to all the frames is time-consuming; therefore, we started by identifying transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames only. We used two image filters, the Gaussian filter and the Laplacian of Gaussian, each of which extracts a set of features from the frames. The Gaussian filter blurs the image and omits the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change; we then used this vector of filter responses as the input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then computed a cluster score indicating how near the clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change while semantically related events were happening in the scene. We marked the breakpoints at which the root mean square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part one, we randomly selected a 16-frame clip and extracted spatiotemporal features using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence we used principal component analysis (PCA) for dimensionality reduction. The final part is the classification: the C3D feature vectors are used as input to train a multi-class linear support vector machine (SVM), which detects the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and we achieved an accuracy that outperforms the state of the art by 1.2%.Keywords: video segmentation, action detection, classification, Kmeans, C3D
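A minimal sketch of the filter-response clustering step described above, using scipy and scikit-learn; the library choices, filter sigmas, and number of clusters are assumptions, since the abstract does not name an implementation.

```python
# Sketch: per-pixel Gaussian / Laplacian-of-Gaussian responses
# clustered with K-means to build the visual map described above.
# Filter sigmas and k are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

def visual_map(frame: np.ndarray, k: int = 5) -> np.ndarray:
    """frame: 2D grayscale array -> 2D array of cluster labels."""
    responses = np.stack([
        gaussian_filter(frame, sigma=2.0),    # blur: keeps low frequencies
        gaussian_laplace(frame, sigma=2.0),   # rapid intensity changes
    ], axis=-1)
    pixels = responses.reshape(-1, responses.shape[-1])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
    return labels.reshape(frame.shape)

# Example on a random "frame"; real input would be a video frame.
frame = np.random.rand(120, 160).astype(np.float32)
labels = visual_map(frame)
print(labels.shape, np.unique(labels))
```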
Procedia PDF Downloads 77
37 Indirect Genotoxicity of Diesel Engine Emission: An in vivo Study Under Controlled Conditions
Authors: Y. Landkocz, P. Gosset, A. Héliot, C. Corbière, C. Vendeville, V. Keravec, S. Billet, A. Verdin, C. Monteil, D. Préterre, J-P. Morin, F. Sichel, T. Douki, P. J. Martin
Abstract:
Air pollution produced by automobile traffic is one of the main sources of pollutants in the urban atmosphere and is largely due to the exhaust of diesel engine powered vehicles. The International Agency for Research on Cancer, which is part of the World Health Organization, classified diesel engine exhaust in 2012 as carcinogenic to humans (Group 1), based on sufficient evidence that exposure is associated with an increased risk for lung cancer. Among the strategies aimed at limiting exhaust emissions in view of the health impact of automobile pollution, filtration of the emissions and the use of biofuels are being developed, but their toxicological impact is largely unknown. Diesel exhausts are indeed complex mixtures of toxic substances that are difficult to study from a toxicological point of view, due to the necessary characterization of the pollutants, sampling difficulties, potential synergies between the compounds, and the wide variety of biological effects. Here, we studied the potential indirect genotoxicity of diesel engine emissions through on-line exposure of rats in inhalation chambers to a subchronic, high but realistic dose. Following exposure to standard gasoil +/- rapeseed methyl ester, either upstream or downstream of a particle filter, or control treatment, rats were sacrificed and their lungs collected. The following indirect genotoxic parameters were measured: (i) telomerase activity and telomere length, associated with rTERT and rTERC gene expression by RT-qPCR, on frozen lungs; (ii) γH2AX quantification, representing double-strand DNA breaks, by immunohistochemistry on formalin-fixed, paraffin-embedded (FFPE) lung samples. These preliminary results will then be associated with the global cellular response analyzed by pan-genomic microarrays, the monitoring of oxidative stress, and the quantification of primary DNA lesions, in order to identify biological markers associated with a potential pro-carcinogenic response to diesel or biodiesel, with or without filters, in a relevant in vivo exposure system.Keywords: diesel exhaust exposed rats, γH2AX, indirect genotoxicity, lung carcinogenicity, telomerase activity, telomeres length
Procedia PDF Downloads 390
36 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution
Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques
Abstract:
The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. The radiotherapy planning of patients is performed by complex software packages focused on the analysis of tumor regions, the protection of organs at risk (OARs), and the calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally lack the monetary conditions to acquire radiotherapy planning workstations, resulting in waiting queues for new patients' treatment. The SIPRAD project is composed of a set of integrated and interoperable software tools that are able to execute all stages of radiotherapy planning on simple personal computers (PCs), replacing the workstations. The goal of this work is to present a computationally feasible image processing technique that is able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique works on grayscale tomography slices, extended to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. In addition, when comparing SIPRAD-Body with existing solutions, a contour similarity of at least 78% is reached. Four criteria are used for this comparison: contour area, contour length, the distance between mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP). The exams came from patients of different ethnicities, ages, tumor severities, and body regions. Even for services that already have workstations, it is possible to run SIPRAD on PCs alongside them, because the interoperability of communication between both systems through the DICOM protocol provides an increase in workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible both for new radiotherapy planning services and for existing ones.Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)
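A minimal sketch of the four comparison criteria named above, computed on binary contour masks. The numpy/scikit-image tooling and helper names are assumptions; the abstract does not specify the project's implementation.

```python
# Sketch: the four contour-similarity criteria on binary masks.
# numpy/scikit-image are assumed tooling, not the project's own code.
import numpy as np
from skimage.measure import perimeter

def compare_contours(a: np.ndarray, b: np.ndarray) -> dict:
    """a, b: 2D boolean masks of the same slice."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    ca = np.argwhere(a).mean(axis=0)   # mass center of mask a
    cb = np.argwhere(b).mean(axis=0)   # mass center of mask b
    return {
        "area_a": int(a.sum()), "area_b": int(b.sum()),
        "perimeter_a": perimeter(a), "perimeter_b": perimeter(b),
        "centroid_dist": float(np.linalg.norm(ca - cb)),
        "jaccard": inter / union if union else 1.0,
    }

# Example: two overlapping squares stand in for auto vs. reference ROI.
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[15:45, 15:45] = True
print(compare_contours(a, b))
```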
Procedia PDF Downloads 296
35 A Green Optically Active Hydrogen and Oxygen Generation System Employing Terrestrial and Extra-Terrestrial Ultraviolet Solar Irradiance
Authors: H. Shahid
Abstract:
Due to ozone layer depletion on Earth, incoming ultraviolet (UV) radiation is recorded at high index levels, such as 25 in southern Peru (13.5° S, 3360 m a.s.l.). The planned human habitation of Mars, where UV radiation is also quite high, is likewise under discussion. Exposure to UV is hazardous to health and is avoided by UV filters. On the other hand, artificial UV sources are in use for water thermolysis to generate hydrogen and oxygen, which are later used as fuels. This paper presents the utility of employing the UVA (315-400 nm) and UVB (280-315 nm) electromagnetic radiation from the solar spectrum to design and implement an optically active hydrogen and oxygen generation system via thermolysis of desalinated seawater. The proposed system finds its utility on Earth and could in the future be deployed on Mars (UVB). In this system, using Fresnel lens arrays as an optical filter and active tracking, the ultraviolet light from the sun is concentrated and then allowed to fall on the two subsystems of the proposed system. The first subsystem generates electrical energy using UV-based tandem photovoltaic cells such as GaAs/GaInP/GaInAs/GaInAsP, and the second elevates the temperature of the water to lower the electric potential required to electrolyze it. An empirical analysis was performed at 30 atm, and the electrical potential was observed to be the main controlling factor for the production rate of hydrogen and oxygen and hence for the operating point (Q-point) of the proposed system. The hydrogen production rate of a commercial system in static mode (650°C, 0.6 V) is taken as a reference. A solid oxide electrolyzer cell (SOEC) is used in the proposed (UV) system for the hydrogen and oxygen production. To achieve the same amount of hydrogen as the reference system at a minimum chamber operating temperature of 850°C in static mode, the corresponding required electrical potential was calculated as 0.3 V. In practice, however, the hydrogen production rate at 850°C and 0.3 V was observed to be low in comparison to the reference system. It was shown empirically that raising the electrical potential to 0.45 V enhances hydrogen production and brings the production rate to the same level as that of the reference system. Therefore, 850°C and 0.45 V are assigned as the Q-point of the proposed system, which is actively stabilized via proportional-integral-derivative (PID) controllers that adjust the axial position of the lens arrays for both subsystems. The controllers work by holding the chamber at 850°C (the minimum operating temperature) and 0.45 V, the Q-point, to realize the same hydrogen production rate as the reference system.Keywords: hydrogen, oxygen, thermolysis, ultraviolet
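A minimal sketch of the stabilization loop described above: two independent PID controllers drive the lens-array positions to hold the chamber at the Q-point (850°C, 0.45 V). The gains, update rate, and the crude first-order plant response are illustrative assumptions, not values from the paper.

```python
# Sketch: PID stabilization of the Q-point (850 C, 0.45 V) by
# adjusting lens-array axial positions. Gains and the toy plant
# response are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# One controller per subsystem: chamber temperature and cell potential.
temp_pid = PID(kp=0.004, ki=0.0005, kd=0.0005, setpoint=850.0)  # deg C
volt_pid = PID(kp=1.0, ki=0.05, kd=0.02, setpoint=0.45)         # volts

temp, volt = 800.0, 0.40   # toy initial plant state
for step in range(5):
    # Controller outputs move the lens arrays; the plant response
    # below is a crude stand-in for the real optics and chamber.
    temp += 100.0 * temp_pid.update(temp)
    volt += 0.5 * volt_pid.update(volt)
    print(f"step {step}: T={temp:.1f} C, V={volt:.3f} V")
```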
Procedia PDF Downloads 133
34 Molecular Detection of E. coli in Treated Wastewater and Well Water Samples Collected from Al Riyadh Governorate, Saudi Arabia
Authors: Hanouf A. S. Al Nuwaysir, Nadine Moubayed, Abir Ben Bacha, Islem Abid
Abstract:
Consumption of wastewater continues to cause significant problems for human health in both developed and developing countries. Many regulations have been implemented by different world authorities to control water quality, with coliforms used as standard indicators of water quality deterioration and as a historically leading health protection concept. In this study, the European directive for the detection of Escherichia coli, ISO 9308-1, was applied to examine and monitor coliforms in water samples collected from Wadi Hanifa and neighboring wells, Riyadh governorate, Kingdom of Saudi Arabia, which are used for irrigation and industrial purposes. Samples were taken from different locations for 8 consecutive months; chlorine concentrations, ranging from 0.1 to 0.4 mg/l, were determined using the DPD FREE CHLORINE HACH kit. Water samples were then analyzed following the ISO protocol, which relies on the membrane filtration technique (0.45 µm pore size membrane filter) and a chromogenic medium, TTC, a lactose-based medium used for the detection and enumeration of total coliforms and E. coli. Data showed that the number of bacterial isolates ranged from 60 colonies/100 ml for the well samples to 300 colonies/100 ml for the surface water samples, with the higher numbers attributed to the surface samples. Organisms that apparently ferment lactose on TTC agar plates, appearing as orange colonies, were selected and additionally cultured on EMB and MacConkey agar for further differentiation between E. coli and other coliform bacteria. Two additional biochemical tests (cytochrome oxidase and indole from tryptophan) were also performed to detect and differentiate E. coli from other coliforms; E. coli was identified in an average of 5 to 7 colonies among 25 selected colonies. On the other hand, a more rapid, specific, and sensitive analytical molecular detection, namely single colony PCR targeting the hha gene, was also performed to detect E. coli sensitively, giving a more accurate and less time-consuming identification of colonies considered presumptively as E. coli. Comparative methodologies, such as ultrafiltration and direct DNA extraction from membrane filters (MoBio, Germany), were also applied; however, the results were not as accurate as those of membrane filtration, making it the technique of choice for the detection and enumeration of water coliforms, followed by a sufficiently specific enzymatic confirmatory stage.Keywords: coliform, cytochrome oxidase, hha primer, membrane filtration, single colony PCR
Procedia PDF Downloads 318