Search results for: reconfigurable filters
93 Contribution of Spatial Teledetection to the Geological Mapping of the Imiter Buttonhole: Application to the Mineralized Structures of the Principal Corps B3 (CPB3) of the Imiter Mine (Anti-atlas, Morocco)
Authors: Bouayachi Ali, Alikouss Saida, Baroudi Zouhir, Zerhouni Youssef, Zouhair Mohammed, El Idrissi Assia, Essalhi Mourad
Abstract:
The world-class Imiter silver deposit is located on the northern flank of the Precambrian Imiter buttonhole. The deposit is formed by epithermal veins hosted in the sandstone-pelite formations of the lower complex and in the basal conglomerates of the upper complex; these veins are controlled by a regional-scale fault cluster oriented N70°E to N90°E. The present work addresses the contribution of remote sensing to the geological mapping of the Imiter buttonhole and its application to the mineralized structures of the Principal Corps B3. Mapping on satellite images is a very important tool in mineral prospecting. It allows the localization of zones of interest in order to orient field missions, helping to locate the major structures and thereby facilitating the interpretation, programming and orientation of the mining works. The predictive map also allows for the correction of field mapping work, especially the direction and dimensions of structures such as dykes, corridors or scrapings. The use of a series of processing techniques such as SAM, PCA, MNF and unsupervised and supervised classification on a Landsat 8 satellite image of the study area allowed us to highlight the main facies of the Imiter area. To improve the exploration research, we used further processing to map the spatial distribution of alteration mineral indices, and applied several filters on the different bands to produce lineament maps.
Keywords: principal corps B3, teledetection, Landsat 8, Imiter II, silver mineralization, lineaments
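The processing chain listed above (PCA, MNF, SAM, classification) is standard multispectral image analysis. As one illustration, a minimal PCA over a Landsat-style band stack can be sketched as follows; the toy three-band array and the function name `pca_bands` are illustrative assumptions, not the study's code or data:

```python
import numpy as np

def pca_bands(stack):
    """PCA on a (bands, pixels) image stack: returns the component variances
    (eigenvalues) in decreasing order and the principal-component images."""
    X = stack - stack.mean(axis=1, keepdims=True)  # center each band
    cov = np.cov(X)                                # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)         # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]              # strongest component first
    return eigvals[order], eigvecs[:, order].T @ X

# toy 3-band "image" of 5 pixels: bands 1 and 2 are correlated, band 3 is flat
stack = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                  [2.0, 4.0, 6.0, 8.0, 10.0],
                  [5.0, 5.0, 5.0, 5.0, 5.0]])
variances, pcs = pca_bands(stack)  # nearly all variance lands in PC1
```

On real imagery the same call would receive the reflectance bands reshaped to (bands, rows × cols); PC1 typically concentrates the shared brightness signal, leaving subtler lithological contrasts in the later components.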
Procedia PDF Downloads 95
92 Removal of Lead Ions from Aqueous Medium Using Devised Column Filters Packed with Chitosan from Trash Crab Shells: A Characterization Study
Authors: Charles Klein O. Gorit, Mark Tristan J. Quimque Jr., M. Cecilia V. Almeda, Concepcion M. Salvana
Abstract:
Chitosan is a promising biopolymer commonly found in crustacean shells that has plausible effects in water purification and wastewater treatment. It is a primary derivative of chitin and considered the second most abundant biopolymer after cellulose. Morphological analysis was done using Scanning Electron Microscopy with Energy Dispersive Spectroscopy (SEM/EDS); the images show a porous surface, and hence a larger adsorption site for heavy metals. The Energy Dispersive Spectroscopy of the chitosan and ‘lead-bound’ chitosan shows a relative increase in the percent abundance of the lead cation from 1.44% to 2.08%; hence, adsorption occurs. Chitosan, as a nitrogenous polysaccharide, subjected to Fourier transform infrared spectroscopy (FTIR) analysis shows amide bands at 1635.36 cm⁻¹ for the amide 1 band and 1558.40 cm⁻¹ for the amide 2 band with NH stretching. For ‘lead-bound’ chitosan, the FT-IR analysis shows a change in peaks upon adsorption of the Pb(II) cation: the spectrum shows broadening of the OH and NH stretching bands. This observation can be attributed to the probability that the Pb(II) ions attach at these functional groups. A column filter was devised with lead-bound chitosan to determine the zero point charge (pHzpc) of the biopolymer. The results show that a pH of 8.34, relative to the pHzpc values cited in the literature for lead (which range from pH 4 to 7), favors the adsorption sites of chitosan and its capability to adsorb trace amounts of aqueous lead.
Keywords: chitosan, biopolymer, FT-IR, SEM, zero-point charge, heavy metal, lead ions
Procedia PDF Downloads 151
91 Assessment and Control for Oil Aerosol
Authors: Chane-Yu Lai, Xiang-Yu Huang
Abstract:
This study assessed sampling results from a newly developed rotation filtration device (RFD) filled with porous media filters, integrating the method of cyclone centrifugal spins. The testing system established for the experiment used corn oil and potassium sodium tartrate tetrahydrate (PST) as challenge aerosols, produced using an Ultrasonic Atomizing Nozzle, a Syringe Pump, and a Collison nebulizer. The collection efficiency of the RFD for oil aerosol was assessed using an Aerodynamic Particle Sizer (APS) and a Fidas® Frog. For liquid particles, the cutoff size was 1.65 µm and 1.02 µm at rotations of 0 rpm and 9000 rpm, respectively, under an 80 PPI (pores per inch) foam with a thickness of 80 mm and a sampling velocity of 13.5 cm/s. As the foam thickness of the RFD was increased, the cutoff size fell from 1.62 µm to 1.02 µm. When the foam porosity was increased, the cutoff size fell from 1.26 µm to 0.96 µm. Moreover, as the sampling velocity was increased, the cutoff size fell from 1.02 µm to 0.76 µm. All of these differences in cutoff size were statistically significant (P < 0.05). The cutoff size of the RFD for the three experimental conditions of generated liquid oil particles, solid PST particles, or both liquid oil and solid PST particles was 1.03 µm, 1.02 µm, or 0.99 µm, respectively, under an 80 PPI foam with a thickness of 80 mm, rotation of 9000 rpm, and sampling velocity of 13.5 cm/s. In addition, under the best condition of the experiment, with two hours of sampling loading, the RFD had better collection efficiency for particle diameters greater than 0.45 µm, under a 94 PPI nickel mesh with a thickness of 68 mm, rotation of 9000 rpm, and sampling velocity of 108.3 cm/s.
The experiment concluded that increasing the thickness of the porous media, the face velocity, and the porosity of the porous media of the RFD increased the collection efficiency of the porous media for sampling oil particles. Increasing the rotation speed of the RFD also increased the collection efficiency for sampling oil particles. Further investigation of the above operating parameters of the RFD is required in future work.
Keywords: oil aerosol, porous media filter, rotation, filtration
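The cutoff sizes quoted above are the 50%-collection-efficiency (d50) points of measured efficiency curves. A minimal way to read d50 off such a curve is linear interpolation; the efficiency values below are made-up illustrations, not the study's measurements:

```python
import numpy as np

def cutoff_d50(diameters_um, efficiencies):
    """Interpolate the cut-point diameter (50% collection efficiency) from
    an efficiency curve that rises monotonically with particle size."""
    return float(np.interp(0.5, efficiencies, diameters_um))

# hypothetical efficiency curve for one foam/rotation configuration
d50 = cutoff_d50([0.5, 1.0, 1.5, 2.0], [0.10, 0.40, 0.60, 0.90])  # -> 1.25 µm
```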
Procedia PDF Downloads 404
90 Non-Uniform Filter Banks-Based Minimum Distance to Riemannian Mean Classification in Motor Imagery Brain-Computer Interface
Authors: Ping Tan, Xiaomeng Su, Yi Shen
Abstract:
The motion intention in the motor imagery brain-computer interface is identified by classifying the event-related desynchronization (ERD) and event-related synchronization (ERS) characteristics of the sensorimotor rhythm (SMR) in EEG signals. When the subject imagines different limbs or different parts moving, the rhythm components and bandwidth change, and this varies from person to person. Finding the effective sensorimotor frequency band of a subject is directly related to the classification accuracy of the brain-computer interface. To solve this problem, this paper proposes a Minimum Distance to Riemannian Mean classification method based on non-uniform filter banks. During the training phase, the EEG signals are first decomposed into multiple signals of different bandwidths using multiple band-pass filters; then the spatial covariance characteristics of each frequency band signal are computed as the feature vectors. These feature vectors are classified by the MDRM (Minimum Distance to Riemannian Mean) method, and cross validation is employed to obtain the effective sensorimotor frequency bands. During the test phase, the test signals are filtered by the band-pass filters of the effective sensorimotor frequency bands, and the extracted spatial covariance feature vectors are classified using the MDRM. Experiments on the BCI competition IV 2a dataset show that the proposed method is superior to other classification methods.
Keywords: non-uniform filter banks, motor imagery, brain-computer interface, minimum distance to Riemannian mean
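The MDRM decision rule itself is compact: each class is summarized by a Riemannian mean covariance matrix, and a trial is assigned to the nearest mean under the affine-invariant distance. A sketch, assuming the class mean matrices have already been estimated (the mean computation and filter-bank selection steps are omitted):

```python
import numpy as np

def riemann_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, computed here from the
    generalized eigenvalues of the pair (A, B)."""
    lam = np.linalg.eigvals(np.linalg.solve(A, B)).real  # positive for SPD pairs
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

def mdrm_predict(trial_cov, class_means):
    """Assign a trial's spatial covariance to the nearest Riemannian mean."""
    return int(np.argmin([riemann_dist(trial_cov, M) for M in class_means]))
```

In the full method this rule is applied per frequency band of the filter bank, and cross-validation over bands picks out the subject's effective sensorimotor range.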
Procedia PDF Downloads 126
89 A Deleuzean Feminist Analysis of the Everyday, Gendered Performances of Teen Femininity: A Case Study on Snaps and Selfies in East London
Authors: Christine Redmond
Abstract:
This paper contributes to research on gendered, digital identities by exploring how selfies offer scope for disrupting and moving through gendered and racial ideals of feminine beauty. The selfie involves self-presentation, filters, captions, hashtags, online publishing, likes and more, making the relationship between subjectivity, practice and the social use of selfies a complex process. Employing qualitative research methods on youth selfies in the UK, the author investigates the interdisciplinary entanglements between studies of social media and fields within gender, media and cultural studies, providing a material-discursive treatment of the selfie as an embodied practice. Drawing on data collected from focus groups with teenage girls in East London, the study explores how girls experience and relate to selfies and snaps in their everyday lives. The author’s Deleuzean feminist approach suggests that bodies and selfies are not individual, disembodied entities between which there is a mediating inter-action. Instead, bodies and selfies are positioned as entangled to a point where it becomes unclear where a selfie ends and a body begins. Recognising selfies not just as images but as material and social assemblages opens up possibilities for unpacking the selfie in ways that move beyond the representational model in some studies of socially mediated digital images. The study reveals how the selfie functions to enable moments of empowerment within limiting, dominant ideologies of Euro-centrism, patriarchy and heteronormativity.
Keywords: affect theory, femininity, gender, heteronormativity, photography, selfie, snapchat
Procedia PDF Downloads 247
88 Application of Exhaust Gas-Air Brake System in Petrol and Diesel Engine
Authors: Gurlal Singh, Rupinder Singh
Abstract:
The possible role of the engine brake is to convert a power-producing engine into a power-absorbing retarding mechanism. In this braking system, exhaust gas (EG) from the internal combustion (IC) engine is used to operate the air brake in an automobile. The air brake is the most widely used braking system in vehicles. In the proposed model, instead of compressed air, EG is stored in a specially designed tank and used to operate the brake lever; the pressure of the EG operates the pneumatic cylinder and brake lever. Filters are used to remove impurities from the EG before it is stored in the tank. A pressure relief valve is used to maintain a specific pressure in the tank and helps avoid damage to the tank as well as to the engine. A petrol engine is used in the proposed EG braking system; it was chosen initially because it produces fewer impurities in the exhaust than diesel engines. Moreover, the exhaust brake system (EBS) for diesel engines is composed of a gate valve, a pneumatic cylinder, and an exhaust brake valve with an on-off solenoid. The exhaust brake valve, the core component of the EBS, should offer high reliability and long life. In a diesel engine, a butterfly valve in the exhaust manifold is connected to a solenoid switch that opens and closes the valve. When the butterfly valve is partially closed, pressure builds up inside the exhaust manifold and cylinder; this resists the movement of the piston, slowing the crankshaft and eventually stopping the flywheel. This creates a braking effect in a diesel engine. The exhaust brake is a supplementary braking system to the service brake. It is noted that the exhaust brake can extend the life of the service brake two- to three-fold, possibly due to the creation of negative torque that retards the engine speed. More study may also be warranted on the most suitable design of the exhaust brake in a diesel engine.
Keywords: exhaust gas, automobiles, solenoid, airbrake
Procedia PDF Downloads 260
87 Development of an Aerosol Protection Capsule for Patients with COVID-19
Authors: Isomar Lima da Silva, Aristeu Jonatas Leite de Oliveira, Roberto Maia Augusto
Abstract:
Biological isolation capsules are equipment commonly used in the control and prevention of infectious diseases in the hospital environment. This type of equipment, combined with pre-established medical protocols, contributes significantly to the containment of highly transmissible pathogens such as COVID-19. Due to its hermetic isolation, it allows greater patient safety, protecting companions and the health team. In this context, this work presents the development, testing, and validation of a medical capsule to treat patients affected by COVID-19. To this end, requirements such as low cost and easy handling were considered to meet the demand of people infected with the virus in remote locations in the Amazon region and/or where there are no ICU beds and mechanical ventilators for orotracheal intubation. Conceived and developed in a partnership between SAMEL Planos de Saúde and Instituto Conecthus, the device entitled "Vanessa Capsule" was designed to be used together with the NIV (non-invasive ventilation) protocol; it has an automatic exhaust system and filters performing the CO₂ exchange, in addition to BiPAP ventilatory support equipment (mechanical ventilators) in the cabin kit. The results show that the degree of effectiveness in protecting against infection by aerosols with the protection cabin is satisfactory, implying that the Vanessa capsule can be considered an auxiliary method to be evaluated by the health team. It should also be noted that medical observation of the evaluated patients found that treatment against the COVID-19 virus started earlier with non-invasive mechanical ventilation reduces the patient's suffering and contributes positively to recovery, in association with isolation through the Vanessa capsule.
Keywords: COVID-19, mechanical ventilators, medical capsule, non-invasive ventilation
Procedia PDF Downloads 84
86 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform
Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier
Abstract:
Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has faded amplitude due to multipath effects, which make an occupying user hard to detect. This work mitigates the missed-detection problem in the context of cognitive radio in a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. Conventional energy detection is applied first and the missed-detection probability is evaluated; if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution. A mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in the missed-detection probability compared to detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improved detection probability in low signal-to-noise scenarios over principal component analysis-based energy detection.
Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing
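The conventional energy detection used as the first stage can be sketched in a few lines; the threshold below uses a Gaussian approximation to the chi-square statistic and is an illustrative choice, not the paper's exact detector:

```python
import numpy as np

def energy_detect(x, noise_var, z=1.645):
    """Energy detector: declare the band occupied when the average sample
    energy exceeds a noise-floor threshold. z = 1.645 approximates a 5%
    false-alarm rate for large sample counts (Gaussian approximation)."""
    n = len(x)
    stat = np.mean(np.abs(x) ** 2)                  # average received energy
    threshold = noise_var * (1.0 + z / np.sqrt(n))  # approximate CFAR threshold
    return bool(stat > threshold)
```

When the primary user's signal is flattened by a frequency-selective fade, `stat` sinks toward `noise_var` and the detector misses; that is the regime where the proposed scattering-transform channel estimation and equalization are invoked.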
Procedia PDF Downloads 196
85 Solar-Blind Ni-Schottky Photodetector Based on MOCVD Grown ZnGa₂O₄
Authors: Taslim Khan, Ray Hua Horng, Rajendra Singh
Abstract:
This study presents a comprehensive analysis of the design, fabrication, and performance evaluation of a solar-blind Schottky photodetector based on ZnGa₂O₄ grown via MOCVD, utilizing Ni/Au as the Schottky electrode. ZnGa₂O₄, with its wide bandgap of 5.2 eV, is well suited for high-performance solar-blind photodetection applications. The photodetector demonstrates an impressive responsivity of 280 A/W, indicating exceptional sensitivity within the solar-blind ultraviolet band. One of the device's notable attributes is its high rejection ratio of 10⁵, which effectively filters out unwanted background signals, enhancing its reliability in various environments. The photodetector also boasts a photo-to-dark current ratio (PDCR) of 10⁷, showcasing its ability to detect even minor changes in incident UV light. Additionally, the device features an outstanding detectivity of 10¹⁸ Jones, underscoring its capability to precisely detect faint UV signals. It exhibits a fast response time of 80 ms and an ON/OFF ratio of 10⁵, making it suitable for real-time UV sensing applications. The noise-equivalent power (NEP) of 10⁻¹⁷ W/Hz further highlights its efficiency in detecting low-intensity UV signals. The photodetector also achieves a high forward-to-backward current rejection ratio of 10⁶, ensuring high selectivity. Furthermore, the device maintains an extremely low dark current of approximately 0.1 pA. These findings position the ZnGa₂O₄-based Schottky photodetector as a leading candidate for solar-blind UV detection applications, offering a compelling combination of sensitivity, selectivity, and operational efficiency for environments requiring precise and reliable UV detection.
Keywords: wide bandgap, solar blind photodetector, MOCVD, zinc gallate
Procedia PDF Downloads 39
84 Phylogenetic Differential Separation of Environmental Samples
Authors: Amber C. W. Vandepoele, Michael A. Marciano
Abstract:
Biological analyses frequently focus on single organisms; however, the biological sample often consists of more than the target organism. For example, human microbiome research targets bacterial DNA, yet most samples consist largely of human DNA, so there would be an advantage to removing these contaminating organisms. Conversely, some analyses focus on a single organism but would greatly benefit from additional information regarding the other organismal components of the sample. Forensic analysis is one such example: in most forensic casework, human DNA is targeted; however, it typically exists in complex, non-pristine sample substrates such as soil or unclean surfaces. These complex samples commonly comprise not just human tissue but also microbial and plant life, and these organisms may help gain more forensically relevant information about a specific location or interaction. This project aims to optimize a ‘phylogenetic’ differential extraction method that separates mammalian, bacterial, and plant cells in a mixed sample. This is accomplished through size exclusion separation, whereby the different cell types are separated through multiple filtrations using 5 μm filters. The components are then lysed via differential enzymatic sensitivities among the cells and extracted with minimal contribution from the preceding component. This extraction method allows complex DNA samples to be more easily interpreted through non-targeted sequencing, since the data will not be skewed toward the smaller and usually more numerous bacterial DNAs. This research project has demonstrated that the ‘phylogenetic’ differential extraction method successfully separates epithelial and bacterial cells from each other with minimal cell loss. We will take this one step further, showing that when plant cells are added to the mixture, they too can be separated and extracted from the sample.
Research is ongoing, and results are pending.
Keywords: DNA isolation, geolocation, non-human, phylogenetic separation
Procedia PDF Downloads 112
83 Energy Production with Closed Methods
Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani
Abstract:
In Kosovo, the electricity supply problem is huge, and supply does not meet consumer demand. Most of the energy is produced by older thermal power plants, which are regarded as big environmental polluters. Our experiment is based on the production of electricity using a closed method that avoids environmental pollution, by using as fuel waste that is otherwise considered a pollutant. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line was designed for electricity generation and central heating at the same time. The results are the benefits of electricity as well as the release of temperature for heating, with minimal expense and with the release of 0% gases into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows for the release of gas through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by gas filtration through waste from wood processing (sawdust). During this process, the final product, gas, is obtained; it passes through the carburetor, which enables the gas combustion process, drives the internal combustion engine and the generator, and produces electricity without releasing gases into the atmosphere. The obtained results show that the system provides energy stability without environmental pollution from toxic substances and waste, as well as low production costs. From the final results, it follows that coal fuel yielded the most electricity and the highest temperature release, followed by plastic waste, which also gave good results.
The results obtained during these experiments show that the current problems of lack of electricity and heating can be met at a lower cost, with a clean environment and proper waste management.
Keywords: energy, heating, atmosphere, waste, gasification
Procedia PDF Downloads 235
82 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These models include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for the classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation
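The relation vectors at the heart of the model capture how two sentence (or phrase) encodings interact. The abstract does not spell out the exact operator, but a common choice in entailment models, the element-wise product concatenated with the absolute difference, illustrates the idea; the 2-d vectors below are toy stand-ins for real encoder outputs:

```python
import numpy as np

def relation_vector(u, v):
    """Pairwise relation features for two sentence encodings: element-wise
    product (agreement) concatenated with absolute difference (contrast).
    A standard heuristic, not necessarily the paper's exact formulation."""
    return np.concatenate([u * v, np.abs(u - v)])

premise = np.array([1.0, 2.0])     # toy premise encoding
hypothesis = np.array([3.0, 4.0])  # toy hypothesis encoding
rel = relation_vector(premise, hypothesis)  # -> [3., 8., 2., 2.]
```

In the described architecture, one such vector comes from the convolutional phrase representations and a second from the Bi-LSTM outputs, and their combination then weights the Bi-LSTM states attention-style.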
Procedia PDF Downloads 348
81 A Wearable Fluorescence Imaging Device for Intraoperative Identification of Human Brain Tumors
Authors: Guoqiang Yu, Mehrana Mohtasebi, Jinghong Sun, Thomas Pittman
Abstract:
Malignant glioma (MG) is the most common type of primary malignant brain tumor. Surgical resection of MG remains the cornerstone of therapy, and the extent of resection correlates with patient survival. A limiting factor for resection, however, is the difficulty in differentiating the tumor from normal tissue during surgery. Fluorescence imaging is an emerging technique for real-time intraoperative visualization of MGs and their boundaries. However, most clinical-grade neurosurgical operative microscopes with fluorescence imaging ability are hampered by low adoption rates due to high cost, limited portability, limited operation flexibility, and lack of skilled professionals with technical knowledge. To overcome the limitations, we innovatively integrated miniaturized light sources, flippable filters, and a recording camera to the surgical eye loupes to generate a wearable fluorescence eye loupe (FLoupe) device for intraoperative imaging of fluorescent MGs. Two FLoupe prototypes were constructed for imaging of Fluorescein and 5-aminolevulinic acid (5-ALA), respectively. The wearable FLoupe devices were tested on tumor-simulating phantoms and patients with MGs. Comparable results were observed against the standard neurosurgical operative microscope (PENTERO® 900) with fluorescence kits. The affordable and wearable FLoupe devices enable visualization of both color and fluorescence images with the same quality as the large and expensive stationary operative microscopes. The wearable FLoupe device allows for a greater range of movement, less obstruction, and faster/easier operation. Thus, it reduces surgery time and is more easily adapted to the surgical environment than unwieldy neurosurgical operative microscopes.
Keywords: fluorescence guided surgery, malignant glioma, neurosurgical operative microscope, wearable fluorescence imaging device
Procedia PDF Downloads 66
80 Flocculation and Settling Rate Studies of Clean Coal Fines at Different Flocculants Dosage, pH Values, Bulk Density and Particle Size
Authors: Patel Himeshkumar Ashokbhai, Suchit Sharma, Arvind Kumar Garg
Abstract:
The results obtained from settling tests of coal fines are used as an important tool to select dewatering equipment such as thickeners, centrifuges and filters. Coal, being hydrophobic in nature, does not settle easily when mixed with water. Coal slurry that takes a long time to release water is highly undesirable because it poses additional challenges during sedimentation, centrifugation and filtration. If the filter cake has a higher than permitted moisture content, it not only creates handling problems but also inflates freight costs and reduces input and productivity for coke oven charges. It is to be noted that coal fines drastically increase the moisture percentage in the filter cake and hence are to be minimized. To increase the settling rate of coal fines in slurry, chemical substances called flocculants or coagulants are added, causing the coal particles to flocculate or coalesce into larger particles. These larger particles settle at a faster rate and have a higher settling velocity. Other important factors affecting the settling rate are flocculant dosage, slurry or pulp density, and particle size. Hence, in this paper, we studied the settling characteristics of clean coal fines by varying one of four factors, namely (1) flocculant dosage (acrylamide), (2) pH of the water, (3) bulk density, and (4) particle size of the clean coal fines, in each settling experiment, and drew important conclusions. The results of this paper will be useful not only for coal beneficiation plant design but also for reducing the cost of coke production facilities.
Keywords: bulk density, coal fines, flocculants, flocculation, settling velocity, pH
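For the fine, dilute end of such slurries, Stokes' law gives the first-order link between particle (or floc) size and settling velocity, and shows why flocculating fines into larger aggregates speeds dewatering. The densities below are assumed round numbers for coal in water, not the paper's measurements:

```python
def stokes_settling_velocity(d_m, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere by Stokes' law:
    v = g * d^2 * (rho_p - rho_f) / (18 * mu). Valid only at low Reynolds
    number, so it is a guide for fine particles, not measured floc behavior."""
    return g * d_m ** 2 * (rho_p - rho_f) / (18.0 * mu)

# a 100 µm floc settles 4x faster than a 50 µm particle of the same density
v_fine = stokes_settling_velocity(50e-6, 1400.0)   # ~0.55 mm/s
v_floc = stokes_settling_velocity(100e-6, 1400.0)
```

The quadratic dependence on diameter is the design argument for flocculation: doubling the effective particle size quadruples the settling velocity, all else equal.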
Procedia PDF Downloads 323
79 Before Decision: Career Motivation of Teacher Candidates
Authors: Pál Iván Szontagh
Abstract:
We suppose that today, the motivation for the career of a pedagogue (including its existential, organizational and infrastructural conditions) is different from the level of commitment to the profession of an educator (which can be experienced informally, or outside of the public education system). In our research, we made efforts to address the widest possible range of elementary teacher students and to interpret their responses using different filters. In the first phase of our study, we analyzed the career motivation and professional commitment of first-year kindergarten teacher students, and in the second phase, that of final-year kindergarten teacher candidates. In the third phase, we conducted surveys to explore students' motivation for the profession and the career path of a pedagogue in four countries of the Carpathian Basin (Hungary, Slovakia, Romania and Serbia). The surveys were conducted on 17 campuses of 11 Hungarian teacher-training colleges and universities. Finally, we extended the survey to practicing graduates preparing for their on-the-job rating examination. Based on our results, in all breakdowns, regardless of age group, training institute or, in part, geographical location and nationality, it is proven that the lack of social and financial esteem of the profession poses serious risks for the recruitment and retention of teachers. As a summary, we searched for significant differences between the professional and career motivations of the three respondent groups (kindergarten teacher students, elementary teacher students and practicing teachers), i.e. the motivation factors that change the most with education and/or with time spent on the job.
Keywords: career motivation, career socialization, professional motivation, teacher training
Procedia PDF Downloads 137
78 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances
Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim
Abstract:
This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, to support preventive protection against system failures and the estimation of complex system problems. The examined signal filtering techniques are used for field waveform noise removal and feature extraction. Using extraction and learning classification techniques, the efficiency of recognizing PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of eight selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges; the ranges, parameters, and weights are updated according to the field waveforms obtained. Currents undergo the same process as voltages to obtain the waveform features, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform due to the drawing of different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms show different patterns of variation and disturbance, and a modified technique based on symmetrical components in the time domain is proposed for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to the estimation and recognition learning modules for further studies.
Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering
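The symmetrical-component (Fortescue) transform referenced in the conclusion decomposes a three-phase set into zero-, positive-, and negative-sequence phasors; unbalance injected by a disturbance shows up in the zero and negative sequences. A minimal sketch (the mapping into the paper's time-domain detector is not reproduced here):

```python
import numpy as np

def symmetrical_components(va, vb, vc):
    """Fortescue transform of three phase phasors into (zero, positive,
    negative) sequence components, using the 120-degree rotation operator a."""
    a = np.exp(2j * np.pi / 3)
    A = np.array([[1, 1, 1],
                  [1, a, a**2],
                  [1, a**2, a]]) / 3.0
    return A @ np.array([va, vb, vc])

# a balanced positive-sequence set maps entirely onto the positive component
a = np.exp(2j * np.pi / 3)
v0, v1, v2 = symmetrical_components(1.0, a**2, a)
```

For a healthy balanced system, `v0` and `v2` are (near) zero; a sag or fault on one phase makes them nonzero, which is what makes the decomposition useful as a disturbance indicator.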
Procedia PDF Downloads 18677 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work describes a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm represents changes in mental state using the powers of the different frequency bands, in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on better presentation of the signal, which makes it a useful tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform based de-noising method. Frequency domain features are used for segmentation, exploiting the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation: one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second, successively, with different color codes, and the segment length can be selected according to the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego online data repository. The tool gives a better visualization of the signal in the form of segmented epochs of desired length, representing the power spectrum variation in the data. Because the algorithm takes data points with respect to the sampling frequency for each time frame, it can be extended to real-time visualization with a desired epoch length. Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
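The basic filtering and band-power steps described above can be sketched as follows; the filter orders, notch frequency, and exact band edges are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt, welch

def basic_filter(x, fs, band=(0.5, 45.0), notch=50.0):
    """Zero-phase Butterworth band-pass followed by a power-line notch."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    bn, an = iirnotch(notch, Q=30.0, fs=fs)
    return filtfilt(bn, an, y)

def band_powers(x, fs):
    """Power in the classical EEG bands from a Welch spectrum (rectangle rule)."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    df = f[1] - f[0]
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return {name: float(np.sum(pxx[(f >= lo) & (f < hi)]) * df)
            for name, (lo, hi) in bands.items()}
```

Comparing these band powers across successive sliding windows is what assigns each epoch its segment label and color code.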
Procedia PDF Downloads 39776 The Impact of Dust Storm Events on the Chemical and Toxicological Characteristics of Ambient Particulate Matter in Riyadh, Saudi Arabia
Authors: Abdulmalik Altuwayjiri, Milad Pirhadi, Mohammed Kalafy, Badr Alharbi, Constantinos Sioutas
Abstract:
In this study, we investigated the chemical and toxicological characteristics of PM₁₀ in the metropolitan area of Riyadh, Saudi Arabia. PM₁₀ samples were collected on quartz and Teflon filters during the cold (December 2019-April 2020) and warm (May 2020-August 2020) seasons, including dust and non-dust events. The PM₁₀ constituents were chemically analyzed for their metal, inorganic ion, and elemental and organic carbon (EC/OC) contents. Additionally, the PM₁₀ oxidative potential was measured by means of the dithiothreitol (DTT) assay. Our findings revealed that the oxidative potential of the collected ambient PM₁₀ samples was significantly higher than values measured in many urban areas worldwide. The oxidative potential was also higher during dust episodes compared to non-dust events, mainly due to higher concentrations of metals during these events. We performed Pearson correlation analysis, principal component analysis (PCA), and multi-linear regression (MLR) to identify the most significant sources contributing to the toxicity of PM₁₀. The MLR analyses indicated that the major pollution sources contributing to the oxidative potential of ambient PM₁₀ were soil and resuspended dust emissions (identified by Al, K, Fe, and Li) (31%), followed by secondary organic aerosol (SOA) formation (traced by SO₄²⁻ and NH₄⁺) (20%), industrial activities (identified by Se and La) (19%), and traffic emissions (characterized by EC, Zn, and Cu) (17%). Results from this study underscore the impact of transported dust emissions on the oxidative potential of ambient PM₁₀ in Riyadh and can be helpful in adopting appropriate public health policies regarding the detrimental outcomes of exposure to PM₁₀. Keywords: ambient PM₁₀, oxidative potential, source apportionment, Riyadh, dust episodes
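A toy version of the MLR apportionment step can be sketched as follows: synthetic tracer series stand in for the PCA factor scores, and an ordinary least-squares fit recovers the assumed source weights. The data and weights are synthetic illustrations; only the source names follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Hypothetical tracer/factor-score series for the four sources named in the
# abstract: soil & resuspended dust, SOA, industry, traffic (synthetic data).
sources = rng.random((4, n))
true_w = np.array([0.31, 0.20, 0.19, 0.17])      # assumed contribution weights
dtt = true_w @ sources + 0.01 * rng.standard_normal(n)  # DTT activity proxy

X = sources.T                                    # samples x sources
coef, *_ = np.linalg.lstsq(X, dtt, rcond=None)   # multi-linear regression
shares = coef / coef.sum()                       # fractional source contributions
```

In the actual study the regressors would be the PCA-derived factors and the response the measured DTT activity; the least-squares coefficients then give each source's share of the oxidative potential.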
Procedia PDF Downloads 17275 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification
Authors: Oumaima Khlifati, Khadija Baba
Abstract:
Pavement distress is the main factor responsible for the deterioration of road structure durability; it damages vehicles and reduces driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time consuming, labor intensive, and require domain expertise. Automatic distress detection is therefore needed to reduce the cost of manual inspection and to avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a deep convolutional neural network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The model is optimized by checking all feasible combinations and selecting the best performing one. For the optimized model, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score. Keywords: distress pavement, hyperparameters, automatic classification, deep learning
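The exhaustive search described above ("checking all feasible combinations and selecting the best performing one") amounts to a grid search; a minimal sketch follows, with a placeholder scoring function standing in for training and validating the DCNN. The grid values are assumptions, not the paper's actual search space.

```python
from itertools import product

# Illustrative search space (hypothetical values, not the paper's grid)
grid = {
    "n_filters": [16, 32, 64],
    "kernel_size": [3, 5],
    "batch_size": [16, 32],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def evaluate(cfg):
    """Stand-in for train-and-validate: build the DCNN from cfg, train it,
    and return validation accuracy. This toy score simply peaks at lr=1e-3."""
    return -abs(cfg["learning_rate"] - 1e-3)

def grid_search(grid, score):
    """Enumerate every combination and keep the best-scoring configuration."""
    best_cfg, best_score = None, float("-inf")
    for values in product(*grid.values()):
        cfg = dict(zip(grid, values))
        s = score(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

best_cfg, best_score = grid_search(grid, evaluate)
```

With a real `evaluate`, each call is one full training run, so the grid size (here 3 × 2 × 2 × 3 = 36 combinations) directly sets the compute budget.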
Procedia PDF Downloads 9374 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow
Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen
Abstract:
Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods that simulate real turbulence structures using empirical and experimental information. The monotonically integrated large eddy simulation (MILES) approach is attractive for modelling turbulence in high-Re flows; it is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source OpenFOAM CFD libraries. The role of the flux limiter schemes, namely Gamma, superBee, van-Albada and van-Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established direct numerical simulation (DNS) results for wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van-Leer and van-Albada limiters produce more diffusion and overpredict the velocity profiles, while the superBee scheme reproduces velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context. Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics
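For reference, the limiter functions compared above have simple closed forms; a sketch using the standard textbook definitions (not OpenFOAM's internal implementations) is:

```python
import numpy as np

def superbee(r):
    """Superbee limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
    return np.maximum(0.0, np.maximum(np.minimum(2 * r, 1.0), np.minimum(r, 2.0)))

def van_leer(r):
    """van Leer limiter: phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def van_albada(r):
    """van Albada limiter: phi(r) = (r^2 + r) / (r^2 + 1) for r > 0, else 0."""
    return np.where(r > 0, (r ** 2 + r) / (r ** 2 + 1.0), 0.0)
```

Here `r` is the ratio of successive solution gradients. Superbee hugs the upper boundary of the second-order TVD region, which is consistent with the abstract's finding that it is the least diffusive of the schemes tested.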
Procedia PDF Downloads 19073 Single Tuned Shunt Passive Filter Based Current Harmonic Elimination of Three Phase AC-DC Converters
Authors: Mansoor Soomro
Abstract:
The evolution of power electronic equipment has been pivotal in making industrial processes productive, efficient and safe. Despite its attractive features, such equipment constitutes a nonlinear load, which makes the supply vulnerable to power quality problems. Harmonics, one of these power quality problems, are components whose frequencies are integer multiples of the supply frequency; as a result, the supply voltage and current do not remain within their tolerable limits, and distorted current and voltage waveforms may appear. Under such low power quality, an electrical device or piece of equipment is likely to malfunction, fail prematurely, or be unable to operate under all applied conditions. The electrical power system is designed to deliver power reliably, namely to maximize power availability to customers. However, power quality events are largely untracked and, as a result, can take out a process as many as 20 to 30 times a year, costing utilities, customers and suppliers of load equipment millions of dollars. The ill effects of current harmonics reduce system efficiency, cause overheating of connected equipment, and increase electrical power and air conditioning costs. The rapid growth of power electronic converters has highlighted the damage that current harmonics cause in the electrical power system, so it has become essential to address their influence when planning any changes to electrical installations. In this paper, an effort has been made to mitigate the effects of the dominant 3rd order current harmonics. A passive filtering technique with a six pulse multiplication converter has been employed to mitigate them. Since power quality standards require the supply voltage and current to be maintained within prescribed limits, the obtained results are validated against the specifications of the IEEE 519-1992 and IEEE 519-2014 performance standards. Keywords: current harmonics, power quality, passive filters, power electronic converters
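A single-tuned shunt filter of the kind employed here is conventionally sized from the rated reactive power and the target harmonic; the sketch below follows that textbook procedure under simplified, balanced assumptions. The ratings (300 kvar, 11 kV, quality factor 30) are illustrative and not taken from the paper.

```python
import math

def single_tuned_filter(q_var, v_ll, f=50.0, h=3, quality=30.0):
    """Size a single-tuned shunt filter: C from the rated reactive power,
    L so the series L-C branch resonates at the h-th harmonic, and R from
    the chosen quality factor (simplified sizing sketch)."""
    w = 2 * math.pi * f
    c = q_var / (w * v_ll ** 2)        # capacitance from Q = V^2 * w * C
    l = 1.0 / ((h * w) ** 2 * c)       # series resonance at h * f
    x_n = math.sqrt(l / c)             # characteristic reactance at tuning
    r = x_n / quality                  # damping resistor
    return l, c, r

# Example: a 300 kvar filter bank on an 11 kV bus, tuned to the 3rd harmonic
l, c, r = single_tuned_filter(q_var=300e3, v_ll=11e3, h=3)
```

At the tuned harmonic the branch presents a low impedance path, diverting the 3rd harmonic current away from the supply so that the residual distortion can be checked against the IEEE 519 limits.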
Procedia PDF Downloads 30172 Spherical Organic Particle (SOP) Emissions from Fixed-Bed Residential Coal-Burning Devices
Authors: Tafadzwa Makonese, Harold Annegarn, Patricia Forbes
Abstract:
Residential coal combustion is one of the largest sources of carbonaceous aerosols in the Highveld region of South Africa, significantly affecting the local and regional climate. In this study, we investigated single coal-burning particles emitted when using different fire-ignition techniques (top-lit up-draft vs bottom-lit up-draft) and air ventilation rates (defined by the number of air holes above and below the fire grate) in selected informal braziers. Aerosol samples were collected on Nuclepore filters at the SeTAR Centre Laboratory, University of Johannesburg. Individual particles (~700) were investigated using a scanning electron microscope equipped with energy-dispersive X-ray spectroscopy (EDS). Two distinct forms of spherical organic particles (SOPs) were identified, one less oxidized than the other. The particles were further classified as "electronically" dark and bright, according to China et al. [2014]. EDS analysis showed that 70% of the dark SOPs had higher (~60%) relative oxygen content than the bright SOPs. We quantify the morphology of the spherical organic particles and classify them into four categories: ~50% are bare single particles; ~35% are aggregated and form diffusion accretion chains; 10% have inclusions; and 5% are deformed due to impaction on the filter material during sampling. We conclude that there are two distinct kinds of coal-burning spherical organic particles and that dark SOPs are less volatile than bright SOPs. We also show that these spherical organic particles are similar in nature and characteristics to the tar balls observed in biomass combustion, and that they have the potential to absorb sunlight, thereby affecting the earth's radiative budget and climate. This study provides insights into the mixing states, morphology, and possible formation mechanisms of these organic particles from residential coal combustion in informal stoves. Keywords: spherical organic particles, residential coal combustion, fixed-bed, aerosols, morphology, stoves
Procedia PDF Downloads 46771 Myosin-Driven Movement of Nanoparticles – An Approach to High-Speed Tracking
Authors: Sneha Kumari, Ravi Krishnan Elangovan
Abstract:
This abstract describes the development of a high-speed tracking method based on modification of motor components for nanoparticle attachment. Myosin motors are nano-sized protein machines powering the movement that defines life. These miniature molecular devices serve as engines that utilize the chemical energy stored in ATP to produce useful mechanical energy in the form of displacement events of a few nanometres, leading to the force generation required for cargo transport, cell division and cell locomotion, and translating to macroscopic movements such as running. With the advent of the in vitro motility assay (IVMA), detailed functional studies of the actomyosin system could be performed. The major challenge with the currently available IVMA for tracking actin filaments is a resolution limit of ±50 nm. To overcome this, we are developing a single-molecule IVMA in which a nanoparticle (gold nanoparticle, GNP, or quantum dot, QD) is attached along, or on the barbed end of, actin filaments using the CapZ protein, with visualization by a compact TIRF module called 'cTIRF'. The waveguide-based illumination of cTIRF offers a unique separation of excitation and collection optics, enabling imaging by scattering without emission filters. This technology is thus well equipped to perform tracking with a high temporal resolution of 2 ms and a signal-to-noise ratio improved roughly 100-fold compared to conventional TIRF. In addition, the nanoparticles attached to the actin filament act as a point source of light, offering ease of filament tracking compared to conventional manual tracking. Moreover, the attachment of cargo (QD/GNP) to the thin filament paves the way for various nanotechnological applications through transportation to different predetermined locations on a chip. Keywords: actin, cargo, IVMA, myosin motors and single-molecule system
Procedia PDF Downloads 8770 Improvements in Transient Testing in The Transient REActor Test (TREAT) with a Choice of Filter
Authors: Harish Aryal
Abstract:
The safe and reliable operation of nuclear reactors has always been one of the topmost priorities in the nuclear industry. Transient testing allows us to understand the time-dependent behavior of the neutron population in response to either a planned change in the reactor conditions or unplanned circumstances. These unforeseen conditions might occur due to sudden reactivity insertions, feedback, power excursions, instabilities, and accidents. To study such behavior, we need transient testing, which is analogous to car crash testing for estimating the durability and strength of a car design. In nuclear design, such transient testing can simulate a wide range of accidents due to sudden reactivity insertions and helps to study the feasibility and integrity of the fuel to be used in certain reactor types. This testing involves a high neutron flux environment and real-time imaging technology, with advanced instrumentation of appropriate accuracy and resolution, to study fuel slumping behavior. With the aid of transient testing and adequate imaging tools, it is possible to test the safety basis for reactor and fuel designs, which serves as a gateway to licensing advanced reactors in the future. To that end, it is crucial to fully understand advanced imaging techniques, both analytically and via simulations. This paper presents an innovative method of supporting real-time imaging of fuel pins and other structures during transient testing. The major fuel-motion detection device studied in this work is the hodoscope, which requires collimators. This paper provides 1) an MCNP model and simulation of a Transient Reactor Test (TREAT) core with the central fuel element replaced by a slotted fuel element that provides an open path between test samples and a hodoscope detector, and 2) a choice of filter that improves image resolution. Keywords: hodoscope, transient testing, collimators, MCNP, TREAT, hodogram, filters
Procedia PDF Downloads 7769 Detection of Cryptosporidium Oocysts by Acid-Fast Staining Method and PCR in Surface Water from Tehran, Iran
Authors: Mohamad Mohsen Homayouni, Niloofar Taghipour, Ahmad Reza Memar, Niloofar Khalaji, Hamed Kiani, Seyyed Javad Seyyed Tabaei
Abstract:
Background and Objective: Cryptosporidium is a coccidian protozoan parasite, and its oocysts in surface water are a global health problem. Due to the low number of parasites in water resources and the lack of laboratory culture, a rapid and sensitive method for detecting the organism in water resources is required. We applied modified acid-fast staining and PCR for the detection of Cryptosporidium spp. and analysed the genotypes in 55 samples collected from surface water. Methods: Over a period of nine months, 55 surface water samples were collected from five rivers in Tehran, Iran. The samples were filtered using cellulose acetate membrane filters. Initial identification of Cryptosporidium oocysts in the surface water samples was carried out by the acid-fast method. A nested PCR assay was then designed for specific amplification and genotype analysis. Results: The modified Ziehl-Neelsen method revealed 5-20 Cryptosporidium oocysts per 10 liters. Five of the 55 (9.09%) surface water samples were found positive for Cryptosporidium spp. by the Ziehl-Neelsen test and seven (12.7%) by nested PCR; the staining results were consistent with the PCR. Seven Cryptosporidium PCR products were successfully sequenced and five gp60 subtypes were detected. Analysis of the gp60 gene revealed that all of the positive isolates were Cryptosporidium parvum and belonged to subtype families IIa and IId. Conclusion: Our investigation showed that the collected water samples were contaminated by Cryptosporidium, posing a potentially significant health problem. This study provides the first report on the detection and genotyping of Cryptosporidium species from surface water samples in Iran, and its results confirm the low clinical incidence of this parasite in the community. Keywords: Cryptosporidium spp., membrane filtration, subtype, surface water, Iran
Procedia PDF Downloads 41668 Environmental Impact Assessment in Mining Regions with Remote Sensing
Authors: Carla Palencia-Aguilar
Abstract:
Calculations of the net carbon balance can be obtained by means of net biome productivity (NBP), net ecosystem productivity (NEP), and net primary production (NPP). The latter is an important component of the biosphere carbon cycle and is easily obtained from the MODIS MOD17A3HGF product; however, the results are only available yearly. To overcome this limited availability, bands 33 to 36 from MODIS MYD021KM (obtained on a daily basis) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites where surface mining takes place in the Colombian territory. Coal, gold, iron, and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance per location. The NPP time series from the satellite images were filtered using two Matlab filters: a first-order filter and a discrete transfer function filter. After filtering the NPP time series, comparing the graph results with the values from the satellite images, and running a linear regression, the results showed R² values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the EPA Greenhouse Gas Equivalencies Calculator was used. The comparison was established in two ways: one by the sum of all the data per point per year, and the other by the average of 46 weeks, finding the percentage that the value represented with respect to NPP. The former underestimated the total CO₂ emissions. The results also showed that coal and gold mining in the last 22 years had lower CO₂ emissions than limestone, with averages per year of 143 kton CO₂ eq for gold, 152 kton CO₂ eq for coal, and 287 kton CO₂ eq for iron. Limestone emissions varied from 206 to 441 kton CO₂ eq. The maximum emission values from unfiltered data correspond to 165 kton CO₂ eq for gold, 188 kton CO₂ eq for coal, 310 kton CO₂ eq for iron, and limestone varying from 231 to 490 kton CO₂ eq. If the most polluting limestone site improves its production technology, limestone could reach a maximum of 318 kton CO₂ eq emissions per year, very similar to iron. The importance of gathering these data is to establish benchmarks in order to attain 2050's zero emissions goal. Keywords: carbon dioxide, NPP, MODIS, mining
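The first-order filtering and R² comparison described above can be sketched generically as follows (a discrete first-order low-pass plus the coefficient of determination; the smoothing constant is an assumption, not the study's Matlab settings):

```python
import numpy as np

def first_order_filter(x, alpha=0.3):
    """Discrete first-order low-pass: y[k] = alpha*x[k] + (1 - alpha)*y[k-1]."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for k in range(1, len(x)):
        y[k] = alpha * x[k] + (1.0 - alpha) * y[k - 1]
    return y

def r_squared(y, y_hat):
    """Coefficient of determination between a series and its fitted/filtered version."""
    ss_res = float(np.sum((y - y_hat) ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return 1.0 - ss_res / ss_tot

# Synthetic seasonal NPP-like series: slow annual cycle plus fast noise
t = np.linspace(0.0, 2 * np.pi, 365)
noisy_npp = np.sin(t) + 0.2 * np.sin(40 * t)
smoothed = first_order_filter(noisy_npp)
fit_quality = r_squared(noisy_npp, smoothed)
```

The filter suppresses the fast component while tracking the seasonal trend, which is the role the Matlab filters play before the regression against the band 33-36 data.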
Procedia PDF Downloads 10467 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms
Authors: Farhat Imtiaz, Umar Farooq
Abstract:
In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat and round nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the ply-by-ply damage incurred. Plots of load-time, load-deflection, and energy-time history were drawn to approximate the inflicted damage. Unwanted data logged during the impact tests, due to restrictions of the testing and logging systems, were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as the eight- and sixteen-ply panels. However, relatively thick laminates, such as the twenty-four-ply laminates impacted by the flat nose, generated clipped data that can only be de-noised using oscillatory algorithms. A literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports applications of filtering and extrapolation of the clipped data utilizing a fast Fourier convolution algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on these consistent findings, applying modern data filtering and extrapolation algorithms to data logged by the existing machines efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for impact-induced damage approximation in similar cases. Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation
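A minimal illustration of fast Fourier convolution filtering of the kind applied to the logged data: the snippet smooths a signal by multiplying the spectra of the signal and a normalized kernel. The Hann kernel and its width are assumptions for illustration; the authors' actual de-noising and extrapolation of clipped records are more involved.

```python
import numpy as np

def fft_smooth(x, kernel_width=9):
    """Low-pass smoothing via FFT-based convolution with a normalized Hann kernel."""
    k = np.hanning(kernel_width)
    k /= k.sum()                                   # unit-gain kernel
    n = len(x) + kernel_width - 1                  # full linear-convolution length
    y = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)
    half = (kernel_width - 1) // 2
    return y[half:half + len(x)]                   # trim back to input length
```

Convolution in the frequency domain costs O(n log n) rather than O(n·m), which is what makes this practical for long impact-test records.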
Procedia PDF Downloads 13566 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter
Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar
Abstract:
Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended solid removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes, the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper, a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer injection was applied at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h⁻¹. The results were analysed using the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and variance σ² are the two moments of the RTD that were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer was more favourable at HLRs of 3.8 to 7.6 m h⁻¹ for limestone and at 3.8 m h⁻¹ for white dolomite. At these HLRs, the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than in the white dolomite: at 3.8 m h⁻¹, all the tracer took 8 minutes to leave the white dolomite, whereas the same amount of tracer took 10 minutes to leave the limestone. In conclusion, determining the optimal hydraulic loading rate, which achieves a better influent distribution over the filtration system, helps to assess the applicability of a material as filter media. Further work will examine the efficiency of the limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h⁻¹. Keywords: filter media, hydraulic loading rate, residence time distribution, tracer
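The two RTD moments reported above, MRT and variance σ², follow from the normalized exit-age distribution E(t); a sketch with a synthetic tracer curve (the Gaussian pulse is illustrative, not measured data):

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal integral of y(x) on a sampled grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def rtd_moments(t, c):
    """Mean residence time and variance from an outlet tracer curve C(t)."""
    e = c / _trapezoid(c, t)                      # exit-age distribution E(t)
    mrt = _trapezoid(t * e, t)                    # first moment of E(t)
    var = _trapezoid((t - mrt) ** 2 * e, t)       # second central moment
    return mrt, var

# Synthetic tracer pulse centred at 5 min with unit spread (illustrative)
t = np.linspace(0.0, 10.0, 1001)                  # minutes
c = np.exp(-0.5 * (t - 5.0) ** 2)                 # arbitrary concentration units
mrt, var = rtd_moments(t, c)
```

Applied to the measured RDD outflow curves, the same two moments quantify the longer residence in limestone versus white dolomite noted above.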
Procedia PDF Downloads 27765 Machine Learning for Exoplanetary Habitability Assessment
Authors: King Kumire, Amos Kubeka
Abstract:
The synergy of machine learning and advances in astronomical technology is giving rise to a new space age, marked by better habitability assessments. To initiate this discussion, it should be recorded, for definition purposes, that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The cosmological fate of this phrase has been unashamedly plagiarized from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge from our planet Earth to the moon. For this study, however, the scientific audience is invited to bridge toward the discovery of new habitable planets. It is imperative to state that cosmic probes of this magnitude can be utilized as the starting nodes of the astrobiological search for galactic life. This research can also assist by acting as a navigation system for future space telescope launches through the delimitation of target exoplanets. The findings and the associated platforms can be harnessed as building blocks for the modeling of climate change on planet Earth. The notion that the human genus may exhaust the resources of planet Earth, or that a bug of some sort may make the Earth uninhabitable for humans, explains the need to find an alternative planet to inhabit. The scientific community, through interdisciplinary discussions of the International Astronautical Federation, so far has the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refilling and carrying out other associated in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. It should be registered as well that this broad and voluminous catalog of exoplanets shall be narrowed along the way using machine learning filters. The gist of this topic revolves around the indirect economic rationale of establishing a habitability scoping platform. Keywords: machine-learning, habitability, exoplanets, supercomputing
Procedia PDF Downloads 90