Search results for: microwave processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3984

3504 Impact of Surface Roughness on Light Absorption

Authors: V. Gareyan, Zh. Gevorkian

Abstract:

We study the absorption of obliquely incident light in opaque media with rough surfaces. An analytical approach with modified boundary conditions that account for the surface roughness of metallic or dielectric films is discussed. Our approach reveals interference-linked terms that modify the dependence of absorption on different characteristics. We discuss the limits within which our approach remains valid, from the visible to the microwave region. The polarization and angular dependences of roughness-induced absorption are revealed. The existence of an incident angle, or a wavelength, at which the absorptance of a rough surface becomes equal to that of a flat surface is predicted. Based on this phenomenon, a method of determining the roughness correlation length is suggested.

Keywords: light, absorption, surface, roughness

Procedia PDF Downloads 36
3503 Efficient Modeling Technique for Microstrip Discontinuities

Authors: Nassim Ourabia, Malika Ourabia

Abstract:

A new and efficient method is presented for the analysis of arbitrarily shaped discontinuities. The technique derives closed-form expressions for the equivalent circuits used to model these discontinuities, making it straightforward to handle and characterize complicated structures such as T- and Y-junctions, truncated junctions, arbitrarily shaped junctions, cascaded junctions, and, more generally, planar multiport junctions. A further advantage of this method is that the edge-line concept for arbitrarily shaped junctions operates with real-parameter circuits. The validity of the method was confirmed by comparing our results for various discontinuities (bends, filters) with those from HFSS as well as from other published sources.

Keywords: CAD analysis, contour integral approach, microwave circuits, s-parameters

Procedia PDF Downloads 497
3502 The Impact of Varying the Detector and Modulation Types on Inter Satellite Link (ISL) Realizing the Allowable High Data Rate

Authors: Asmaa Zaki M., Ahmed Abd El Aziz, Heba A. Fayed, Moustafa H. Aly

Abstract:

ISLs are the most popular choice for deep-space communications because these links are attractive alternatives to present-day microwave links. This paper explores the allowable high data rate over this link for different orbits, which is affected by the choice of modulation scheme and detector type. The objective of this paper is to optimize and analyze the performance of the ISL in terms of Q-factor and Minimum Bit Error Rate (Min-BER) for different detector types and system parameters.
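The Q-factor and BER figures of merit mentioned here are linked by a standard Gaussian-noise approximation. As a quick illustration (this is the textbook relation, not the paper's specific link model), a minimal sketch in Python:

```python
import math

def ber_from_q(q: float) -> float:
    """Standard Gaussian-noise approximation: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q = 6 corresponds to a BER of roughly 1e-9, a common optical-link target.
print(ber_from_q(6))
```

A higher Q always maps to a lower BER, which is why the two metrics can be optimized interchangeably.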

Keywords: free space optics (FSO), field of view (FOV), inter satellite link (ISL), optical wireless communication (OWC)

Procedia PDF Downloads 379
3501 General Architecture for Automation of Machine Learning Practices

Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain

Abstract:

Data collection, data preparation, model training, model evaluation, and deployment are all stages in a typical machine learning workflow. Training data needs to be gathered and organised; this often entails collecting a sizable dataset and cleaning it to remove or correct inaccurate or missing information. Once acquired, the data must be pre-processed for use in the machine learning model. This often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, and reducing dimensionality. The pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it is assessed on a test dataset using metrics like accuracy, precision, and recall. Every time a new model is built, both data pre-processing and model training, two crucial stages in the machine learning (ML) workflow, must be carried out. Moreover, various machine learning algorithms can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from. For example, a different algorithm can be paired with every method of handling missing values (dropping records, replacing with the mean, etc.), every scaling technique, and every combination of selected features. As a result, these tasks are frequently repeated in different combinations in order to obtain the optimum outcome. This paper suggests a simple architecture for organizing this large combination set of pre-processing steps and algorithms into an automated workflow, which simplifies the task of exploring all possibilities.
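The combinatorial explosion described above can be sketched by enumerating a hypothetical operator pool with `itertools.product`; the option names below are illustrative assumptions, not the paper's actual configuration:

```python
from itertools import product

# Hypothetical operator pool: each pre-processing step has several options,
# and each complete pipeline is paired with every candidate algorithm.
missing_values = ["drop_rows", "mean_impute", "median_impute"]
scaling = ["standard", "min_max"]
algorithms = ["logistic_regression", "random_forest", "svm"]

# Every (missing-value handler, scaler, algorithm) triple is one candidate workflow.
pipelines = list(product(missing_values, scaling, algorithms))
print(len(pipelines))  # 3 * 2 * 3 = 18 candidate workflows
```

A scheduler component, as the paper suggests, would then iterate over this list, train each candidate, and keep the best-scoring one.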

Keywords: machine learning, automation, AUTOML, architecture, operator pool, configuration, scheduler

Procedia PDF Downloads 37
3500 Spatio-Temporal Dynamics of Snow Cover and Melt/Freeze Conditions in Indian Himalayas

Authors: Rajashree Bothale, Venkateswara Rao

Abstract:

The Indian Himalayas, also known as the Third Pole, cover an area of 0.9 million sq km, contain the largest reserve of ice and snow outside the poles, and affect the global climate and water availability in the perennial rivers. Variations in the extent of snow are indicative of climate change. Snow melt is both sensitive to climate change (warming) and an influencing factor on it. A study of the spatio-temporal dynamics of snow cover and melt/freeze conditions is carried out using space-based observations in the visible and microwave bands. An analysis period of 2003 to 2015 is selected to identify and map the changes and trends in snow cover using Indian Remote Sensing (IRS) Advanced Wide Field Sensor (AWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) data. For mapping of wet snow, microwave data is used, which is sensitive to the presence of liquid water in the snow. The present study uses Ku-band scatterometer data from the QuikSCAT and Oceansat satellites. Enhanced-resolution images at 2.25 km from the 13.6 GHz sensor are used to analyze the backscatter response to dry and wet snow for the period 2000-2013 using a threshold method. The study area is divided into three major river basins, namely the Brahmaputra, Ganges, and Indus, which also represent the diversification of the Himalayas into the Eastern, Central, and Western Himalayas. Topographic variations across the zones show that a majority of the study area lies in the 4000-5500 m elevation range, and the largest share of high-elevation areas (>5500 m) lies in the Western Himalayas. The effect of climate change can be seen in the extent of snow cover and also in the melt/freeze status in different parts of the Himalayas. The melt onset day shifts from east (11 March ± 11 days) to west (12 May ± 15 days), with large variation in the number of melt days. The Western Himalayas have a shorter melt duration (120 ± 15 days) in comparison to the Eastern Himalayas (150 ± 16 days), providing less time for melt. Eastern Himalayan glaciers are prone to enhanced melt due to the large melt duration. The extent of snow cover, coupled with the melt/freeze status indicating solar radiation, can be used as a precursor for monsoon prediction.
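The threshold method used for wet-snow detection can be illustrated on a toy backscatter time series: wet snow absorbs the Ku-band signal strongly, so backscatter drops below a fixed threshold. The threshold value and daily values below are assumptions for illustration, not the study's calibrated figures:

```python
# Toy threshold method on a Ku-band backscatter time series (dB), one value per day.
backscatter_db = [-8, -9, -8, -14, -15, -13, -16, -9, -8]
THRESHOLD_DB = -12.0  # assumed melt threshold, not the study's calibrated value

melt_flags = [s < THRESHOLD_DB for s in backscatter_db]
melt_onset_day = melt_flags.index(True)  # index of first day flagged as melt
melt_days = sum(melt_flags)              # total number of melt days
print(melt_onset_day, melt_days)
```

The same per-pixel logic, applied over the full 2000-2013 record, yields melt onset and duration maps.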

Keywords: Indian Himalaya, scatterometer, snow melt/freeze, AWiFS, cryosphere

Procedia PDF Downloads 241
3499 Neural Rendering Applied to Confocal Microscopy Images

Authors: Daniel Li

Abstract:

We present a novel application of neural rendering methods to confocal microscopy. Neural rendering and implicit neural representations have developed at a remarkable pace, and are prevalent in modern 3D computer vision literature. However, they have not yet been applied to optical microscopy, an important imaging field where 3D volume information may be heavily sought after. In this paper, we employ neural rendering on confocal microscopy focus stack data and share the results. We highlight the benefits and potential of adding neural rendering to the toolkit of microscopy image processing techniques.

Keywords: neural rendering, implicit neural representations, confocal microscopy, medical image processing

Procedia PDF Downloads 641
3498 Temperature Calculation for an Atmospheric Pressure Plasma Jet by Optical Emission Spectroscopy

Authors: H. Lee, Jr., L. Bo-ot, R. Tumlos, H. Ramos

Abstract:

The objective of the study is to calculate the excitation and vibrational temperatures of a 2.45 GHz microwave-induced atmospheric-pressure plasma jet. The plasma jet uses argon as the primary working gas, while nitrogen is used as a shroud gas to protect the quartz tube from the plasma discharge. Through Optical Emission Spectroscopy (OES), various emission spectra were acquired from the plasma discharge. Selected lines from the Ar I and N2 emissions were used for the Boltzmann plot technique. The Boltzmann plots yielded values for the excitation and vibrational temperatures, which were plotted against varying parameters such as the gas flow rates.
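The Boltzmann plot technique infers an excitation temperature from the slope of ln(Iλ/gA) versus upper-level energy: for a Boltzmann-distributed population the plot is a line with slope -1/(kT). A minimal sketch with synthetic line data (the wavelengths and gA products below are illustrative placeholders, not the measured Ar I line set):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_temperature(energies_ev, intensities, wavelengths_nm, g_times_a):
    """Least-squares slope of ln(I*lambda/(g*A)) vs upper-level energy; T = -1/(k*slope)."""
    y = [math.log(i * w / ga) for i, w, ga in zip(intensities, wavelengths_nm, g_times_a)]
    x = energies_ev
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return -1.0 / (K_B * slope)

# Synthetic check: generate line intensities for a known temperature and recover it.
T_true = 12000.0  # K
E = [13.0, 13.3, 13.5, 14.0, 14.5]        # upper-level energies (eV), illustrative
wl = [750.4, 751.5, 763.5, 772.4, 794.8]  # wavelengths (nm), illustrative
gA = [1.0, 2.0, 1.5, 0.8, 1.2]            # g*A products, arbitrary units
I = [ga / w * math.exp(-e / (K_B * T_true)) for e, w, ga in zip(E, wl, gA)]
print(round(boltzmann_temperature(E, I, wl, gA)))
```

With real spectra the points scatter around the fitted line, and the residuals indicate how well the plasma satisfies the Boltzmann assumption.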

Keywords: plasma jet, OES, Boltzmann plots, vibrational temperatures

Procedia PDF Downloads 694
3497 Vision Aided INS for Soft Landing

Authors: R. Sri Karthi Krishna, A. Saravana Kumar, Kesava Brahmaji, V. S. Vinoj

Abstract:

The lunar surface may contain rough and non-uniform terrain with dips and peaks. Soft landing is a method of landing the lander on the lunar surface without any damage to the vehicle. This project focuses on finding a safe landing site for the vehicle by developing a method for determining the lateral velocity of the lunar lander. This is done by processing real-time images obtained from an on-board vision sensor. The hazard-avoidance phase of the soft landing starts when the vehicle is about 200 m above the lunar surface. Here, the lander has a very low velocity of about 10 cm/s vertically and 5 m/s horizontally. On detection of a hazard, the lander is navigated by controlling the vertical and lateral velocities. In order to find an appropriate landing site and navigate accordingly, image processing is performed continuously. Images are taken until the landing site is determined and the lander safely lands on the lunar surface. By integrating this vision-based navigation with the INS, better accuracy for the soft landing of the lunar lander can be obtained.
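The lateral-velocity estimate from frame-to-frame image displacement can be sketched with a simple pinhole-camera model; all numbers below are illustrative assumptions, not mission parameters:

```python
# Toy pinhole-camera estimate of lander lateral velocity from the shift of a
# tracked surface feature between two consecutive frames.
def lateral_velocity(pixel_shift, focal_length_px, altitude_m, frame_dt_s):
    """Ground displacement = pixel_shift * altitude / focal_length (pinhole model)."""
    ground_shift_m = pixel_shift * altitude_m / focal_length_px
    return ground_shift_m / frame_dt_s

# A 10-pixel shift between frames, 1000-px focal length, 200 m altitude, 0.5 s apart:
v = lateral_velocity(10, 1000.0, 200.0, 0.5)
print(v)  # 4.0 m/s
```

In practice the pixel shift would come from feature tracking or optical flow, and the vision estimate would be fused with the INS output.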

Keywords: vision aided INS, image processing, lateral velocity estimation, materials engineering

Procedia PDF Downloads 442
3496 Design of a 3-dB Directional Coupler Using Symmetric Coupled-Lines

Authors: Cem Çindaş, Serkan Şimşek

Abstract:

In this paper, the study and design of a 3-dB 90° directional coupler operating in the S-band is presented. The coupler employs symmetrical multi-section coupled lines designed in stripline technology. The design is realized in the AWR Design Environment and CST Microwave Studio; using these two programs played a key role in attaining outcomes swiftly and precisely. The simulation results show that the coupler maintains amplitude consistency within ±0.3 dB, isolation and reflection losses better than 16 dB, and a phase difference between the two output ports of 88° ± 0.6° over the 1.7-4.35 GHz range. These simulation results indicate an improvement in fractional bandwidth (FBW) performance around the center frequency f0 = 3 GHz.

Keywords: coupled stripline, directional coupler, multi-section coupler, symmetrical coupler

Procedia PDF Downloads 51
3495 The Output Fallacy: An Investigation into Input, Noticing, and Learners’ Mechanisms

Authors: Samantha Rix

Abstract:

The purpose of this research paper is to investigate the cognitive processing of learners who receive input but produce little or no output, yet, when they do produce output, exhibit language proficiency similar to that of learners who produced output more regularly in the language classroom. Previous studies have investigated the benefits of output (with somewhat differing results); therefore, the presentation begins with an investigation of what may underlie gains in proficiency without output. A pilot study was designed and conducted to gain insight into the cognitive processing of low-output language learners, looking, for example, at the quantity and quality of noticing. This was carried out within the paradigm of classroom action research, observing and interviewing low-output language learners in an intensive English program at a small Midwestern university. The results of the pilot study indicated that autonomy in language learning, specifically the use of strategies such as self-monitoring, self-talk, and thinking 'out loud', was crucial in the development of language proficiency for academic-level performance. The presentation concludes with an examination of pedagogical implications for classroom use in order to aid students in their language development.

Keywords: cognitive processing, language learners, language proficiency, learning strategies

Procedia PDF Downloads 454
3494 Analyzing the Risk Based Approach in General Data Protection Regulation: Basic Challenges Connected with Adapting the Regulation

Authors: Natalia Kalinowska

Abstract:

The adoption of the General Data Protection Regulation (GDPR) finished four years of work by the European Commission in this area in the European Union. Considering the far-reaching changes that the GDPR will introduce, the European legislator envisaged a two-year transitional period: member states and companies have to prepare for the new regulation by 25 May 2018. The idea that constitutes a new attitude to data protection in the European Union is the risk-based approach. So far, as a result of the implementation of Directive 95/46/EC, many European countries (including Poland) have adopted very particular regulations specifying technical and organisational security measures; Polish implementing rules, for example, indicate even how long a password should be. Under the new approach, from May 2018 controllers and processors will be obliged to apply security measures adequate to the level of risk associated with specific data processing. Risk in the GDPR should be interpreted as the likelihood of a breach of the rights and freedoms of the data subject. According to Recital 76, the likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context, and purposes of the processing. The GDPR does not prescribe particular security measures; the recitals give only examples, such as anonymisation or encryption. It is the controller's decision what type of security measures to consider sufficient, and the controller will be responsible if these measures are not sufficient or if the identification of the risk level is incorrect. The regulation indicates several levels of risk. Recital 76 mentions risk and high risk, but some lawyers argue that there is one more category, low risk/no risk: data processing that is unlikely to result in a risk to the rights and freedoms of natural persons.
The GDPR also lists types of data processing for which a controller does not have to evaluate the level of risk because they are classified as 'high-risk' processing, e.g., processing of special categories of data on a large scale or processing using new technologies. The methodology includes an analysis of legal regulations, e.g., the GDPR and the Polish Act on the Protection of Personal Data, as well as ICO guidelines and articles concerning the risk-based approach in the GDPR. The main conclusion is that an appropriate risk assessment is the key to keeping data safe and avoiding financial penalties. On the one hand, this approach seems more equitable, not only for controllers and processors but also for data subjects; on the other hand, it increases controllers' uncertainty in the assessment, which could have a direct impact on incorrect data protection and potential responsibility for infringement of the regulation.

Keywords: general data protection regulation, personal data protection, privacy protection, risk based approach

Procedia PDF Downloads 237
3493 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO

Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky

Abstract:

The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Thus, data processing and preparation turn out to be a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000d AppLicability dEpended pRocessiNg mOdel. This python-based model analyzes and converts the XML S1000D-based files into an easier data format that can be used in machine learning techniques while preserving the different logic and relationships in files. The model parses the files in the given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both English and French languages were tested, and special characters were decoded. Updates on the technical manuals were taken into consideration as well. The model was tested on different versions of the S1000D, and the results demonstrated its ability to effectively handle the applicability, requirements, references, and relationships across all files and on different levels.
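The grouping-by-applicability idea can be sketched on a deliberately simplified XML fragment; real S1000D applicability markup is far richer than the single `applic` attribute assumed here:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical, simplified markup: each paragraph carries one "applic" attribute.
doc = ET.fromstring("""
<dmodule>
  <para applic="modelA">Step for model A</para>
  <para applic="modelB">Step for model B</para>
  <para applic="modelA">Another step for model A</para>
</dmodule>
""")

# Group extracted text by its applicability annotation, as SALERNO does at scale.
groups = defaultdict(list)
for para in doc.iter("para"):
    groups[para.get("applic")].append(para.text)

print({k: len(v) for k, v in groups.items()})
```

The resulting per-applicability groups could then be written to data frames or spreadsheets for downstream machine learning, as the paper describes.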

Keywords: aeronautics, big data, data processing, machine learning, S1000D

Procedia PDF Downloads 108
3492 Wasteless Solid-Phase Method for Conversion of Iron Ores Contaminated with Silicon and Phosphorus Compounds

Authors: A. V. Panko, E. V. Ablets, I. G. Kovzun, M. A. Ilyashov

Abstract:

Based on a generalized analysis of modern know-how in the processing, concentration, and purification of iron-ore raw materials (IORM), in particular the most widespread ferrioxide-silicate materials (FOSM) containing compounds of phosphorus and other elements as impurities, we note the special role of nanotechnological initiatives in improving such processes. We consider the role of nanoparticles in the carbonization of FOSM with subsequent direct reduction of the contained ferric oxides to the metal phase, as well as in alkali treatment and separation of powdered iron from phosphorus compounds. Using the obtained results, we developed a wasteless solid-phase method for processing, concentrating, and purifying IORM and FOSM from compounds of phosphorus, silicon, and other impurities that excels known methods of direct iron reduction from iron ores and metallurgical slimes.

Keywords: iron ores, solid-phase reduction, nanoparticles in reduction and purification of iron from silicon and phosphorus, wasteless method of ores processing

Procedia PDF Downloads 467
3491 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality

Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye

Abstract:

When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings, and it is obtained through matrix perturbation theory by exploiting the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, unlike in natural language processing, there is no notion of a “word”; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is done by studying the scaling behaviour of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Using the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in accurately computing embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
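The k-mer notion maps directly to a sliding window over the sequence; a minimal sketch:

```python
def kmers(sequence: str, k: int):
    """All overlapping substrings of length k, the genomic analogue of words."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

print(kmers("ACGTAC", 3))  # ['ACG', 'CGT', 'GTA', 'TAC']
```

A sequence of length n yields n - k + 1 overlapping k-mers, so both the token count and the vocabulary grow quickly with k, which is exactly the scaling behaviour the paper studies.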

Keywords: word embeddings, k-mer embedding, dimensionality reduction

Procedia PDF Downloads 119
3490 Cost Effective Real-Time Image Processing Based Optical Mark Reader

Authors: Amit Kumar, Himanshu Singal, Arnav Bhavsar

Abstract:

In this modern era of automation, most academic and competitive exams use Multiple Choice Questions (MCQs). The responses to these MCQ-based exams are recorded on Optical Mark Reader (OMR) sheets. Evaluation of OMR sheets requires separate specialized machines for scanning and marking, and the sheets used by these machines are special and cost more than a normal sheet. The existing process is uneconomical and depends on paper thickness, scanning quality, paper orientation, special hardware, and customized software. This study tackles the problem of evaluating OMR sheets without any special hardware, making the whole process economical. We propose an image-processing-based algorithm that can read and evaluate scanned OMR sheets with no special hardware required. It eliminates the use of special OMR sheets: responses recorded on a normal sheet are sufficient for evaluation. The proposed system handles variations in color, brightness, and rotation, as well as small imperfections in the OMR sheet images.
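The bubble-evaluation step can be sketched with plain binary thresholding: a bubble counts as marked when enough of its pixels fall below a darkness cutoff. The region data and both thresholds below are illustrative assumptions, not values from the paper:

```python
# Toy bubble evaluation on grayscale pixel regions (values 0-255).
def is_marked(region, dark_cutoff=128, fill_threshold=0.5):
    """Marked when the fraction of dark pixels exceeds the fill threshold."""
    pixels = [p for row in region for p in row]
    dark = sum(1 for p in pixels if p < dark_cutoff)
    return dark / len(pixels) > fill_threshold

filled = [[20, 30, 25], [15, 40, 22], [28, 33, 19]]          # penciled-in bubble
empty = [[240, 250, 245], [238, 40, 242], [249, 251, 244]]   # stray mark only
print(is_marked(filled), is_marked(empty))  # True False
```

A full pipeline would first locate the bubbles (e.g., with a Hough circle transform, as the keywords suggest) and correct for rotation before applying this per-bubble decision.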

Keywords: OMR, image processing, Hough circle transform, interpolation, detection, binary thresholding

Procedia PDF Downloads 150
3489 Mixotrophic Growth of Chlorella sp. on Raw Food Processing Industrial Wastewater: Effect of COD Tolerance

Authors: Suvidha Gupta, R. A. Pandey, Sanjay Pawar

Abstract:

The effluents from various food processing industries have high BOD, COD, suspended solids, nitrate, and phosphate. Mixotrophic growth of microalgae using food processing industrial wastewater as an organic carbon source has emerged as an effective and less energy-intensive means of nutrient removal and COD reduction. The present study details the treatment of non-sterilized, unfiltered food processing industrial wastewater by microalgae for nutrient removal, and determines the COD tolerance by testing different dilutions of the wastewater. In addition, the effect of different inoculum percentages of microalgae on the nutrient removal efficiency at a given dilution has been studied. To examine the effects of dilution and COD tolerance, wastewater with an initial COD of 5000 mg/L (±5), nitrate of 28 mg/L (±10), and phosphate of 24 mg/L (±10) was diluted to CODs of 3000 mg/L and 1000 mg/L. The experiments were carried out in 1 L conical flasks with intermittent aeration and different inoculum percentages, i.e., 10%, 20%, and 30%, of Chlorella sp. isolated from an area near NEERI, Nagpur. The experiments were conducted for 6 days with a 12:12 light-dark period, and parameters such as COD, TOC, NO3-N, PO4-P, and total solids were determined daily. Results revealed that, for 10% and 20% inoculum, over 90% COD and TOC reduction was obtained with wastewater containing 3000 mg/L COD, whereas over 80% COD and TOC reduction was obtained with wastewater containing 1000 mg/L COD. Moreover, the microalgae were found to tolerate wastewater containing 5000 mg/L COD, with over 60% and 80% reductions in COD and TOC, respectively. Similar results were obtained with 10% and 20% inoculum at all COD dilutions, whereas for 30% inoculum over 60% COD and 70% TOC reduction was obtained. In the case of nutrient removal, over 70% nitrate removal and 45% phosphate removal was obtained with 20% inoculum at all dilutions.
The obtained results indicate that microalgae-assisted nutrient removal gives maximum COD and TOC reduction with 3000 mg/L COD and 20% inoculum. Hence, microalgae-assisted wastewater treatment is not only effective for nutrient removal but can also tolerate high COD, up to 5000 mg/L, and high solid content.

Keywords: Chlorella sp., chemical oxygen demand, food processing industrial wastewater, mixotrophic growth

Procedia PDF Downloads 313
3488 Design of a Filter and Transitions to Substrate Integrated Waveguide at Ka-Band

Authors: Damou Mehdi, Nouri Keltouma, Fahem Mohammed

Abstract:

In this paper, the concept of substrate integrated waveguide (SIW) technology is used to design a filter for 30 GHz communication systems. The SIW is created in an RT/Duroid 5880 substrate with relative permittivity εr = 2.2 and loss tangent tan δ = 0.0009. Four vias are placed along the center of the filter. The SIW structures are modeled and optimized in HFSS (High Frequency Structure Simulator), and a transition is designed for a Ka-band transceiver module with a 28.5 GHz center frequency; the results are then verified using another simulator, CST Microwave Studio (Computer Simulation Technology). The return losses are less than -18 dB and -13 dB, respectively, and the insertion losses are -1.2 dB and -1.4 dB, respectively.

Keywords: transition, microstrip, substrate integrated waveguide, filter, via

Procedia PDF Downloads 636
3487 Tool Condition Monitoring of Ceramic Inserted Tools in High Speed Machining through Image Processing

Authors: Javier A. Dominguez Caballero, Graeme A. Manson, Matthew B. Marshall

Abstract:

Cutting tools with ceramic inserts are often used in machining many types of superalloy, mainly due to their high strength and thermal resistance. Nevertheless, during the cutting process, the plastic flow wear generated in these inserts enhances and propagates cracks due to high temperature and high mechanical stress, leading to highly variable failure of the cutting tool. This article explores the relationship between the continuous wear that ceramic SiAlON inserts (solid solutions based on the Si3N4 structure) experience during high-speed machining and the evolution of the sparks created during the same process. The sparks were analysed from pictures of the cutting process recorded with an SLR camera. Features relating to the intensity and area of the cutting sparks were extracted from the individual pictures using image processing techniques and then related to the ceramic insert's crater wear area.

Keywords: ceramic cutting tools, high speed machining, image processing, tool condition monitoring, tool wear

Procedia PDF Downloads 281
3486 High-Temperature Behavior of Boiler Steel by Friction Stir Processing

Authors: Supreet Singh, Manpreet Kaur, Manoj Kumar

Abstract:

High-temperature corrosion is a serious material degradation mechanism experienced in thermal power plants and other energy generation sectors. Metallic materials such as ferritic steels offer easy fabrication and machinability at low cost, but a serious drawback is the deterioration of their properties arising from interaction with the environment: these materials do not endure high temperatures for extended periods because of their poor corrosion resistance. Friction Stir Processing (FSP) has emerged as a potent means of surface modification and microstructure control in the thermo-mechanically affected zones of various metal alloys. In the current research work, FSP was performed on boiler tube of SA 210 Grade A1 material, which is regularly used in thermal power plants. The aim was to strengthen SA 210 Grade A1 boiler steel through microstructural refinement by FSP and to analyze its effect on high-temperature corrosion behavior. The high-temperature corrosion performance of the unprocessed and FSPed specimens was evaluated in the laboratory in a molten salt environment of Na₂SO₄-82%Fe₂(SO₄)₃. The unprocessed and FSPed low-carbon steel Grade A1 specimens were evaluated in terms of microstructure, corrosion resistance, and mechanical properties such as hardness and tensile strength. In-depth characterization was done by EBSD, SEM/EDS, and X-ray mapping analyses with the aim of proposing the mechanism behind the high-temperature corrosion behavior of the FSPed steel.

Keywords: boiler steel, characterization, corrosion, EBSD/SEM/EDS/XRD, friction stir processing

Procedia PDF Downloads 224
3485 Reduction of Residual Stress by Variothermal Processing and Validation via Birefringence Measurement Technique on Injection Molded Polycarbonate Samples

Authors: Christoph Lohr, Hanna Wund, Peter Elsner, Kay André Weidenmann

Abstract:

Injection molding is one of the most commonly used techniques in the industrial polymer processing. In the conventional process of injection molding, the liquid polymer is injected into the cavity of the mold, where the polymer directly starts hardening at the cooled walls. To compensate the shrinkage, which is caused predominantly by the immediate cooling, holding pressure is applied. Through that whole process, residual stresses are produced by the temperature difference of the polymer melt and the injection mold and the relocation of the polymer chains, which were oriented by the high process pressures and injection speeds. These residual stresses often weaken or change the structural behavior of the parts or lead to deformation of components. One solution to reduce the residual stresses is the use of variothermal processing. Hereby the mold is heated – i.e. near/over the glass transition temperature of the polymer – the polymer is injected and before opening the mold and ejecting the part the mold is cooled. For the next cycle, the mold gets heated again and the procedure repeats. The rapid heating and cooling of the mold are realized indirectly by convection of heated and cooled liquid (here: water) which is pumped through fluid channels underneath the mold surface. In this paper, the influences of variothermal processing on the residual stresses are analyzed with samples in a larger scale (500 mm x 250 mm x 4 mm). In addition, the influence on functional elements, such as abrupt changes in wall thickness, bosses, and ribs, on the residual stress is examined. Therefore the polycarbonate samples are produced by variothermal and isothermal processing. The melt is injected into a heated mold, which has in our case a temperature varying between 70 °C and 160 °C. After the filling of the cavity, the closed mold is cooled down varying from 70 °C to 100 °C. The pressure and temperature inside the mold are monitored and evaluated with cavity sensors. 
The residual stresses of the produced samples are visualized by birefringence, which exploits the effect of stress on the refractive index of the polymer. The colorful spectrum can be uncovered by placing the sample between a polarized light source and a second polarization filter. To show the effect of the processing on the reduction of residual stress, the birefringence images of the isothermally and variothermally produced samples are compared and evaluated. In this comparison, the variothermally produced samples show fewer maxima of each color spectrum than the isothermally produced samples, which indicates that the residual stress of the variothermally produced samples is lower.

Keywords: birefringence, injection molding, polycarbonate, residual stress, variothermal processing

Procedia PDF Downloads 265
3484 On the Dwindling Supply of the Observable Cosmic Microwave Background Radiation

Authors: Jia-Chao Wang

Abstract:

The cosmic microwave background radiation (CMB) freed during the recombination era can be considered a photon source of small duration: a one-time event that happened everywhere in the universe simultaneously. If space is divided into concentric shells centered at an observer’s location, one can imagine that the CMB photons originating from the nearby shells would reach and pass the observer first, and those in shells farther away would follow as time goes forward. In the Big Bang model, space expands rapidly in a time-dependent manner as described by the scale factor. This expansion results in an event horizon coincident with one of the shells, and its radius can be calculated using cosmological calculators available online. Using Planck 2015 results, its value during the recombination era at cosmological time t = 0.379 million years (My) is calculated to be Revent = 56.95 million light-years (Mly). The event horizon sets a boundary beyond which the freed CMB photons will never reach the observer. The photons within the event horizon also exhibit a peculiar behavior. Calculated results show that the CMB observed today was freed in a shell located 41.8 Mly away (inside the boundary set by Revent) at t = 0.379 My. These photons traveled 13.8 billion years (Gy) to reach here. Similarly, the CMB reaching the observer at t = 1, 5, 10, 20, 40, 60, 80, 100 and 120 Gy is calculated to originate at shells of R = 16.98, 29.96, 37.79, 46.47, 53.66, 55.91, 56.62, 56.85 and 56.92 Mly, respectively. The results show that as time goes by, the R value approaches Revent = 56.95 Mly but never exceeds it, consistent with the earlier statement that beyond Revent the freed CMB photons will never reach the observer. The difference Revent − R can be used as a measure of the remaining observable CMB photons. Its value becomes smaller and smaller as R approaches Revent, indicating a dwindling supply of the observable CMB radiation.
In this paper, detailed dwindling effects near the event horizon are analyzed with the help of online cosmological calculators based on the lambda cold dark matter (ΛCDM) model. It has been demonstrated in the literature that if the CMB is a blackbody at recombination (about 3000 K), it will remain so over time under cosmological redshift and homogeneous expansion of space, but with the temperature lowered (2.725 K now). The present result suggests that the observable CMB photon density, besides changing with space expansion, can also be affected by the dwindling supply associated with the event horizon. This raises the question of whether the blackbody spectrum of the CMB at recombination can remain so over time. Being able to explain the blackbody nature of the observed CMB is an important part of the success of the Big Bang model. The present results cast some doubt on that and suggest that the model may have an additional challenge to deal with.
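The quoted Revent can be reproduced numerically instead of via an online calculator. A minimal sketch, assuming flat ΛCDM with approximate Planck-2015 parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.309, Ωr ≈ 9e-5 are assumptions here): the proper event-horizon distance at scale factor a_obs is R = a_obs ∫ c da / (a² H(a)) taken from a_obs to infinity.

```python
import numpy as np

# Flat LambdaCDM event horizon at recombination (parameter values are
# assumed approximations of Planck 2015, not taken from the paper).
H0_km_s_Mpc = 67.7
Om, Or = 0.309, 9.0e-5
OL = 1.0 - Om - Or                                    # flatness
c_over_H0_Mly = (299792.458 / H0_km_s_Mpc) * 3.2616   # Hubble distance [Mly]

def E(a):
    """Dimensionless Hubble rate H(a)/H0 for flat LambdaCDM."""
    return np.sqrt(Or / a**4 + Om / a**3 + OL)

def event_horizon_proper(a_obs, a_max=1e4, n=200_000):
    """Proper distance [Mly] to the event horizon at scale factor a_obs:
    R = a_obs * integral_{a_obs}^{inf} c da / (a^2 H(a))."""
    a = np.geomspace(a_obs, a_max, n)     # a_max truncates a negligible tail
    f = 1.0 / (a**2 * E(a))
    comoving = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a)) * c_over_H0_Mly
    return a_obs * comoving

a_rec = 1.0 / 1091.0                      # recombination, z ~ 1090
print(event_horizon_proper(a_rec))        # ~57 Mly, close to the quoted 56.95
```

The result lands within a few percent of the paper's 56.95 Mly; the residual difference reflects the assumed parameter values and the finite integration cutoff.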

Keywords: blackbody of CMB, CMB radiation, dwindling supply of CMB, event horizon

Procedia PDF Downloads 106
3483 Image Processing of Scanning Electron Microscope Micrograph of Ferrite and Pearlite Steel for Recognition of Micro-Constituents

Authors: Subir Gupta, Subhas Ganguly

Abstract:

In this paper, we demonstrate a new area of application of image processing to metallurgical images, creating more opportunities for structure-property-correlation-based approaches to alloy design. The present exercise focuses on the development of image processing tools suitable for phase segmentation, grain boundary detection and recognition of micro-constituents in SEM micrographs of ferrite and pearlite steels. A comprehensive set of micrographs has been developed experimentally, encompassing the variation of ferrite and pearlite volume fractions and taking images at different magnifications (500X, 1000X, 1500X, 2000X, 3000X and 5000X) under a scanning electron microscope. The variation in the volume fraction has been achieved using four different plain carbon steels containing 0.1, 0.22, 0.35 and 0.48 wt% C, heat treated under annealing and normalizing treatments. The obtained pool of micrographs is arbitrarily divided into two parts to develop training and testing sets of micrographs. The statistical recognition features for ferrite and pearlite constituents have been developed by learning from the training set of micrographs. The obtained features for microstructure pattern recognition are applied to the test set of micrographs. The analysis of the results shows that the developed strategy can successfully detect the micro-constituents across the wide range of magnifications and variations of volume fractions of the constituents in the structure with an accuracy of about +/- 5%.
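Two of the steps named above, phase segmentation and constituent detection, can be sketched with standard image-processing building blocks (an illustration, not the authors' pipeline): Otsu thresholding separates dark pearlite from the bright ferrite matrix, and connected-component labeling counts the constituent regions. The toy micrograph below is synthetic.

```python
import numpy as np
from collections import deque

def otsu_threshold(img):
    """Return the intensity threshold maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability
    mu = np.cumsum(p * np.arange(256))       # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def label_regions(mask):
    """4-connected component labeling of a boolean mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    nxt = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        nxt += 1
        q = deque([(i, j)])
        labels[i, j] = nxt
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                v, u = y + dy, x + dx
                if (0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]
                        and mask[v, u] and not labels[v, u]):
                    labels[v, u] = nxt
                    q.append((v, u))
    return labels

# Toy "micrograph": two dark pearlite islands (~40) on a bright ferrite matrix (~200)
img = np.full((32, 32), 200, dtype=np.uint8)
img[4:10, 4:10] = 40
img[20:28, 18:26] = 40
t = otsu_threshold(img)
regions = label_regions(img <= t)
print(regions.max())   # 2 dark constituents found
```

A real pipeline would add noise filtering and magnification-aware feature extraction on top of these primitives.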

Keywords: SEM micrograph, metallurgical image processing, ferrite pearlite steel, microstructure

Procedia PDF Downloads 185
3482 Development of Fake News Model Using Machine Learning through Natural Language Processing

Authors: Sajjad Ahmed, Knut Hinkelmann, Flavio Corradini

Abstract:

Fake news detection research is still at an early stage, as this is a relatively new phenomenon of societal interest. Machine learning helps to solve complex problems and to build AI systems nowadays, especially in cases where the knowledge is tacit or unknown. For the identification of fake news, we applied three machine learning classifiers: Passive Aggressive, Naïve Bayes, and Support Vector Machine. Simple classification alone is not sufficient for fake news detection because generic classification methods are not specialized for fake news. With the integration of machine learning and text-based processing, we can detect fake news and build classifiers that can classify the news data. Text classification mainly focuses on extracting various features of text and then incorporating those features into classification. The big challenge in this area is the lack of an efficient way to differentiate between fake and non-fake news due to the unavailability of corpora. We applied three different machine learning classifiers to two publicly available datasets. Experimental analysis based on the existing datasets indicates very encouraging and improved performance.
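One of the three classifiers named above, Naïve Bayes, can be sketched from scratch on a bag-of-words representation (a minimal illustration of the text-classification step; the paper worked with real corpora and standard library implementations, and the toy headlines below are invented).

```python
import math
from collections import Counter

# Minimal multinomial Naive Bayes over bag-of-words features, with Laplace
# smoothing. Illustrative only; training data below is made up.
class NaiveBayesText:
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, lab in zip(docs, labels):
            self.counts[lab].update(doc.lower().split())
        self.vocab = set(w for c in self.counts.values() for w in c)
        return self

    def predict(self, doc):
        scores = {}
        for c in self.classes:
            total = sum(self.counts[c].values())
            s = self.prior[c]
            for w in doc.lower().split():
                # add-one smoothing over the shared vocabulary
                s += math.log((self.counts[c][w] + 1) / (total + len(self.vocab)))
            scores[c] = s
        return max(scores, key=scores.get)

train = ["shocking miracle cure doctors hate",
         "celebrity secret exposed shocking",
         "senate passes budget bill today",
         "official report confirms economic growth"]
y = ["fake", "fake", "real", "real"]
clf = NaiveBayesText().fit(train, y)
print(clf.predict("shocking secret cure"))         # fake
print(clf.predict("senate report confirms bill"))  # real
```

The Passive Aggressive and SVM classifiers plug into the same feature pipeline; only the decision rule changes.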

Keywords: fake news detection, natural language processing, machine learning, classification techniques

Procedia PDF Downloads 141
3481 Induction Machine Bearing Failure Detection Using Advanced Signal Processing Methods

Authors: Abdelghani Chahmi

Abstract:

This article examines the detection and localization of faults in electrical systems, particularly those using asynchronous machines. First, the failure process is characterized and relevant symptoms are defined; based on those processes and symptoms, a model of the malfunctions is obtained. Second, the development of the diagnosis of the machine is shown. As studies of malfunctions in electrical systems can rely on only a small amount of experimental data, it has been essential to provide ourselves with simulation tools that allow us to characterize the faulty behavior. Fault detection uses signal processing techniques in known operating phases.
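The signal-processing step can be illustrated with a spectral fault-signature check (a generic sketch, not the author's diagnostic chain): a bearing defect adds a periodic component at a characteristic fault frequency to the vibration or current signal, which shows up as a spectral line. The sampling rate and fault frequency below are assumed example values.

```python
import numpy as np

# Synthetic vibration signal: mains component plus an assumed outer-race
# bearing fault signature, buried in noise. All numbers are illustrative.
fs = 2000.0                      # sampling rate [Hz] (assumption)
t = np.arange(0, 2.0, 1.0 / fs)
f_supply, f_fault = 50.0, 107.3  # mains line and assumed fault frequency [Hz]

signal = (np.sin(2 * np.pi * f_supply * t)            # fundamental
          + 0.4 * np.sin(2 * np.pi * f_fault * t)     # fault signature
          + 0.1 * np.random.default_rng(0).standard_normal(t.size))

spec = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

# Search for energy near the expected fault frequency, away from the mains line
band = (freqs > 90) & (freqs < 120)
peak_freq = freqs[band][np.argmax(spec[band])]
print(round(peak_freq, 1))   # ~107, i.e. the fault component is present
```

In practice an envelope (demodulation) step usually precedes the FFT, since real fault impulses modulate high-frequency resonances rather than appearing as a clean sinusoid.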

Keywords: induction motor, modeling, bearing damage, airgap eccentricity, torque variation

Procedia PDF Downloads 119
3480 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs where each vertex represents measured data and each edge represents a relationship (connectivity or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years, there has been increasing interest in the investigation of one of the major mathematical tools for signal and image analysis: variational methods based on Partial Differential Equations (PDEs) on graphs. The normalized p-Laplacian operator has recently been introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator and the traditional Laplacian operators, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of the p-harmonious functions introduced in the literature as discrete approximations of both the infinity-Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
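The p-harmonious construction mentioned above can be sketched concretely (a simplified illustration, not the paper's operator): interior vertex values satisfy u(v) = (α/2)(max_w u(w) + min_w u(w)) + (1 − α) · mean_w u(w) over neighbors w, with Dirichlet boundary values, where the weight α depends on p and is set ad hoc here. A fixed-point iteration solves the Dirichlet problem on a small graph.

```python
import numpy as np

# p-harmonious extension on a graph by Gauss-Seidel-style fixed-point
# iteration. alpha is a placeholder weight (in the literature it is a
# specific function of p); 0.5 is an illustrative assumption.
def p_harmonious(adj, boundary, alpha=0.5, iters=2000):
    n = len(adj)
    u = np.zeros(n)
    for v, val in boundary.items():
        u[v] = val
    for _ in range(iters):
        for v in range(n):
            if v in boundary:
                continue
            nb = u[adj[v]]                     # neighbor values
            u[v] = 0.5 * alpha * (nb.max() + nb.min()) + (1 - alpha) * nb.mean()
    return u

# Path graph 0-1-2-3-4 with Dirichlet data u(0)=0, u(4)=1.
adj = [np.array([1]), np.array([0, 2]), np.array([1, 3]),
       np.array([2, 4]), np.array([3])]
u = p_harmonious(adj, {0: 0.0, 4: 1.0})
print(np.round(u, 3))   # monotone interpolation between the boundary values
```

On a path with two neighbors per interior vertex the update reduces to neighbor averaging for any α, so the iteration converges to the linear interpolant; on richer graphs the min/max terms encode the tug-of-war moves and the mean term encodes the noise.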

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 493
3479 Antibacterial and Anti-Biofilm Activity of Vaccinium meridionale S. Pomace Extract Against Staphylococcus aureus, Escherichia coli and Salmonella Enterica

Authors: Carlos Y. Soto, Camila A. Lota, G. Astrid Garzón

Abstract:

Bacterial biofilms cause an ongoing problem for food safety. They are formed when microorganisms aggregate into a community that attaches to solid surfaces. Biofilms increase the resistance of pathogens to cleaning, disinfection and antibacterial products. This resistance gives rise to problems for human health, industry, and agriculture. At present, plant extracts rich in polyphenolics are being investigated as natural alternatives to degrade bacterial biofilms. The pomace of the tropical berry Vaccinium meridionale S. contains high amounts of phenolic compounds. Therefore, in the current study, the antimicrobial and anti-biofilm effects of extracts from the pomace of Vaccinium meridionale S. were tested on three foodborne pathogens: enterohaemorrhagic Escherichia coli O157:H7 (ATCC® 700728™), Staphylococcus aureus subsp. aureus (ATCC® 6538™), and Salmonella enterica serovar Enteritidis (ATCC® 13076™). Microwave-assisted extraction was used to extract polyphenols with aqueous methanol (80% v/v) at a solid-to-solvent ratio of 1:10 (w/v) for 20 min. Magnetic stirring was set at 400 rpm, and the microwave power was adjusted to 400 W. The antimicrobial effect of the extract was assessed by determining the half-maximal inhibitory concentration (IC50) against the three food-poisoning pathogens at concentrations ranging from 50 to 2,850 μg gallic acid equivalents (GAE)/mL of the extract. Biofilm inhibition was assessed using a crystal violet assay over the same concentration range. Three replications of the experiments were carried out, and all analyses were run in triplicate. IC50 values were determined using the GraphPad Prism 8® program. Significant differences (P < 0.05) among means were identified using one-factor analysis of variance (ANOVA) and the post-hoc least significant difference (LSD) test using the Statgraphics Plus program, version 2.1. There was a significant difference among the mean IC50 values for the tested bacteria. The IC50 for S. aureus was 48 ± 9 μg GAE/mL, followed by 123 ± 49 μg GAE/mL for Salmonella and 376 ± 32 μg GAE/mL for E. coli. The percent inhibition of the extract on biofilm formation was significantly higher for S. aureus (85.8 ± 0.3), followed by E. coli (74.5 ± 1.0) and Salmonella (53.6 ± 9.7). These findings suggest that polyphenolic extracts obtained from the pomace of V. meridionale S. might be used as natural antimicrobial and anti-biofilm agents, effective against S. aureus, E. coli and Salmonella enterica.
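The IC50 determination step can be sketched numerically (the study fit dose-response curves in GraphPad Prism; the sketch below instead interpolates the 50% crossing on a log-concentration axis, and the dose-response numbers are invented for illustration, not the study's data).

```python
import numpy as np

# Illustrative dose-response data (made up, NOT from the study).
conc = np.array([10, 25, 50, 100, 250, 500])       # ug GAE/mL
inhibition = np.array([8, 22, 51, 70, 88, 95])     # percent inhibition

def ic50_interp(conc, inhib):
    """Interpolate the 50% crossing on a log10-concentration axis."""
    i = np.searchsorted(inhib, 50.0)               # first point at/above 50%
    x0, x1 = np.log10(conc[i - 1]), np.log10(conc[i])
    y0, y1 = inhib[i - 1], inhib[i]
    return 10 ** (x0 + (50.0 - y0) * (x1 - x0) / (y1 - y0))

print(round(ic50_interp(conc, inhibition), 1))     # ~49 ug GAE/mL
```

A four-parameter logistic fit, as Prism performs, is more robust when the data are noisy or do not bracket 50% cleanly; log-axis interpolation is a quick first estimate.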

Keywords: antibiofilm, antimicrobial, E. coli, S. aureus, salmonella, IC50, pomace, V. meridionale

Procedia PDF Downloads 47
3478 General Purpose Graphic Processing Units Based Real Time Video Tracking System

Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai

Abstract:

Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about background and foreground. However, pixel-level processing of local regions consumes a large amount of computational time and memory space with traditional approaches. In our approach, we have explored the concurrent computational ability of General Purpose Graphic Processing Units (GPGPU) to address this problem. A Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The weights of the kernel are influenced by local regions and are updated by inter-frame variations of these corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speedup of 10X compared to the sequential approach.
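The per-pixel background-modeling logic can be sketched with a single-Gaussian simplification of the GMM (a CPU illustration of the update rule only; the paper uses a full mixture with locally weighted kernels on the GPU, and all numbers below are assumptions): each pixel keeps a running mean and variance, a pixel far from its model is flagged as foreground, and matched pixels update the model.

```python
import numpy as np

# Single-Gaussian-per-pixel background model (a simplification of the GMM).
class RunningGaussianBackground:
    def __init__(self, first_frame, lr=0.05, k=2.5):
        self.mu = first_frame.astype(float)       # per-pixel mean
        self.var = np.full(first_frame.shape, 50.0)  # per-pixel variance
        self.lr, self.k = lr, k                   # learning rate, threshold

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mu) ** 2
        fg = d2 > (self.k ** 2) * self.var        # foreground mask
        # update mean/variance only where the pixel matches the background
        bg = ~fg
        self.mu[bg] += self.lr * (frame[bg] - self.mu[bg])
        self.var[bg] += self.lr * (d2[bg] - self.var[bg])
        return fg

rng = np.random.default_rng(1)
frames = [np.full((24, 24), 100.0) + rng.normal(0, 2, (24, 24)) for _ in range(20)]
model = RunningGaussianBackground(frames[0])
for f in frames[1:]:
    model.apply(f)

# A bright 6x6 object entering the scene is flagged as foreground,
# while the static noisy background is not.
test_frame = frames[0].copy()
test_frame[8:14, 8:14] = 200.0
mask = model.apply(test_frame)
print(mask[8:14, 8:14].all(), mask[:4, :4].any())  # True False
```

Because every pixel's update is independent, this is exactly the kind of computation that maps one-thread-per-pixel onto a GPGPU, which is where the reported speedup comes from.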

Keywords: connected components, embrace threads, local weighted kernel, structuring elements

Procedia PDF Downloads 417
3477 Parallel Processing in near Absence of Attention: A Study Using Dual-Task Paradigm

Authors: Aarushi Agarwal, Tara Singh, I.L Singh, Anju Lata Singh, Trayambak Tiwari

Abstract:

Simple discrimination in the near absence of attention has been widely observed. Dual-task studies with natural scenes have claimed that such categorization is preattentive in nature, proceeding simultaneously with an attentionally demanding task. In this study, multiple images were presented at the periphery, initiating parallel processing in the near absence of attention. For the central demanding task, rotated letters were presented in both conditions, while natural and animal images were presented in the periphery. To find the breaking point of the ability to perform in the near absence of attention, one, two or three peripheral images were presented simultaneously with the central task, and subjects had to respond when all belonged to the same category. Individual participant performance did not show a significant difference between the central and peripheral tasks when a single peripheral image was shown. In the case of two images, high-level parallel processing could take place with little attentional resources. The eye-tracking results support this evidence, as no major saccade was made in a large number of trials. Three-image presentations proved to be the breaking point of the capacity to perform without attentional assistance, as participants showed a confused eye-gaze pattern and failed to make the natural and animal image discriminations. Thus, we can conclude that attention and awareness are independent mechanisms with limited capacities.

Keywords: attention, dual-task paradigm, parallel processing, breaking point, saccade

Procedia PDF Downloads 204
3476 Short-Term Effects of an Open Monitoring Meditation on Cognitive Control and Information Processing

Authors: Sarah Ullrich, Juliane Rolle, Christian Beste, Nicole Wolff

Abstract:

Inhibition and cognitive flexibility are essential parts of executive functions in our daily lives, as they enable the avoidance of unwanted responses or the selective switching between mental processes to generate appropriate behavior. There is growing interest in improving inhibition and response selection through brief mindfulness-based meditations. Arguably, open-monitoring meditation (OMM) improves inhibitory and flexibility performance by optimizing cognitive control and information processing. Yet, the underlying neurophysiological processes have been poorly studied. Using the Simon-Go/Nogo paradigm, the present work examined the effect of a single 15-minute smartphone-app-based OMM on inhibitory performance and response selection in meditation novices. We used both behavioral and neurophysiological measures (event-related potentials, ERPs) to investigate which subprocesses of response selection and inhibition are altered after OMM. The study was conducted in a randomized crossover design with N = 32 healthy adults, investigating Go and Nogo trials in the paradigm. The results show that as little as 15 minutes of OMM can improve response selection and inhibition at behavioral and neurophysiological levels. More specifically, OMM reduces the rate of false alarms, especially during Nogo trials, regardless of congruency. It appears that OMM optimizes conflict processing and response inhibition compared to no meditation, which is also reflected in the ERP N2 and P3 time windows. The results may be explained by the metacontrol model, which argues in terms of a specific processing mode with increased flexibility and inclusive decision-making under OMM. Importantly, however, the effects of OMM were only evident when there was prior experience with the task. It is likely that OMM frees up cognitive resources, as the amplitudes of these ERPs decreased. OMM novices seem to induce finer adjustments during conflict processing after familiarization with the task.

Keywords: EEG, inhibition, meditation, Simon Nogo

Procedia PDF Downloads 188
3475 Increasing Added-Value of Salak Fruit by Freezing Frying to Improve the Welfare of Farmers: Case Study of Sleman Regency, Yogyakarta-Indonesia

Authors: Sucihatiningsih Dian Wisika Prajanti, Himawan Arif Susanto

Abstract:

Fruits are perishable products and fetch relatively low prices, especially at harvest time. Generally, farmers sell the products shortly after harvest without any processing. Farmers also act only as price takers, leaving them with little power to set prices. Sometimes, farmers are exploited by middlemen, especially during abundant harvests. Therefore, an effort is required to process fruits and create innovations that make them more durable and give them higher economic value. The purpose of this research is to determine how to increase the added value of fruits that have high economic value. The research involved 60 Salak fruit farmers as the sample, and descriptive analysis was used to analyze the data. The results show that the selling price of Salak fruit is very low. Hence, to increase the added value of the fruits, fruit processing is carried out by freezing frying, which makes the fruits last longer. In addition to increasing their added value, the products can be stored for further processing without the risk of the crops rotting or going unsold.

Keywords: fruits processing, Salak fruit, freezing frying, farmer’s welfare, Sleman, Yogyakarta

Procedia PDF Downloads 330