Search results for: analog signal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5053

3673 Analysis of Translational Ship Oscillations in a Realistic Environment

Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting

Abstract:

To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is utilized and applied on board to measure the ship's oscillating motions. The three-axis accelerations and three-axis rotational rates provided by the sensor are used as observations. The mathematical model for processing the observation data includes determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor's frame. As a side effect, the method eliminates sensor noise and other unwanted errors. The results include not only roll and pitch but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), motion characteristics are estimated; these agree well with the measurements after processing with the suggested method.
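
The estimation step described above can be illustrated with a minimal extended Kalman filter skeleton. The sketch below (Python/NumPy) shows only the generic predict/update cycle; the paper's ship-motion process model, lever-arm vector to the center of gravity, and body-frame transfer matrix are not reproduced here and are left as placeholder callables.

```python
# Minimal EKF predict/update skeleton (NumPy). The process model f, the
# measurement model h, and their Jacobians are placeholders to be supplied;
# the paper's ship-motion model itself is not reproduced here.
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One extended-Kalman-filter cycle for state x, covariance P, observation z."""
    # predict: propagate state and covariance through the nonlinear process model
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # update: correct the prediction with the sensor observation
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```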

Keywords: extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation

Procedia PDF Downloads 509
3672 Image Segmentation Techniques: Review

Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo

Abstract:

Image segmentation is the process of dividing an image into several sections, such as the background and the foreground of an object. It is a critical technique in both image-processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research has addressed color images. Most segmentation algorithms or techniques vary based on the input data and the application, and nearly all of them are unsuitable for noisy environments. Much of the existing work uses the Markov Random Field (MRF), which is computationally demanding but is said to be robust to noise. In recent years, image segmentation has been applied to problems such as simplified processing of an image, interpretation of the contents of an image, and easier analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed in past years, including convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, and thresholding. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed, along with applications and potential future developments around image segmentation. This review concludes that no single technique is perfectly suitable for segmenting all types of images, but the use of hybrid techniques yields more accurate and efficient results.
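
As a concrete example of one of the technique families reviewed (thresholding), the short sketch below applies Otsu's global threshold to a built-in grayscale test image using scikit-image; it is an illustrative sketch, not a method drawn from the reviewed literature.

```python
# Thresholding segmentation example: Otsu's method on scikit-image's built-in
# "camera" test image. Purely illustrative of the thresholding family.
from skimage import data, filters

image = data.camera()                        # built-in grayscale test image
threshold = filters.threshold_otsu(image)    # global Otsu threshold
foreground = image > threshold               # boolean mask: foreground vs. background

print(f"Otsu threshold: {threshold}, foreground fraction: {foreground.mean():.2%}")
```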

Keywords: clustering-based, convolution-network, edge-based, region-growing

Procedia PDF Downloads 70
3671 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator–photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required by front-illuminated diodes. In backlit diodes, the electronic noise is already improved because of the reduction of the load capacitance due to shorter routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the clinical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper presents an overview of detector technologies and image-chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 183
3670 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text so as to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French and Spanish not only have rigid gendered grammar rules but are also associated with historically patriarchal societies. The progression of society goes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
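
A minimal, hypothetical sketch of the kind of corpus analysis described above is given below: counting how often a small set of derogatory adjectives co-occurs with female versus male referents in sentences. The word lists and the two-sentence corpus are illustrative placeholders, not the author's data or algorithm.

```python
# Hypothetical co-occurrence count of derogatory adjectives with gendered
# referents. Word lists and corpus are placeholders for illustration only.
from collections import Counter
import re

FEMALE = {"she", "her", "woman", "women", "girl"}
MALE = {"he", "him", "man", "men", "boy"}
ADJECTIVES = {"hysterical", "shrill", "bossy"}   # illustrative list only

def cooccurrence_counts(sentences):
    counts = Counter()
    for sentence in sentences:
        tokens = set(re.findall(r"[a-z']+", sentence.lower()))
        if tokens & ADJECTIVES:
            if tokens & FEMALE:
                counts["female"] += 1
            if tokens & MALE:
                counts["male"] += 1
    return counts

corpus = ["She was called hysterical by the press.",
          "He was praised for his leadership."]
print(cooccurrence_counts(corpus))   # Counter({'female': 1})
```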

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 101
3669 Extracellular Enzymes from Halophilic Bacteria with Potential in Agricultural Secondary Flow Recovery Products

Authors: Madalin Enache, Simona Neagu, Roxana Cojoc, Ioana Gomoiu, Delia Ionela Dobre, Ancuta Roxana Trifoi

Abstract:

Various types of halophilic and halotolerant microorganisms that can be cultivated in the laboratory on culture media with a wide range of sodium chloride content have been isolated from several salted environments. The extracellular enzymes of these microorganisms show activity across this spectrum of salinity, thus being attractive for several biotechnological processes developed at high ionic strength. In the present work, amylase, protease, esterase, lipase, cellulase, pectinase, xylanase and inulinase activities were screened for more than 50 bacterial strains isolated from water samples and sapropelic mud from four saline and hypersaline lakes located in the Romanian plain. On the other hand, cellulase and pectinase activity was also detected in some halotolerant microorganisms isolated from a secondary agricultural flow of grape processing. The preliminary data revealed that, of all tested strains, seven harbor protease activity, eight amylase activity, four esterase, another four lipase, three pectinase, and for one strain either cellulase or pectinase activity was identified. No enzymes able to hydrolyze inulin added to the culture media were identified. Several strains isolated from sapropelic mud showed multiple extracellular enzymatic activities, namely three strains harbor three activities and another seven harbor two activities. The data revealed that amylase and protease activities were detected more frequently than the other tested enzymes. In the case of pectinases, their ability to increase resveratrol recovery from the material resulting from grape processing was investigated. In this way, the material resulting from grape processing was treated with microbial supernatant for different durations (two, four and 24 hours), and the content of resveratrol was determined by High Performance Liquid Chromatography (HPLC). The preliminary data revealed some positive results of this treatment.

Keywords: halophilic microorganisms, enzymes, pectinase, salinity

Procedia PDF Downloads 173
3668 Adapting Grain Crop Cleaning Equipment for Sesame and Other Emerging Spice Crops

Authors: Ramadas Narayanan, Surya Bhattrai, Vu Hoan

Abstract:

Threshing and cleaning are crucial post-harvest procedures carried out to separate the grain or seed from the harvested plant and eliminate any potential contaminants or foreign debris. After harvesting, threshing and cleaning are necessary to obtain clean seeds of high quality that are acceptable for consumption or further processing. For mechanised production, threshing can be conducted in a thresher. Afterwards, the seeds are cleaned in dedicated seed-cleaning facilities. This research investigates the effectiveness of the Kimseed MK3 cleaning equipment, designed for grain crops, in processing new crops such as sesame, fennel and kalonji. Systematic trials were subsequently conducted to adapt the equipment to applications in sesame and spice crops, with the aim of developing methods for mechanising harvest and post-harvest operations. For sesame, a two-step process in the cleaning machine is recommended to remove large and small contaminants: the first step removes the large contaminants, and the second removes the smaller ones. The optimal parameters for cleaning fennel are a shaker frequency of 6.0 to 6.5 Hz and an airflow of 1.0 to 1.5 m/s. The optimal parameters for cleaning kalonji are a shaker frequency of 5.5 to 6.0 Hz and an airflow of 1.0 to just under 1.5 m/s.

Keywords: sustainable mechanisation, seed cleaning process, optimal setting, shaker frequency

Procedia PDF Downloads 57
3667 Obtaining Nutritive Powder from Peel of Mangifera Indica L. (Mango) as a Food Additive

Authors: Chajira Garrote, Laura Arango, Lourdes Merino

Abstract:

This research explains how to obtain nutritious powder from the peels of ripe mangoes of the Hilacha variety (Mangifera indica L.) for use as a food additive. The study also intends to make efficient use of the by-products resulting from the mango pulp manufacturing operations of processing companies, with the aim of giving them added value. The physical and chemical characteristics of the mango peels and the benefits they may offer humans were studied. The unit operations for processing the mango peels and producing the nutritive powder as a food additive are explained. Emphasis is placed on the preliminary operations applied to the raw material and on the drying method, which is very important in this project for obtaining the desired characteristics of the nutritive powder. Once the powder was obtained, it was subjected to laboratory tests to determine its functional properties: water retention capacity (WRC) and oil retention capacity (ORC); a sensory analysis was also performed to determine the product profile. The nutritive powder from the ripe mango peels showed excellent WRC and ORC values of 7.236 g of water/g B.S. and 1.796 g of oil/g B.S., respectively, and the sensory analysis defined a complete profile of color, odor and texture of the nutritive powder, making it suitable for use in the food industry.

Keywords: mango, peel, powder, nutritive, functional properties, sensory analysis

Procedia PDF Downloads 342
3666 Effective Solvents for Proteins Recovery from Microalgae

Authors: Win Nee Phong, Tau Chuan Ling, Pau Loke Show

Abstract:

From an industrial perspective, the exploitation of microalgae as a protein source is of great economic and commercial interest due to numerous attractive characteristics. Nonetheless, the release of protein from microalgae is limited by the multiple layers of the rigid, thick cell wall, which generally contains a large proportion of cellulose. Thus, an efficient cell disruption process is required to rupture the cell wall. The conventional downstream processing methods, which typically involve several unit operations such as disruption, isolation, extraction, concentration and purification, are energy-intensive and costly. To reduce the overall cost and establish a feasible technology for successful large-scale production, the microalgal industry today demands a more cost-effective and eco-friendly technique in downstream processing. One of the main challenges in extracting proteins from microalgae is the presence of the rigid cell wall. This study aims to provide some guidance on the selection of an efficient solvent to facilitate protein release during the cell disruption process. The effects of solvent types such as methanol, ethanol, 1-propanol and water in rupturing the microalgae cell wall were studied. Interestingly, water was found to be the most effective solvent for recovering proteins from microalgae, while also being the cheapest among the solvents tested.

Keywords: green, microalgae, protein, solvents

Procedia PDF Downloads 240
3665 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering

Authors: Zelalem Fantahun

Abstract:

Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. It is one of the most fundamental and basic tasks in almost all natural language processing. In natural language processing, the need to provide a large amount of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the limited availability of tagged corpora is a bottleneck for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to provide a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications; unsupervised algorithms instead seek out similarities between pieces of data in order to determine whether they can be characterized as forming a group. This paper explicates the development of an unsupervised part-of-speech tagger using K-Means clustering for the Amharic language, since a large amount of raw text is produced in day-to-day activities. In the development of the tagger, the following procedures were followed. First, the unlabeled data (raw text) is divided into 10 folds and tokenization takes place; at this level, the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes word frequency and the syntactic and morphological features of a word. The third phase is clustering; among different clustering algorithms, K-Means is selected and implemented in this study to bring groups of similar words together. The fourth phase is mapping, which involves looking at each cluster carefully and assigning the most common tag to the group. This study finds two features capable of distinguishing one part-of-speech from others, namely morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging. In order to increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features not included in this study, such as semantically related information. Finally, based on the experimental results, the system achieves a maximum accuracy of 81%.
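
The clustering phase can be sketched as below, grouping word-level feature vectors with scikit-learn's K-Means. Character n-gram counts stand in for the morphological and positional features used in the study, and the tiny token list is a placeholder rather than Amharic corpus data.

```python
# K-Means clustering of word feature vectors (scikit-learn). Character n-gram
# counts are used here as a crude stand-in for the study's morphological and
# positional features; the token list is a toy example.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

tokens = ["going", "coming", "house", "car", "big", "red"]   # placeholder tokens

vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vectorizer.fit_transform(tokens).toarray()

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for token, label in zip(tokens, kmeans.labels_):
    print(token, "-> cluster", label)
# In the mapping phase, each cluster would be assigned its most common POS tag.
```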

Keywords: POS tagging, Amharic, unsupervised learning, k-means

Procedia PDF Downloads 426
3664 Regulation on the Protection of Personal Data Versus Quality Data Assurance in the Healthcare System Case Report

Authors: Elizabeta Krstić Vukelja

Abstract:

The digitization of personal data is a consequence of the development of information and communication technologies, which create a new work environment with many advantages and challenges, but also potential threats to privacy and personal data protection. Regulation (EU) 2016/679 of the European Parliament and of the Council has become a law and obligation that should address the issues of personal data protection and information security. The existence of the Regulation leads to the conclusion that national legislation in the field of the virtual environment, the protection of the rights of EU citizens and the processing of their personal data is insufficiently effective. In the health system, special emphasis is placed on the processing of special categories of personal data, such as health data. The healthcare industry is recognized as a particularly sensitive area in which a large amount of medical data is processed, and its digitization enables quick access and quick identification of the insured person. The protection of the individual requires quality IT solutions that guarantee the technical protection of special categories of personal data. However, the real problems are of a technical and human nature, together with the spatial limitations of applying the Regulation. Some conclusions will be drawn by analyzing the implementation of the basic principles of the Regulation in the Croatian healthcare system and comparing it with similar activities in other EU member states.

Keywords: regulation, healthcare system, personal data protection, quality data assurance

Procedia PDF Downloads 22
3663 Effect of Humor on Pain and Anxiety in Patients with Rheumatoi̇d Arthri̇ti̇s: A Prospective, Randomized Controlled Study

Authors: Burcu Babadağ Savaş, Nihal Orlu, Güler Balcı Alparslan, Ertuğrul Çolak, Cengiz Korkmaz

Abstract:

Introduction/objectives: We aimed to investigate the effect of humor on pain and state anxiety in patients with rheumatoid arthritis (RA) receiving biologic intravenous (IV) infusion therapy. Method: The study sample consisted of 36 patients who met the classification criteria for RA and the inclusion criteria at a rheumatology outpatient clinic of a university hospital between September 2020 and November 2021. Two groups were formed: the intervention group (watching a comedy movie) (n=18) and the control group (n=18). In the intervention group, each patient watched a comedy movie of his/her choice from an archive created by the researchers during the biologic IV infusion therapy (approximately 90-120 minutes). The data collection instruments used before and after the intervention were a descriptive identification form, the visual analog scale (VAS), and the state anxiety scale. Results: The mean VAS scores of patients in the intervention group were 5.05 ± 2.01 in the pre-test and 2.61 ± 1.91 in the post-test. The mean state anxiety scores of patients in the intervention group were 45.94 ± 9.97 in the pre-test and 34.22 ± 6.57 in the post-test. Thus, patients who watched comedy movies during biologic IV infusion therapy in the infusion center had a greater reduction in pain scores than the control group, and the effect size was small. Although there was a decrease in state anxiety scores in both groups, there was no significant difference between the groups and the effect size was negligible. Conclusions: During IV infusion therapy, watching comedy movies is recommended as a nursing care intervention for reducing pain in patients with RA, in cooperation with other health professionals.

Keywords: watching comedy movie, humor, pain, anxiety, nursing, care

Procedia PDF Downloads 126
3662 Experimental and Modelling Performances of a Sustainable Integrated System of Conditioning for Bee-Pollen

Authors: Andrés Durán, Brian Castellanos, Marta Quicazán, Carlos Zuluaga-Domínguez

Abstract:

Bee-pollen is an apiculture-derived food product with growing appreciation among consumers, given its remarkable nutritional and functional composition, in particular protein (24%), dietary fiber (15%), phenols (15-20 GAE/g) and carotenoids (600-900 µg/g). These properties are determined by the geographical and climatic characteristics of the region where it is collected. Several countries are recognized for their pollen production, e.g., China, the United States, Japan and Spain, among others. Beekeepers use traps at the entrance of the hive where bee-pollen is collected; after the removal of foreign particles and drying, the product is ready to be marketed. However, in countries located along the equator, the absence of seasons and a constant tropical climate throughout the year favor more rapid spoilage of foods with elevated water activity. The climatic conditions also trigger the proliferation of microorganisms and insects. This, added to the fact that beekeepers usually do not have adequate processing systems for bee-pollen, leads to deficiencies in the quality and safety of the product. In contrast, the Andean region of South America, lying on the equator, typically has a high production of bee-pollen of up to 36 kg/year/hive, four times higher than in countries with marked seasons. This region also lies at altitudes above 2500 meters above sea level, with extreme solar ultraviolet radiation all year long. As a defense mechanism against radiation, plants produce more secondary metabolites acting as antioxidant agents; hence, plant products such as bee-pollen contain remarkably more phenolics and carotenoids than those collected in other places. Considering this, the improvement of bee-pollen processing facilities through technical modifications and the implementation of an integrated cleaning and drying system for the product in an apiary in the area was proposed. The beehives were modified through the installation of alternative bee-pollen traps to avoid sources of contamination. The processing facility was modified according to Good Manufacturing Practices, implementing the combined use of a cabin dryer with temperature control and forced airflow and a greenhouse-type solar drying system. Additionally, for the separation of impurities, a cyclone-type system was implemented, complementary to screening equipment. With these modifications, a decrease in the content of impurities and in the microbiological load of bee-pollen was seen from the first stages, principally a reduction in the presence of molds and yeasts and in the number of impurities of animal origin. The use of the greenhouse solar dryer integrated with the cabin dryer allowed the processing of larger quantities of product with shorter waiting times in storage, reaching a moisture content of about 6% and a water activity lower than 0.6, which is appropriate for the conservation of bee-pollen. Additionally, the contents of functional and nutritional compounds were not affected; an increase of up to 25% in phenol content was even observed, along with a non-significant decrease in carotenoid content and antioxidant activity.

Keywords: beekeeping, drying, food processing, food safety

Procedia PDF Downloads 91
3661 Agricultural Cooperative Model: A Panacea for Economic Development of Small Scale Business Famers in Ilesha, Osun State, Nigeria

Authors: Folasade Adegbaju, Olusola Arowolo, Olufisayo Onawumi

Abstract:

The Owolowo ile-ege garri processing industry, a small-scale cassava processing industry located in Ilesha, Osun State, was purposively selected as a case study because it is a cooperative business. The industry was established in 1991 by eight (8) men who were mostly retirees. A researcher-made questionnaire was used to collect information from thirty (30) respondents: the manager, four official staff and 25 randomly selected processors in the industry. The study found that within twelve years of utilizing their self-raised initial capital of N240,000 (two hundred and forty thousand naira), this cassava-based industry had made an impact and attracted the involvement of many more people: within the period of the study (2007-2011), the processors had quadrupled in number (e.g. 8 to 30), and the facilities (equipment) in use had increased from one machine and a frying pot to many. This translated into being able to produce large quantities of fried garri, fufu and also starch for marketing to the people in Ilesha and neighbouring cities like Ibadan and Lagos, which is indicative of economic growth. The industry also became a source of employment for community members in the sense that, at the time of the study, four staff were employed to work in and coordinate the industry. It was observed that despite all the odds facing small-scale industry and the problem of people migrating from rural to urban areas, this agro-based industry still existed successfully in the community, and many such industries can be replicated by agricultural cooperative groups nationwide so as to further boost productivity as well as the economy of the area and the nation at large. However, government and individuals still have major roles to play in ensuring the growth and development of the nation in this respect. Local agricultural cooperative groups should form regional cooperative consortia with more networking for the farmers, in order to create more jobs for the young ones and to increase agricultural productivity in the country, thus resulting in a better and more sustainable economy.

Keywords: agricultural cooperative, cassava processing industry, model, small scale enterprise

Procedia PDF Downloads 268
3660 Averting Food Crisis in Nigeria and Beyond, Activities of the National Food Security Programme

Authors: Musa M. Umar, S. G. Ado

Abstract:

The paper examines the activities of the National Programme for Food Security (NPFS) for averting food insecurity in Nigeria and beyond. The components of the NPFS include site development, outreach, community development and management support. On each site, core activities comprise crop productivity, production diversification and agro-processing. The outreach activities consist of input and commodity marketing, rural finance, strengthening of research-extension-farmer-input linkages, health and nutrition, and expansion of site activities. The community development activities include small-scale rural infrastructure, micro-earth dams and community forestry. The overall benefits include food security, improved productivity, marketing and processing, enhanced land and water use, increased animal production and fish catches, improved nutrition, reduction in post-harvest losses and value addition, improved rural infrastructure and diversification of production leading to improved livelihoods. The NPFS should foster sustained development of small-holder agriculture and income generation.

Keywords: food-security, community development, post-harvest, production

Procedia PDF Downloads 340
3659 Effective Glosses in Reading to Help L2 Vocabulary Learning for Low-Intermediate Technology University Students in Taiwan

Authors: Pi-Lan Yang

Abstract:

It is controversial which type of gloss condition (i.e., gloss language or gloss position) is more effective for second or foreign language (L2) vocabulary learning. The present study compared performance on learning ten English words under the conditions of L2 English reading with no glosses and with glosses of Chinese equivalents/translations and of L2 English definitions, placed either at the side of a page or on an attached sheet, for low-intermediate Chinese-speaking learners of English who were technology university students in Taiwan. It was found, first, that performance on the immediate posttest and the delayed posttest was overall better in the gloss conditions than in the no-gloss condition. Next, the glosses of Chinese translations were found to be more effective and sustainable than those of L2 English definitions. Finally, the effects of L2 English glosses at the side of a page were observed to be less sustainable than those on an attached sheet. In addition, an opinion questionnaire also showed a preference for glosses of Chinese translations in L2 English reading. These results are discussed in terms of automated lexical access, sentence processing mechanisms, and the trade-off nature of the storage and processing functions in the working memory system proposed by the capacity theory of language comprehension.

Keywords: glosses of Chinese equivalents/translations, glosses of L2 English definitions, L2 vocabulary learning, L2 English reading

Procedia PDF Downloads 230
3658 Wastewater from the Food Industry: Characteristics and Possibilities of Sediments on the Basis of the Dairy Industry

Authors: Monika Gałwa-Widera, Anna Kwarciak–Kozłowska, Lucyna Sławik-Dembiczak

Abstract:

The management of sewage sludge from small and medium-sized wastewater treatment plants is a vital issue that concerns both scholars and those directly involved in wastewater treatment and sludge management. According to the Law on Waste, the waste generator is responsible for processing it so that the resulting product has a minimal impact on the environment. Small and medium-sized wastewater treatment plants have to deal with sludge management technologies far removed from the drying and incineration of sewage sludge, so other technologies can be used here. One of them is the composting of sewage sludge, a method of processing and disposing of sewage sludge that ensures its effective disposal. By composting, we can obtain a product that contains significant amounts of organic matter and has assessable fertilizing qualities. Modifications to the process running in biological reactors allow a wholesome product to be obtained more rapidly. The research presented and discussed in this publication relates to assisting the composting process of sewage sludge and structural biomass material in the proportions of 35% biomass, 55% sludge and 10% structural material, using a method that involves returning leachate from the composting process to the composting batch.

Keywords: biomass, composting, industry, sewage sludge

Procedia PDF Downloads 425
3657 Indium-Gallium-Zinc Oxide Photosynaptic Device with Alkylated Graphene Oxide for Optoelectronic Spike Processing

Authors: Seyong Oh, Jin-Hong Park

Abstract:

Recently, neuromorphic computing based on brain-inspired artificial neural networks (ANNs) has attracted a huge amount of research interest due to its technological ability to facilitate massively parallel, low-energy-consuming, and event-driven computing. In particular, research on artificial synapses that imitate the biological synapses responsible for human information processing and memory is in the spotlight. Here, we demonstrate a photosynaptic device wherein the synaptic weight is governed by a mixed spike consisting of a voltage spike and a light spike. Compared to the device operated only by the voltage spike, ∆G in the proposed photosynaptic device significantly increased from -2.32 nS to 5.95 nS with no degradation of nonlinearity (NL) (potentiation/depression values changed from 4.24/8 to 5/8). Furthermore, the Modified National Institute of Standards and Technology (MNIST) digit pattern recognition rates improved from 36% and 49% to 50% and 62% in ANNs consisting of synaptic devices with 20 and 100 weight states, respectively. We expect that photosynaptic device technology driven by optoelectronic spikes will play an important role in implementing neuromorphic computing systems in the future.
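
The potentiation/depression nonlinearity mentioned above is commonly described in the neuromorphic-device literature with an exponential conductance-update model; the sketch below illustrates that generic model, with the nonlinearity parameter and the number of weight states as inputs. It is not the authors' extracted device fit, and the parameter values are placeholders (the 5.95 nS figure is reused only as an illustrative upper conductance).

```python
# Generic exponential conductance-update (potentiation) model with a
# nonlinearity parameter nl and a fixed number of weight states. Placeholder
# parameters; not the authors' device fit.
import numpy as np

def potentiation_curve(g_min, g_max, n_states, nl):
    """Conductance after each potentiation pulse; nl = 0 gives a linear update."""
    pulses = np.arange(n_states)
    if nl == 0:
        return g_min + (g_max - g_min) * pulses / (n_states - 1)
    scale = (g_max - g_min) / (1 - np.exp(-nl))
    return g_min + scale * (1 - np.exp(-nl * pulses / (n_states - 1)))

g = potentiation_curve(g_min=0.0, g_max=5.95e-9, n_states=20, nl=5.0)
print(np.round(g * 1e9, 2))   # conductance trajectory in nS over 20 pulses
```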

Keywords: optoelectronic synapse, IGZO (Indium-Gallium-Zinc Oxide) photosynaptic device, optoelectronic spiking process, neuromorphic computing

Procedia PDF Downloads 158
3656 CdS Quantum Dots as Fluorescent Probes for Detection of Naphthalene

Authors: Zhengyu Yan, Yan Yu, Jianqiu Chen

Abstract:

A novel sensing system has been designed for naphthalene detection based on the quenched fluorescence signal of CdS quantum dots. The fluorescence intensity of the system was reduced significantly after the CdS quantum dots were added to the water pollution model because of the static fluorescence quenching mechanism. Herein, we have demonstrated that this facile methodology can offer convenient analysis at low cost, with recovery rates of 97.43%-103.2%, and it has promising application prospects.

Keywords: CdS quantum dots, modification, detection, naphthalene

Procedia PDF Downloads 474
3655 Low-Voltage and Low-Power Bulk-Driven Continuous-Time Current-Mode Differentiator Filters

Authors: Ravi Kiran Jaladi, Ezz I. El-Masry

Abstract:

Emerging technologies such as ultra-wideband wireless access, which operate at ultra-low power, present several challenges due to their inherent design, which limits the use of voltage-mode filters. Continuous-time current-mode (CTCM) filters have therefore become very popular in recent times because they have a wider dynamic range, improved linearity, and extended bandwidth compared to their voltage-mode counterparts. The goal of this research is to develop analog filters that are suitable for today's scaled CMOS technologies. The bulk-driven MOSFET is one of the most popular low-power design techniques for the existing challenges, while other techniques have obvious shortcomings. In this work, a CTCM gate-driven (GD) differentiator is presented with a frequency range from dc to 100 MHz, operating at a very low supply voltage of 0.7 V. A novel CTCM bulk-driven (BD) differentiator is designed for the first time, which reduces the power consumption to a fraction of that of the GD differentiator. The GD and BD differentiators were simulated in CADENCE using TSMC 65 nm technology for all the bilinear and biquadratic band-pass frequency responses. These basic building blocks can be used to implement higher-order filters. A 6th-order cascaded CTCM Chebyshev band-pass filter was designed using the GD and BD techniques. In conclusion, low-power GD and BD 6th-order Chebyshev stagger-tuned band-pass filters were simulated, and all the parameters obtained from the resulting realizations are analyzed and compared. Monte Carlo analysis is performed for both 6th-order filters, and the results of the sensitivity analysis are presented.
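
As a quick illustration of the target response of a 6th-order Chebyshev band-pass design, the sketch below evaluates a digital prototype with SciPy. The sampling rate, band edges, and ripple are assumed placeholder values, not the paper's design targets, and the sketch says nothing about the bulk-driven circuit realization itself.

```python
# Frequency response of a 6th-order Chebyshev type-I band-pass prototype
# (SciPy). Sampling rate, band edges, and ripple are assumed placeholders.
import numpy as np
from scipy import signal

fs = 400e6                                    # assumed sampling rate for the digital prototype
band = np.array([20e6, 80e6]) / (fs / 2)      # normalized band edges
b, a = signal.cheby1(N=3, rp=1, Wn=band, btype="bandpass")   # N=3 band-pass -> order 6

w, h = signal.freqz(b, a, worN=2048, fs=fs)
gain_db = 20 * np.log10(np.abs(h) + 1e-12)
print(f"Pass-band peak gain: {gain_db.max():.2f} dB")
```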

Keywords: bulk-driven (BD), continuous-time current-mode filters (CTCM), gate-driven (GD)

Procedia PDF Downloads 248
3654 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform

Authors: Khadija Refouh

Abstract:

Culture-bound expressions have been a bottleneck for Natural Language Processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvements in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, there remain some serious challenges that neural machine translation faces when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” to “يا له من طقس سيء! تمطر القطط والكلاب” in the target language Arabic, which is an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy score (BLEU). BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
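
The BLEU evaluation step can be illustrated with NLTK's sentence-level BLEU, as in the minimal sketch below; the reference and candidate token lists are placeholders rather than sentences from the study's corpus.

```python
# Sentence-level BLEU with NLTK, using smoothing. Reference and candidate
# token lists are placeholders, not sentences from the study's corpus.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["quel", "mauvais", "temps", "il", "pleut", "des", "cordes"]]
candidate = ["quel", "mauvais", "temps", "il", "pleut", "des", "cordes"]

smooth = SmoothingFunction().method1
score = sentence_bleu(reference, candidate, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")   # 1.0 for an exact match against the single reference
```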

Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms

Procedia PDF Downloads 127
3653 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer, such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia and Waldenström's macroglobulinaemia (WM). Cardiac failure refers to the inability of the heart muscle to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve relevant information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case review: the number of cases retrieved for the combined drug/adverse drug reaction pair was 212 global ICSRs as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs), where a value of 1.0 represents the highest score for the best-documented ICSRs. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data mining: the disproportionality between the observed and the expected reporting rates for a drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by the WHO-UMC to measure the reporting ratio. A positive IC reflects a higher statistical association, while negative values indicate a lower statistical association, with the null value equal to zero. The result (IC=1.5) revealed a positive statistical association for the drug/ADR combination, which means that “ibrutinib” with “cardiac failure” has been observed more often than expected when compared to other medications in the WHO database. Conclusion: Health regulators and health care professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and the monitoring of any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
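
The information component can be sketched with the commonly cited WHO-UMC shrinkage form IC = log2((N_observed + 0.5) / (N_expected + 0.5)), where the expected count assumes independence between drug and reaction. The counts in the example below are placeholders, not VigiBase figures.

```python
# Simplified information component for a drug/ADR pair, following the commonly
# cited shrinkage form IC = log2((N_observed + 0.5) / (N_expected + 0.5)).
# All counts below are placeholders, not VigiBase data.
import math

def information_component(n_pair, n_drug, n_reaction, n_total):
    """IC for a drug/reaction pair; expected count assumes independence."""
    expected = n_drug * n_reaction / n_total
    return math.log2((n_pair + 0.5) / (expected + 0.5))

ic = information_component(n_pair=212, n_drug=20_000,
                           n_reaction=150_000, n_total=25_000_000)
print(f"IC = {ic:.2f}")   # positive values mean the pair is reported more than expected
```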

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 109
3652 Multiscale Process Modeling of Ceramic Matrix Composites

Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya

Abstract:

Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of cycles of pyrolysis to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gasses, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models to link molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed with this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs for a wide range of applications.

Keywords: digital engineering, finite elements, manufacturing, molecular dynamics

Procedia PDF Downloads 84
3651 Sunspot Cycles: Illuminating Humanity's Mysteries

Authors: Aghamusa Azizov

Abstract:

This study investigates the correlation between solar activity and sentiment in news media coverage, using a large-scale dataset of solar activity since 1750 and over 15 million articles from "The New York Times" dating from 1851 onwards. Employing Pearson's correlation coefficient and multiple Natural Language Processing (NLP) tools (TextBlob, VADER, and DistilBERT), the research examines the extent to which fluctuations in solar phenomena are reflected in the sentiment of historical news narratives. The findings reveal that the correlation between solar activity and media sentiment is generally negligible, suggesting a weak influence of solar patterns on the portrayal of events in news media. Notably, a moderate positive correlation was observed between the sentiments derived from TextBlob and VADER, indicating consistency across NLP tools. The analysis provides insights into the historical impact of solar activity on human affairs and highlights the importance of using multiple analytical methods to understand complex relationships in large datasets. The study contributes to the broader understanding of how extraterrestrial factors may intersect with media-reported events and underlines the intricate nature of interdisciplinary research in the data science and historical domains.
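
The correlation step described above can be sketched as below: TextBlob polarity scores for a handful of headlines correlated against sunspot numbers with Pearson's r. The headlines and sunspot values are illustrative placeholders, not the study's dataset.

```python
# TextBlob polarity scores correlated against sunspot numbers with Pearson's r.
# Headlines and sunspot values are illustrative placeholders only.
from textblob import TextBlob
from scipy.stats import pearsonr

headlines = ["Markets rally on strong earnings",
             "Severe storm devastates coastal towns",
             "City celebrates festival in good spirits",
             "Economic outlook remains uncertain"]
sunspots = [45.0, 120.5, 60.2, 98.7]          # placeholder monthly sunspot numbers

polarity = [TextBlob(h).sentiment.polarity for h in headlines]
r, p_value = pearsonr(polarity, sunspots)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```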

Keywords: solar activity correlation, media sentiment analysis, natural language processing, historical event patterns

Procedia PDF Downloads 60
3650 C2N2 Adsorption on the Surface of a BN Nanosheet: A DFT Study

Authors: Maziar Noei

Abstract:

Calculations showed that when the nanosheet is doped with Si, the adsorption energy is about -85.62 to -87.43 kcal/mol and the HOMO/LUMO energy gap (Eg) is reduced significantly. The boron nitride nanosheet is a suitable adsorbent for cyanogen and can be used in cyanogen separation processes. It appears that the boron nitride nanosheet (BNNS) becomes a suitable semiconductor after doping. In the presence of cyanogen (C2N2), the doped BNNS generates an electrical signal directly and, therefore, can potentially be used in cyanogen sensors.

Keywords: nanosheet, DFT, cyanogen, sensors

Procedia PDF Downloads 268
3649 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company

Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze

Abstract:

As a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation, rather than by controlling those sources, Taguchi Robust Design entails the design of ideal goods by developing a product that has minimal variance in its characteristics while meeting the desired exact performance. This paper examined the concept of this manufacturing approach and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The selection of the Taguchi L9 orthogonal array, based on the four parameters and the three levels of variation for each parameter, revealed, with a range of 2.17, that waiting is the major waste the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must also be reduced to the barest minimum. After proposing -33.84 as the highest optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocated the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concluded by recommending SMED in order to drastically reduce setup time, which leads to unnecessary waiting.
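
The smaller-the-better signal-to-noise ratio used in Taguchi analysis, S/N = -10*log10(mean(y^2)), can be computed as in the sketch below; the replicate waiting-time values are placeholders, not the firm's measurements.

```python
# Smaller-the-better Taguchi signal-to-noise ratio, S/N = -10*log10(mean(y^2)).
# The replicate waiting-time values are placeholders, not the firm's data.
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for responses where smaller values are better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

trial_waiting = [48.0, 52.5, 50.1]            # placeholder replicate measurements (minutes)
print(f"S/N (smaller-the-better): {sn_smaller_the_better(trial_waiting):.2f} dB")
```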

Keywords: lean production system, single minute exchange of dies, signal to noise ratio, Taguchi robust design, waste

Procedia PDF Downloads 112
3648 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks

Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton

Abstract:

Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and by the availability of qualified personnel. Thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in the condition monitoring of blades. The analysis provides useful information on the different modes of vibration and the natural frequencies by exploring the different shapes that can be taken up during vibration, since every mode shape has its corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios for establishing a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation and low computational cost, so traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly by using results from finite element modal analysis as training data. The network performance evaluation shows that the artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes. This is achieved without the need for extensive signal analysis. The approach offers the advantage that the network is able to classify mode shapes and can be employed in real time, with simplicity of implementation and accurate prediction. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
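
A minimal sketch of the frequency-to-mode-shape mapping is given below using a small scikit-learn MLP classifier; the synthetic two-frequency training data stands in for the finite element modal results used in the study, and the class layout is purely illustrative.

```python
# Small feed-forward network (scikit-learn MLP) mapping natural frequencies to
# mode-shape classes. The synthetic training data stands in for the study's
# finite element modal results.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# synthetic (frequency_1, frequency_2) samples for three mode-shape classes
X = np.vstack([rng.normal([120, 340], 5, size=(50, 2)),
               rng.normal([210, 510], 5, size=(50, 2)),
               rng.normal([305, 690], 5, size=(50, 2))])
y = np.repeat([0, 1, 2], 50)                  # class label = mode-shape index

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("Predicted mode shape:", clf.predict([[212.0, 508.0]])[0])   # expected: class 1
```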

Keywords: modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition

Procedia PDF Downloads 140
3647 Performance of an Improved Fluidized System for Processing Green Tea

Authors: Nickson Kipng’etich Lang’at, Thomas Thoruwa, John Abraham, John Wanyoko

Abstract:

Green tea is made from the top two leaves and buds of a shrub, Camellia sinensis, of the family Theaceae and the order Theales. The green tea leaves are picked and immediately dried or steamed to prevent fermentation. The fluid bed drying technique is a common method used in drying green tea because of its ease of design and construction and the fluidization of fine tea particles. Major problems with this method are a significant loss of the chemical content of the leaf and of the green appearance of the tea, retention of high moisture content in the leaves, and bed channeling and defluidization. The energy associated with the drying technology has been shown to be a vital factor in determining the quality of green tea. As part of the implementation, a prototype dryer was built that facilitated a sequence of operations involving steaming, cooling, pre-drying and final drying. The major findings of the project concerned the quality characteristics of the tea leaves and the energy consumption during processing. The optimal design achieved a moisture content of 4.2 ± 0.84%. With the optimum drying temperature of 100 ºC, the specific energy consumption was 1697.8 kJ/kg and the evaporation rate was 4.272 x 10⁻⁴ kg·m⁻²·s⁻¹. The energy consumption in a fluidized system can be further reduced by focusing on energy-saving designs.

Keywords: evaporation rate, fluid bed dryer, maceration, specific energy consumption

Procedia PDF Downloads 292
3646 Retrofitting Cement Plants with Oxyfuel Technology for Carbon Capture

Authors: Peloriadi Konstantina, Fakis Dimitris, Grammelis Panagiotis

Abstract:

Methods for carbon capture and storage (CCS) can play a key role in the reduction of industrial CO₂ emissions, especially in the cement industry, which accounts for 7% of global emissions. Cement industries around the world have committed to addressing this problem by reaching carbon neutrality by the year 2050. The aim of the work presented here is to contribute to this decarbonization strategy by integrating first-generation oxyfuel technology into cement production plants. This technology has been shown to improve fuel efficiency while providing one of the most cost-effective solutions compared to other capture methods. A validated simulation of the cement plant was thus used as a basis to develop an oxyfuel-retrofitted cement process. The process model for the oxyfuel technology is developed in the ASPEN (Advanced System for Process Engineering) PLUS™ simulation software. The process consists of an air separation unit (ASU), an oxyfuel cement plant with coal and alternative solid fuel (ASF) as feedstock, and a carbon dioxide processing unit (CPU). A detailed description and analysis of the CPU will be presented, including the findings of a literature review and simulation results regarding the effects of flue gas impurities during operation. Acknowledgment: This research has been conducted in the framework of the EU-funded AC2OCEM project, which investigates first- and second-generation oxyfuel concepts.

Keywords: oxyfuel technology, carbon capture and storage, CO₂ processing unit, cement, aspen plus

Procedia PDF Downloads 167
3645 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel

Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi

Abstract:

The yield point represents the upper limit of the forces that can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage or the type of fracture changes depending on this condition. As it is difficult to accurately detect the yield points of the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, measuring the deformation with various strain gauges and monitoring the surface temperature with the help of a thermography camera. The yield point of the specimens was estimated with the help of the temperature dip that occurs due to the thermoelastic effect at the onset of plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of temperature imperfection and the light source were checked by carrying out the tests during the daytime as well as at midnight; by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera, it can be concluded that the camera is independent of the testing time and the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the help of the thermographic technique.

Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point

Procedia PDF Downloads 87
3644 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for the identification of activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activities is very difficult, especially as the number of activities grows. The accuracy of a classifier depends on the quality of this feature set. Furthermore, a larger number of features results in high computational complexity, while fewer features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and to normalize the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence, it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimally reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
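
The encoder/decoder structure described above (without the meta-heuristic optimization of the MSE) can be sketched in Keras as below; the layer sizes, code length, and random input matrix are placeholders rather than the study's configuration.

```python
# Deep autoencoder sketch in Keras: feature vectors compressed through hidden
# layers into a small code, trained to reconstruct the input (MSE loss).
# Layer sizes, code length, and the random input matrix are placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features, code_size = 256, 16
X = np.random.rand(1000, n_features).astype("float32")    # placeholder feature matrix

inputs = keras.Input(shape=(n_features,))
h = layers.Dense(128, activation="relu")(inputs)
h = layers.Dense(64, activation="relu")(h)
code = layers.Dense(code_size, activation="relu", name="code")(h)
h = layers.Dense(64, activation="relu")(code)
h = layers.Dense(128, activation="relu")(h)
outputs = layers.Dense(n_features, activation="sigmoid")(h)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")          # MSE between input and reconstruction
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

encoder = keras.Model(inputs, autoencoder.get_layer("code").output)
reduced = encoder.predict(X, verbose=0)                    # reduced feature set
print(reduced.shape)                                       # (1000, 16)
```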

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 100