Search results for: Neural Processing Element.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3937

247 Artificial Intelligence: A Comprehensive and Systematic Literature Review of Applications and Comparative Technologies

Authors: Z. M. Najmi

Abstract:

Over the years, the question around Artificial Intelligence has always been one with many answers. Whether by means of use in business and industry or complicated algorithmic programming, management of these technologies has always been the core focus. More recently, technologies have been questioned in industry and society alike as to whether they have improved human-centred design, assisted choices and objectives, and had a hand in systematic processes across the board. With these questions, the answer may lie within AI technologies and the steps needed to remove common human error. Elements such as Machine Learning, Deep Learning, Recommender Systems and Natural Language Processing will all be features to consider moving forward. Our previous intervention with AI applications has resulted in increased productivity but has raised concerns for the continuation of traditional human-centred occupations. Emerging technologies such as Augmented Reality and Virtual Reality have all played a part in this during AI's prominent rise. As mentioned, AI has been constantly under the microscope; the benefits and drawbacks may seem endless, but AI is something we must take notice of and adapt into our everyday lives. The aim of this paper is to give an overview of the technologies surrounding AI and its related technologies. A comprehensive review has been written as a timeline of the developing events and key points in the history of Artificial Intelligence. This research is gathered entirely from secondary sources and academic statements of knowledge, brought together to produce an understanding of the timeline of AI.

Keywords: Artificial Intelligence, Deep Learning, Augmented Reality, Reinforcement Learning, Machine Learning, Supervised Learning.

246 Processing and Assessment of Quality Characteristics of Composite Baby Foods

Authors: Reihaneh Ahmadzadeh Ghavidel, Mehdi Ghiafeh Davoodi

Abstract:

The usefulness of weaning foods to meet the nutrient needs of children is well recognized; most of them are precooked, roller-dried mixtures of cereal and/or legume flours which possess a high viscosity and bulk when reconstituted. The objective of this study was to formulate composite weaning foods using cereals, malted legumes and vegetable powders and analyze them for nutrients, functional properties and sensory attributes. Selected legumes (green gram and lentil) were germinated, dried and dehulled. Roasted wheat, rice, carrot powder and skim milk powder were also used. All the ingredients were mixed in different proportions to obtain four formulations, made into 30% slurry and dried in a roller drier. The products were analyzed for proximate principles, mineral content, and functional and sensory qualities. The analysis showed the following range of constituents per 100 g of formulation on a dry weight basis: protein, 18.1-18.9 g; fat, 0.78-1.36 g; iron, 5.09-6.53 mg; calcium, 265-310 mg. The lowest water absorption capacity was found in the wheat-green gram based sample and the highest in the rice-lentil based sample. Overall sensory qualities of all foods were graded as “good” and “very good” with no significant differences. The results confirm that the formulated weaning foods were nutritionally superior, functionally appropriate and organoleptically acceptable.

Keywords: malted legumes, weaning foods, nutrition, functional properties

245 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to improve the surface roughness of a manufactured part produced by a CNC milling machine. It presents a case study where the surface roughness of milled aluminum must be improved to reduce or eliminate defects and to raise the process capability indices Cp and Cpk for a CNC milling process. The Six Sigma methodology, the DMAIC (define, measure, analyze, improve, and control) approach, was applied in this study to improve the process, reduce defects, and ultimately reduce costs. The Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that led to the targeted surface roughness specified by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness. The noise factor is the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings are correct. The new settings also improved the process capability index. This study shows that the Taguchi-based Six Sigma approach can be efficiently used to phase out defects and improve the process capability index of the CNC milling process.
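
To make the Taguchi step concrete, the sketch below scores a hypothetical L9(3^4) experiment with the smaller-is-better signal-to-noise ratio conventionally used for surface roughness; the factor assignments and roughness values are illustrative, not the paper's data.

```python
import numpy as np

# L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0, 1, 2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Hypothetical surface-roughness replicates (um) for each of the 9 runs.
y = np.array([
    [0.61, 0.65], [0.58, 0.60], [0.72, 0.70],
    [0.55, 0.57], [0.66, 0.64], [0.59, 0.62],
    [0.74, 0.71], [0.63, 0.60], [0.56, 0.58],
])

# Smaller-is-better S/N ratio: -10*log10(mean(y^2)) per run.
sn = -10.0 * np.log10((y ** 2).mean(axis=1))

# Mean S/N per factor level; the best level maximizes the S/N ratio.
for f in range(4):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"factor {f}: level means = {np.round(means, 2)}, "
          f"best level = {int(np.argmax(means))}")
```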

Keywords: CNC machining, Six Sigma, Surface roughness, Taguchi methodology.

244 Production and Purification of Monosaccharides by Hydrolysis of Sugar Cane Bagasse in an Ionic Liquid Medium

Authors: T. R. Bandara, H. Jaelani, G. J. Griffin

Abstract:

The conversion of lignocellulosic waste materials, such as sugar cane bagasse, to biofuels such as ethanol has attracted significant interest as a potential element for transforming transport fuel supplies to totally renewable sources. However, the refractory nature of the cellulosic structure of lignocellulosic materials has impeded progress on developing an economic process whereby the cellulose component may be effectively broken down to glucose monosaccharides and then purified to allow downstream fermentation. Ionic liquid (IL) treatment of lignocellulosic biomass has been shown to disrupt the crystalline structure of cellulose, thus potentially enabling the cellulose to be more readily hydrolysed to monosaccharides. Furthermore, conventional hydrolysis of lignocellulosic materials yields byproducts that inhibit efficient fermentation of the monosaccharides. However, selective extraction of monosaccharides from an aqueous/IL phase into an organic phase utilizing a combination of boronic acids and quaternary amines has shown promise as a purification process. Hydrolysis of sugar cane bagasse immersed in an aqueous solution with IL (1-ethyl-3-methylimidazolium acetate) was conducted at different pH values and at temperatures below 100 °C. It was found that the use of a high concentration of hydrochloric acid to acidify the solution inhibited the hydrolysis of bagasse. At high pH (i.e. basic conditions, using a sodium hydroxide catalyst), yields of total reducing sugars (TRS) were reduced due to the rapid degradation of the sugars formed. For purification trials, a supported liquid membrane (SLM) apparatus was constructed whereby a synthetic solution containing xylose and glucose in an aqueous IL phase was transported across a membrane impregnated with phenyl boronic acid/Aliquat 336 to an aqueous phase. The transport rate of xylose was generally higher than that of glucose, indicating that an SLM scheme may be useful not only for purifying sugars from undesirable toxic compounds, but also for fractionating sugars to improve fermentation efficiency.

Keywords: Biomass, bagasse, hydrolysis, monosaccharide, supported liquid membrane, purification.

243 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases that affects humankind, and exposes millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling, implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first propose a Type-I fuzzy system, which achieved an accuracy of approximately 90.9%. In this system, the process of diagnosis faces vagueness and uncertainty in the final decision, so the imprecise knowledge was managed by using interval Type-II fuzzy logic. The results obtained show that the interval Type-II fuzzy system is able to diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of Type-II fuzzy systems for modeling uncertainty.

Keywords: Hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection.

242 Wildfires Assessed by Remote Sense Images and Burned Land Monitoring

Authors: M. C. Proença

Abstract:

The tools described in this paper enable the location of burned areas where the annihilation of natural habitats took place, and establish a baseline for major changes in forest ecosystems during recovery. Moreover, the result allows the follow-up of the surface fuel loading, allowing the evaluation and guidance of restoration measures in remote areas by phased time planning. This case study implements the evaluation of burned areas that suffered successive wildfires in mainland Portugal during the summer of 2017, which killed more than 60 people. The goal is to show that this evaluation can be done with remote sensing data, free of charge, on a simple laptop with open-source software, describing the not-so-simple methodology step by step to make it accessible to local workers in the affected areas, where the availability of information is essential for the immediate planning of mitigation measures, such as restoring road access, allocating funds for the recovery of human dwellings, and assessing further needs for restoration of the ecological system. Wildfires also devastate forest ecosystems, having a direct impact on vegetation cover and killing or driving away the animal population, besides the loss of all crops in rural areas that are essential as local resources. Economic interests are also affected, as burned pinewood becomes useless for the noblest applications, so its value decreases, and resin extraction ends for several years.
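
As a hedged illustration of one common burned-area workflow on SENTINEL-2 imagery (the paper's exact method is not reproduced here), the sketch below computes the differenced Normalized Burn Ratio (dNBR) from pre- and post-fire NIR/SWIR bands; the random arrays stand in for band rasters, and the 0.27 severity threshold is an assumed convention.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir, swir = nir.astype(float), swir.astype(float)
    return (nir - swir) / (nir + swir + 1e-9)

# Stand-ins for SENTINEL-2 band rasters (B8A = narrow NIR, B12 = SWIR),
# one scene acquired before the fire and one after.
pre_b8a, pre_b12 = np.random.rand(100, 100), np.random.rand(100, 100)
post_b8a, post_b12 = np.random.rand(100, 100), np.random.rand(100, 100)

# dNBR = NBR_pre - NBR_post; larger values indicate more severe burning.
dnbr = nbr(pre_b8a, pre_b12) - nbr(post_b8a, post_b12)

# Assumed threshold: dNBR > 0.27 flags moderate-to-high burn severity.
burned_mask = dnbr > 0.27
print(f"burned pixels: {burned_mask.sum()} of {burned_mask.size}")
```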

Keywords: Image processing, remote sensing, wildfires, burned areas, SENTINEL-2.

241 A Novel SVM-Based OOK Detector in Low SNR Infrared Channels

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques that is playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM, in terms of the bit error rate (BER) metric, are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
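
A minimal sketch of the detector comparison, assuming a simple baseband OOK model in AWGN; note that in plain AWGN the two detectors perform similarly, and the paper's reported SVM gains arise in the fading and scattering channel models, which are not modeled here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical OOK link: bits map to amplitudes {0, A}, AWGN at low SNR.
n_bits, amplitude, noise_std = 5000, 1.0, 0.8
bits = rng.integers(0, 2, n_bits)
received = amplitude * bits + noise_std * rng.normal(size=n_bits)

# Classical ML detection for equiprobable OOK in AWGN: threshold at A/2.
ml_decisions = (received > amplitude / 2).astype(int)

# SVM detector trained on a pilot block, then applied to the payload.
n_train = 1000
svm = SVC(kernel="rbf").fit(received[:n_train, None], bits[:n_train])
svm_decisions = svm.predict(received[n_train:, None])

print("ML  BER:", np.mean(ml_decisions[n_train:] != bits[n_train:]))
print("SVM BER:", np.mean(svm_decisions != bits[n_train:]))
```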

Keywords: Least square-support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.

240 Isolation and Screening of Fungal Strains for β-Galactosidase Production

Authors: Parmjit S. Panesar, Rupinder Kaur, Ram S. Singh

Abstract:

Enzymes are biocatalysts which catalyze biochemical processes and thus have a wide variety of applications in the industrial sector. β-Galactosidase (E.C. 3.2.1.23), also known as lactase, is one of the prime enzymes with significant potential in the dairy and food processing industries. It has the capability to catalyze both the hydrolytic reaction for the production of lactose-hydrolyzed milk and the transgalactosylation reaction for the synthesis of prebiotics such as lactulose and galactooligosaccharides. These prebiotics have various nutritional and technological benefits. Although the enzyme is naturally present in almonds, peaches, apricots and a variety of other fruits and animals, extraction of the enzyme from these sources increases its cost. Therefore, focus has shifted towards the production of low cost enzyme from microorganisms such as bacteria, yeast and fungi. Compared to yeast and bacteria, fungal β-galactosidase is generally preferred for being extracellular and thermostable in nature. Keeping the above in view, the present study was carried out to isolate β-galactosidase producing fungal strains from food as well as agricultural wastes. A total of more than 100 fungal cultures were examined for their potential in enzyme production. All the fungal strains were screened using X-gal and IPTG as inducers in modified Czapek Dox Agar medium. Among the various isolated fungal strains, the strain exhibiting the highest enzyme activity was chosen for further phenotypic and genotypic characterization. The strain was identified as Rhizomucor pusillus on the basis of 5.8S rRNA gene sequencing data.

Keywords: β-galactosidase, enzyme, fungus, isolation.

239 District 10 in Tehran: Urban Transformation and the Survey Evidence of Loss in Place Attachment in High Rises

Authors: Roya Morad, W. Eirik Heintz

Abstract:

The identity of a neighborhood is inevitably shaped by the architecture and the people of that place. Conventionally, the streets within each neighborhood served as a semi-public, semi-private extension of the private living spaces. The street as a design element formed a hybrid condition that was neither totally public nor private, and it encouraged social interactions. Thus, by creating a sense of community, one of the most basic human needs, belonging, was achieved. Similar to major global cities, Tehran has undergone serious urbanization. Developing into a capital city of high rises has resulted in an increase in urban density. Although allocating more residential units to each neighborhood was a critical response to the population boom and the limited land area of the city, it also created a crisis in terms of social communication and place attachment. District 10 in Tehran is the neighborhood that has undergone the most urban transformation among the capital's 22 districts and currently has the highest population density. This paper explores how the active streets in District 10 have changed into their current condition of high rises with a lack of meaningful social interactions amongst their inhabitants. A residential building can be thought of as a large group of people. One would think that as the number of people increases, the opportunities for social communication would increase as well. However, according to the survey, there is an inverse relationship between the two: as the number of people in a residential building increases, the quality of each acquaintance is reduced, and the depth of relationships between people tends to decrease. This comes from the anonymity of being part of a crowd and the lack of social spaces characteristic of most high-rise apartment buildings. Without a sense of community, attachment to a neighborhood is decreased. This paper further explores how the neighborhood can participate in fulfilling one's need for social interaction and focuses on the qualitative aspects of alternative spaces that can redevelop the sense of place attachment within the community.

Keywords: High density, place attachment, social communication, street life, urban transformation.

238 Meaning Chasing Kiddies: Children's Perception of Metaphors Used in Printed Advertisements

Authors: Asina Gülerarslan

Abstract:

Today's children, who are born into a more colorful, more creative, more abstract and more accessible communication environment than their ancestors as a result of dizzying advances in technology, have an interesting capacity to perceive and make sense of the world. Millennium children, who live in an environment where all kinds of marketing communication efforts are more intensive than ever, are, from their early childhood on, subject to all kinds of persuasive messages. As regards advertising communication, it outperforms all the other marketing communication efforts in creating little consumer individuals and, through the processing of codes and signs, plays a significant part in building a world of seeing, thinking and understanding for children. Children who are raised with metaphorical expressions such as tales and riddles also meet this fast and effective communication of meaning in advertisements. Children's perception of metaphors, which help grasp the “product and its promise” both verbally and visually and facilitate association between them, is the subject of this study. Stimulating and activating imagination, metaphors have unique advantages in promoting the product and its promise, especially in print advertisements, which have certain limitations. This study deals comparatively with literal and metaphoric versions of print advertisements belonging to various product groups and attempts to discover to what extent the advertisements are liked, recalled, perceived and found persuasive. The sample group of the study, which was conducted in two elementary schools situated in areas with different socioeconomic features, consisted of children aged 12.

Keywords: Children, metaphor, perception, print advertisements, recall.

237 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams

Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin

Abstract:

In recent years, fire accidents have steadily increased and the amount of property damage caused by these accidents has gradually risen. In damaging a building's structure, fire incidents bring about not only property damage but also strength degradation and member deformation; as a result, the building's structural capacity is undermined. Examining the degradation and the deformation is very important because reusing the building is more economical than reconstruction. Therefore, engineers need to investigate strength degradation and member deformation carefully and make sure that they apply the right rehabilitation methods. This study aims at evaluating the deformation characteristics of fire damaged and rehabilitated normal strength concrete beams through both experiments and finite element analyses. For the experiments, control beams, fire damaged beams and rehabilitated beams were tested to examine deformation characteristics. Ten test beam specimens with a compressive strength of 21 MPa were fabricated, and the main test variables were selected as cover thicknesses of 40 mm and 50 mm and fire exposure times of 1 hour or 2 hours. After heating, the fire damaged beams were air-cured for 2 months, while the rehabilitated beams were repaired with polymeric cement mortar after removal of the fire damaged concrete cover. All beam specimens were tested under four-point loading. FE analyses were executed to investigate the effects of the main parameters applied to the experimental study. Test results show that both the maximum load and the stiffness of the rehabilitated beams are higher than those of the fire damaged beams. In addition, the structural behaviors predicted by the analyses also show a good rehabilitation effect, and the predicted load-deflection curves are similar to the experimental results. Furthermore, the proposed analytical method can be used to predict the deformation characteristics of fire damaged and rehabilitated concrete beams without the time and cost burden of the experimental process.

Keywords: Fire, Normal strength concrete, Rehabilitation, Reinforced concrete beam.

236 Design of Low Power and High Speed Digital IIR Filter in 45nm with Optimized CSA for Digital Signal Processing Applications

Authors: G. Ramana Murthy, C. Senthilpari, P. Velrajkumar, Lim Tien Sze

Abstract:

In this paper, a design methodology to implement a low-power and high-speed 2nd order recursive digital Infinite Impulse Response (IIR) filter has been proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. A proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the recursive section of the IIR filter, to reduce the propagation delay. Furthermore, high-level algorithms designed for the optimization of the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture, and it is analyzed by using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. It is observed from the experimental results that the proposed 6T based design method can find better IIR filter designs in terms of power and delay than those obtained by using efficient general multipliers.
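
The shift-adds substitution can be pictured as follows; this is a behavioral sketch of the idea, not the authors' 6T CSA circuit, and the coefficient value is hypothetical.

```python
def shift_add_mul(x, terms):
    """Multiply x by a constant given as [(sign, shift), ...] so that
    constant = sum(sign * 2**-shift); in hardware these become shifts
    feeding an adder tree instead of a general multiplier."""
    return sum(sign * x / (1 << shift) for sign, shift in terms)

# Hypothetical feedback coefficient a1 = 0.90625 = 1 - 2^-4 - 2^-5.
A1 = [(+1, 0), (-1, 4), (-1, 5)]

# First-order recursive section y[n] = x[n] + a1*y[n-1] without multiplies.
x = [1.0, 0.0, 0.0, 0.0, 0.0]
y, y_prev = [], 0.0
for xn in x:
    y_prev = xn + shift_add_mul(y_prev, A1)
    y.append(y_prev)
print(y)  # impulse response: 1, 0.90625, 0.90625^2, ...
```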

Keywords: CSA Full Adder, Delay unit, IIR filter, Low-Power, PDP, Parametric Analysis, Propagation Delay, Throughput, VLSI.

235 Reutilization of Organic and Peat Soils by Deep Cement Mixing

Authors: Bee-Lin Tang, Ismail Bakar, Chee-Ming Chan

Abstract:

Limited infrastructure development on peats and organic soils is a serious geotechnical issue common to many countries of the world, especially Malaysia, which has about 1.5 million ha of these problematic soils. These soils have high water content and organic content, exhibit different mechanical properties, and may also change chemically and biologically with time. Constructing structures on peaty ground involves the risk of ground failure and extreme settlement. Nowadays, much effort is needed to make peatlands usable for construction due to increased land use. The deep mixing method, employing cement as a binder, is generally used as a measure against peaty/organic ground failure problems. The technique is widely adopted because it can improve the ground considerably in a short period of time. An understanding of geotechnical properties such as the shear strength, stiffness and compressibility behavior of these soils is required before continuing construction on them. Therefore, 1-1.5 m peat soil samples from the state of Johor and an organic soil from Melaka, Malaysia were investigated. Cement was added to the soil in the pre-mixing stage with water-cement ratios in the range of 3.5, 7, 14 and 140 for peats and 5, 10 and 30 for organic soils, essentially to modify the original soil textures and properties. The mixtures, in slurry form, were poured into polyvinyl chloride (PVC) tubes and cured at a room temperature of 25 °C for 7, 14 and 28 days. Laboratory experiments were conducted, including unconfined compressive strength and bender element tests, to monitor the improved strength and stiffness of the 'stabilised mixed soils'. In between, scanning electron microscope (SEM) observations were made to investigate changes in the microstructures of the stabilised soils and to evaluate the hardening effect of peat and organic soils stabilised with cement. This preliminary effort indicated that pre-mixing peat and organic soils contributes to gaining soil strength while helping engineers to establish a new method for improving such problematic ground in further practical and long-term applications.

Keywords: peat soils, organic soils, cement stabilisation, strength, stiffness.

234 Development of a Paediatric Head Model for the Computational Analysis of Head Impact Interactions

Authors: G. A. Khalid, M. D. Jones, R. Prabhu, A. Mason-Jones, W. Whittington, H. Bakhtiarydavijani, P. S. Theobald

Abstract:

Head injury in childhood is a common cause of death or permanent disability from injury. However, despite its frequency and significance, there is little understanding of how a child’s head responds during injurious loading. Whilst Infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understand injury biomechanics, it is the authors’ opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant’s head, developed from high-resolution computer tomography scans and informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. Biofidelity of the computational model was confirmed by global validation against published PMHS data, replicating experimental impact tests with a series of computational simulations in terms of head kinematic responses. Numerical results confirm that the FE head model’s mechanical response is in favourable agreement with the PMHS drop test results.

Keywords: Finite element analysis, impact simulation, infant head trauma, material properties, post mortem human subjects.

233 Object Identification with Color, Texture, and Object-Correlation in CBIR System

Authors: Awais Adnan, Muhammad Nawaz, Sajid Anwar, Tamleek Ali, Muhammad Ali

Abstract:

The need for efficient information retrieval has increased more than ever in recent years because of the frequent use of digital information in our lives. We see a lot of work in the area of textual information, but in multimedia information we cannot find much progress. For text-based information, technologies such as data mining and data marts, which grew from the basic concept of the database somewhere around 1960, are now in working use. In image search, and especially in image identification, computerized systems are at very initial stages. Even in the area of image search we cannot see as much progress as in the case of text-based search techniques. One main reason for this is the widespread roots of image search, where many areas like artificial intelligence, statistics, image processing and pattern recognition play their role. Even human psychology, perception and cultural diversity have their share in the design of a good and efficient image recognition and retrieval system. A new object-based search technique is presented in this paper, where objects in the image are identified on the basis of their geometrical shapes and other features like color and texture, and where object correlation augments this search process. To be more focused on object identification, simple images were selected for the work to reduce the role of segmentation in the overall process; however, the same technique can also be applied to other images.
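
A minimal sketch of the kind of color and texture features such a CBIR system might combine; the exact descriptors and the object-correlation step of the paper are not reproduced, and random arrays stand in for real images.

```python
import numpy as np

def features(img):
    """Concatenate a coarse RGB color histogram with simple texture
    statistics (gradient-magnitude mean/std) into one feature vector."""
    hist = [np.histogram(img[..., c], bins=8, range=(0, 256))[0]
            for c in range(3)]
    color = np.concatenate(hist).astype(float)
    color /= color.sum()
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    texture = np.array([mag.mean(), mag.std()])
    return np.concatenate([color, texture])

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in "images": random arrays in place of real pictures.
query = np.random.randint(0, 256, (64, 64, 3))
candidate = np.random.randint(0, 256, (64, 64, 3))
print(similarity(features(query), features(candidate)))
```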

Keywords: Object correlation, Geometrical shape, Color, texture, features, contents.

232 Target Detection using Adaptive Progressive Thresholding Based Shifted Phase-Encoded Fringe-Adjusted Joint Transform Correlator

Authors: Inder K. Purohit, M. Nazrul Islam, K. Vijayan Asari, Mohammad A. Karim

Abstract:

A new target detection technique is presented in this paper for the identification of small boats in coastal surveillance. The proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene and separate any objects present in the scene from the background. The preprocessing step results in an image containing only the foreground objects, such as boats, trees and other cluttered regions, and hence reduces the search region for the correlation step significantly. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform correlator (SPFJTC) technique, which produces a single, delta-like correlation peak for a potential target present in the input scene. A post-processing step uses a peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results are presented to show that the proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.
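
The authors' APT scheme is not specified in detail here; as a stand-in, the sketch below shows a generic ISODATA-style iterative threshold that likewise separates bright foreground objects from the background before correlation. The pixel statistics are invented.

```python
import numpy as np

def iterative_threshold(img, tol=0.5):
    """Generic ISODATA-style iterative threshold (a stand-in for the
    paper's adaptive progressive thresholding): split pixels into two
    groups and move the threshold to the midpoint of their means."""
    t = img.mean()
    while True:
        fg, bg = img[img > t], img[img <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Synthetic scene: dark sea background plus a few bright object pixels.
scene = np.concatenate([np.random.normal(40, 10, 5000),   # background
                        np.random.normal(180, 20, 500)])  # foreground
t = iterative_threshold(scene)
print(f"threshold = {t:.1f}, foreground pixels = {(scene > t).sum()}")
```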

Keywords: Adaptive progressive thresholding, fringe adjusted filters, image segmentation, joint transform correlation, synthetic discriminant function

231 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed-form LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, fitting of the resulting moments of the LSF to a probability density function (PDF) is needed. In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first order reliability method, FORM). The results in the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
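
A minimal sketch of the PEM ingredient, assuming Rosenblueth's classical 2^n two-point scheme with uncorrelated, symmetric variables and a hypothetical linear limit state; fitting a normal PDF to the resulting moments then yields a reliability index, mirroring the mixing with FORM described above.

```python
import numpy as np
from itertools import product
from scipy.stats import norm

def two_point_estimate(g, means, stds):
    """Rosenblueth's 2^n point estimate method: evaluate g at all
    mean +/- std corners with equal weights and return the first two
    moments of g (valid for uncorrelated, symmetric variables)."""
    vals = np.array([g(np.array(means) + np.array(signs) * np.array(stds))
                     for signs in product([-1, 1], repeat=len(means))])
    return vals.mean(), vals.std()

# Hypothetical limit state g = R - S (resistance minus load effect).
g = lambda x: x[0] - x[1]
mu_g, sigma_g = two_point_estimate(g, means=[300.0, 200.0], stds=[30.0, 40.0])

# Fit a normal PDF to the moments (the "well-known distribution" step)
# and read off a reliability index beta and failure probability Pf.
beta = mu_g / sigma_g
print(f"beta = {beta:.2f}, Pf = {norm.cdf(-beta):.2e}")
```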

Keywords: Structural reliability, reinforced concrete bridges, mixing approaches, point estimate method, Monte Carlo simulation.

230 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the Six Sigma methodology, the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. This study shows that the Six Sigma and Taguchi methodology can be efficiently used to determine the important factors that improve the process capability index of the injection molding process.
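
The capability indices at the center of this and the earlier milling case study are simple to compute; a sketch with hypothetical shrinkage data and assumed specification limits:

```python
import numpy as np

def cp_cpk(samples, lsl, usl):
    """Process capability indices: Cp compares the tolerance width to
    the 6-sigma process spread; Cpk also penalizes off-center means."""
    mu, sigma = samples.mean(), samples.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical shrinkage measurements (%) against assumed spec limits.
shrinkage = np.random.default_rng(1).normal(1.50, 0.05, 50)
print(cp_cpk(shrinkage, lsl=1.35, usl=1.65))
```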

Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.

229 AI-Based Approaches for Task Offloading, Resource Allocation and Service Placement of IoT Applications: State of the Art

Authors: Fatima Z. Cherhabil, Mammar Sedrati, Sonia-Sabrina Bendib

Abstract:

In order to support the continued growth and critical latency of IoT applications, and to overcome various obstacles of traditional data centers, Mobile Edge Computing (MEC) has emerged as a promising solution that extends cloud data-processing and decision-making to edge devices. By adopting a MEC structure, IoT applications can be executed locally, on an edge server, on different fog nodes, or in distant cloud data centers. However, we are often faced with wanting to optimize conflicting criteria, such as minimizing the energy consumption of the limited local capabilities (in terms of CPU, RAM, storage and bandwidth) of mobile edge devices while trying to keep high performance (reducing response time, increasing throughput and service availability) at the same time. Achieving one goal may affect the other, making Task Offloading (TO), Resource Allocation (RA) and Service Placement (SP) complex processes. Studying the trade-off between conflicting criteria is a nontrivial multi-objective optimization problem. The paper provides a survey of different recent Multi-Objective Optimization (MOO) approaches to TO, SP and RA used in edge computing environments, particularly Artificial Intelligence (AI) based ones, to satisfy the various objectives, constraints and dynamic conditions related to IoT applications.

Keywords: Mobile Edge Computing, Multi-Objective Optimization, Artificial Intelligence Approaches, Task Offloading, Resource Allocation, Service Placement.

228 Seismic Behavior and Loss Assessment of High-Rise Buildings with Light Gauge Steel-Concrete Hybrid Structure

Authors: Bing Lu, Shuang Li, Hongyuan Zhou

Abstract:

The steel-concrete hybrid structure has been extensively employed in high-rise and super high-rise buildings. The light gauge steel-concrete hybrid structure, which combines a light gauge steel structure with a concrete hybrid structure, possesses advantages of both. The seismic behavior and loss assessment of three high-rise buildings with three different concrete hybrid structures were investigated through finite element software. The three concrete hybrid structures are the reinforced concrete column-steel beam (RC-S) hybrid structure, the concrete-filled steel tube column-steel beam (CFST-S) hybrid structure, and the tubed concrete column-steel beam (TC-S) hybrid structure. Nonlinear time-history analysis of the three high-rise buildings under 80 earthquakes was carried out. The simulations indicated that the seismic performance of the three high-rise buildings was superior: under extremely rare earthquakes, the maximum inter-story drifts of the three buildings are significantly lower than 1/50. The inter-story drift and floor acceleration of the high-rise building with the CFST-S hybrid structure were larger than those of the building with the RC-S hybrid structure, and smaller than those of the building with the TC-S hybrid structure. Then, based on the time-history analysis results, the post-earthquake repair cost ratio and repair time of the three buildings were predicted through the economic performance analysis method proposed in the FEMA P-58 report. Under frequent earthquakes, basic earthquakes and rare earthquakes, the repair cost ratio and repair time of the three buildings were less than 5% and 15 days, respectively. Under extremely rare earthquakes, the repair cost ratio and repair time of the building with the TC-S hybrid structure were the highest among the three buildings. Due to the advantages of the CFST-S hybrid structure, it could be extensively employed in high-rise buildings subjected to earthquake excitations.

Keywords: seismic behavior, loss assessment, light gauge steel, concrete hybrid structure, high-rise building, time-history analysis

227 Simulation and Design of an Aerospace Mission Powered by “Candy” Type Fuel Engines

Authors: N. Hernández Huertas, F. Rojas Mora

Abstract:

Sounding rockets are aerospace vehicles that were developed in the mid-20th century, and since then numerous investigations have been carried out with the aim of innovating in this type of technology. However, the costs associated with producing this type of technology are usually quite high, and therefore the challenge today is to reduce them. The main objective of this document is to present the design process of a Colombian aerospace mission capable of reaching the thermosphere using low-cost “Candy” type solid fuel engines. This mission is the latest development of the Uniandes Aerospace Project (PUA for its Spanish acronym), an undergraduate and postgraduate research group at Universidad de los Andes (Bogotá, Colombia) dedicated to this type of technology. The investigations that have been carried out on Candy-type solid fuel, a compound of potassium nitrate and sorbitol, have allowed the production of engines powerful enough to reach space, which represents a unique technological advance in Latin America and an important development in experimental rocketry. Following an iterative engineering design methodology, it was possible to design a 2-stage sounding rocket with one solid fuel engine in each stage, which was then simulated in RockSim V9.0 software and reached an apogee of approximately 150 km above sea level. Similarly, a speed of Mach 5 was obtained, and a finite element analysis showed that the rocket is strong enough to withstand such speeds. Under these premises, it was demonstrated that it is possible to build a high-power aerospace mission at low cost using Candy-type solid fuel engines. The feasibility of carrying out similar missions clearly depends on the ability to replicate the engines in the best way since, as mentioned above, the design of the rocket is adequate to reach supersonic speeds and reach space. Consequently, with a team of at least 3 members, the mission can be completed in less than 3 months. Therefore, in publishing this project, it is intended to be a reference for future research in this field and to benefit the industry.

Keywords: Aerospace missions, candy type solid propellant engines, design of solid rockets, experimental rocketry, low costs missions.

226 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text

Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni

Abstract:

The problem of entity relation discovery, a well covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. The entities can come from a whole dictionary or from a specific collection of named items. In many cases, machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations - a cooccurrence graph. Indeed, each cooccurrence highlights some grade of semantic correlation between the words, because it is more common to have related words close to each other than to have them at opposite ends of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique, arriving at a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is accounted with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment in order to support this intuition, by applying this technique to a data set consisting of the text of the Bible, split into verses.
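
A minimal sketch of the weighted-distance sliding window, assuming a 1/distance weight (one plausible choice for the distance-dependent weighting described above; the paper's exact weight function may differ):

```python
from collections import defaultdict

def weighted_cooccurrences(tokens, entities, window=10):
    """Accumulate entity-pair cooccurrences within a sliding window,
    weighting each pair by 1/distance so closer pairs give stronger
    evidence; returns the edge weights of the cooccurrence graph."""
    graph = defaultdict(float)
    positions = [(i, t) for i, t in enumerate(tokens) if t in entities]
    for a in range(len(positions)):
        i, u = positions[a]
        for b in range(a + 1, len(positions)):
            j, v = positions[b]
            if j - i > window:
                break  # later tokens are even farther away
            if u != v:
                graph[tuple(sorted((u, v)))] += 1.0 / (j - i)
    return dict(graph)

tokens = "in the beginning god created the heaven and the earth".split()
print(weighted_cooccurrences(tokens, {"god", "heaven", "earth"}, window=8))
```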

Keywords: Cooccurrence graph, entity relation graph, unstructured text, weighted distance.

225 Increasing Fishery Economic Added Value through Post Fishing Program: Cold Storage Program

Authors: Indrijuli Magsari Putri, Dicky R. Munaf

Abstract:

The purpose of this paper is to guide the effort to improve the economic added value of Indonesian fishery products through a post-fishing program, namely a cold storage program. Indonesia's fisheries potential has been acknowledged by the world: FAO (2009) stated that Indonesia is among the ten highest producers of fishery products in the world. Based on BPS (Statistics Indonesia) data, national fisheries production in 2011 reached 5.714 million tons, of which 93.55% came from marine fisheries and 6.45% from open waters. Waters make up two-thirds of Indonesian territory, which has given enormous benefits to Indonesia, especially fishermen. Improving the economic level of fishermen requires efforts to develop fishery business units. One of these efforts is improving the quality of products marketed at the regional and international levels. This certainly needs the support of various fishery facilities (from infrastructure to superstructure), one of which is cold storage. Given the many benefits of cold storage as a means of processing fishery resources, the Indonesia Maritime Security Coordinating Board (IMSCB), as one of the maritime institutions for maritime security and safety, has a program to empower coastal communities by encouraging the development of cold storage in middle and lower fishery business units. The development of cold storage facilities able to play their maximum role requires the synergistic efforts of various parties.

Keywords: Cold Storage, Fish, Regulation.

224 Antioxidant Properties, Ascorbic Acid and Total Carotenoid Values of Sweet and Hot Red Pepper Paste: A Traditional Food in Turkish Diet

Authors: Kubra Sayin, Derya Arslan

Abstract:

Red pepper (Capsicum annuum L.) has long been recognized as a good source of antioxidants, being rich in ascorbic acid and other phytochemicals. In Turkish cuisine, red pepper is sometimes consumed raw in salads or baked as a garnish, but its most widespread form of consumption is red pepper paste. The processing of red pepper into pepper paste includes various thermal treatment steps such as heating and pasteurizing, and there are reports demonstrating either an enhancement or a reduction in the antioxidant activity of vegetables after thermal treatment. This study was therefore conducted to investigate the total phenolic content, ascorbic acid and total carotenoids, as well as the free radical scavenging activity, of raw red pepper and various red pepper pastes obtainable on the market. The samples were analyzed for radical-scavenging activity (RSA) and total polyphenol (TP) content using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) and Folin-Ciocalteu methods, respectively. Total carotenoid and ascorbic acid contents were determined spectrophotometrically. The results suggest that hot pepper paste contained significantly (P<0.05) higher concentrations of TP than sweet pepper paste. However, there was no significant (P>0.05) difference in RSA, ascorbic acid or total carotenoid content between the sweet and hot red pepper paste products. It is concluded that red pepper paste, which has a wide range of consumption in Turkish cuisine, provides a good dose of phenolic compounds and antioxidant capacity and should be regarded as a functional food.
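
The DPPH radical-scavenging activity referred to above is conventionally computed from control and sample absorbances; a small sketch with hypothetical 517 nm readings:

```python
def dpph_rsa(a_control, a_sample):
    """Radical-scavenging activity (%) from DPPH absorbances:
    RSA = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance readings for the control and a paste extract.
print(f"RSA = {dpph_rsa(0.820, 0.315):.1f} %")
```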

Keywords: Antioxidant properties, Red pepper paste, Total carotenoids, Total phenolic content.

223 Malpractice, Even in Conditions of Compliance with the Rules of Dental Ethics

Authors: Saimir Heta, Kers Kapa, Rialda Xhizdari, Ilma Robo

Abstract:

Despite the existence of different dental specialties, the dentist-patient relationship is unique in the very fact that the treatment is performed by one doctor, and the patient identifies any malpractice as part of that doctor's practice; this is in complete contrast to medical treatments, where the patient may be presented to a team of doctors treating a specific pathology. The rules of dental ethics are almost the same as the rules of medical ethics. The appearance of dental malpractice affects exactly this two-party relationship, created on the basis of professionalism, between the dentist and the patient, but with much narrower individual boundaries compared to cases of medical malpractice. Malpractice can have different causes, ranging from professional negligence to a lack of professional knowledge on the part of the dentist who undertakes the dental treatment. It should always be kept in perspective that we are not talking about an individual dentist who goes to work with the intention of harming patients. Malpractice can also be a consequence of the impossibility, for anatomical or physiological reasons of the tooth under treatment, of realizing the predetermined dental treatment plan. On the other hand, the dentist is an individual who can be affected by health conditions or have habits that affect his or her systemic health, which under certain conditions can cause malpractice. So, depending on the reason that led to the malpractice, the legal treatment of the dentist who committed it also varies, with the malpractice evaluated according to whether it occurred under conditions in which the rules of dental ethics were applied. Deviation from the predetermined dental plan is the minimal sign of malpractice, and the latter should not be definitively linked only to cases of difficult dental treatments. Identifying the reason for the malpractice is the initial element that makes the difference in how it is treated from a legal point of view, and the involvement of the dentist in the assessment of the malpractice committed must be based on the legislation in force, which, it must be said, has its own specific variations in different states. Malpractice should be referred to, or included in, lectures and the continuing education of professionals, because it serves as a method of gaining professional experience so that the same mistake is not repeated several times by different professionals.

Keywords: Dental ethics, malpractice, negligence, legal basis, continuing education, dental treatments.

222 Biotechonomy System Dynamics Modelling: Sustainability of Pellet Production

Authors: Andra Blumberga, Armands Gravelsins, Haralds Vigants, Dagnija Blumberga

Abstract:

The paper presents a biotechonomy development analysis using system dynamics modelling. The research is connected with investigations of biomass applications for the production of bioproducts with higher added value. The most popular bioresource is wood; therefore, the main question today concerns future development and the eco-design of products. The paper emphasizes and evaluates the energy sector, which is open to the use of wood logs, wood chips, wood pellets and so on. The main aim of this research study was to build a framework to analyse the development perspectives for wood pellet production. To reach the goal, a system dynamics model of energy wood supply, processing, and consumption is built. Production capacity, energy consumption, changes in energy and technology efficiency, the required labour source, and the prices of wood, energy and labour are taken into account. Validation and verification tests with available data and information have been carried out and indicate that the model constitutes the dynamic hypothesis. It is found that the more that is invested into pellet production, the higher the specific profit per production unit compared to wood logs and wood chips. As a result, wood chip production decreases dramatically and is replaced by wood pellets. The limiting factor for pellet industry growth is the availability of wood sources, which is governed by the felling limit set by the government based on sustainable forestry principles.
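
A toy stock-and-flow sketch of the reinforcement loop described above (capacity grows with reinvested pellet profit until the felling limit caps the wood supply); all parameters are invented for illustration and this is not the authors' calibrated model.

```python
# Stocks, flows and parameters (all hypothetical, arbitrary units).
felling_limit = 100.0      # sustainable wood supply per year
capacity = 5.0             # pellet production capacity (stock)
profit_per_unit = 0.4      # specific profit per unit produced
reinvest_rate = 0.5        # share of profit invested in new capacity
capacity_per_invest = 1.0  # capacity added per unit of investment

for year in range(1, 21):
    production = min(capacity, felling_limit)  # supply-limited output
    investment = reinvest_rate * profit_per_unit * production
    capacity += capacity_per_invest * investment
    if year % 5 == 0:
        print(f"year {year:2d}: capacity={capacity:6.1f}, "
              f"production={production:6.1f}")
```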

Keywords: Bioenergy, biotechonomy, system dynamics modelling, wood pellets.

221 An Algorithm Proposed for FIR Filter Coefficients Representation

Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman

Abstract:

Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of needing a higher order (more coefficients) than their IIR counterparts for comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal or target in the digital filter design task. This paper presents an algorithm proposed for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, therefore reducing the arithmetic complexity needed to get the filter output. Consequently, the system characteristics, i.e. power consumption, area usage, and processing time, are also reduced. The proposed algorithm is more powerful when integrated with multiplierless algorithms such as distributed arithmetic (DA) in designing high order digital FIR filters. Here, the use of DA eliminates the need for multipliers when implementing the multiply and accumulate unit (MAC), and the proposed algorithm reduces the number of adders and addition operations needed to get the filter output through the minimization of the non-zero coefficient values.
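
One simple way to picture the goal of such an algorithm (this is not the proposed algorithm itself) is thresholding small FIR taps to zero and checking the frequency-response penalty; the filter specification below is hypothetical.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Reference lowpass filter standing in for a pulse shaping filter.
h = firwin(numtaps=101, cutoff=0.25)

# Zero out taps below 1% of the largest tap: fewer non-zero coefficients
# means fewer adders/additions when the filter is built multiplierlessly.
threshold = 0.01 * np.abs(h).max()
h_sparse = np.where(np.abs(h) >= threshold, h, 0.0)

w, H = freqz(h)
_, Hs = freqz(h_sparse)
print("non-zero taps:", np.count_nonzero(h), "->", np.count_nonzero(h_sparse))
print("max magnitude-response deviation (dB):",
      20 * np.log10(np.abs(np.abs(H) - np.abs(Hs)).max()))
```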

Keywords: Pulse shaping Filter, Distributed Arithmetic, Optimization algorithm.

220 Simulation of Static Frequency Converter for Synchronous Machine Operation and Investigation of Shaft Voltage

Authors: Arun Kumar Datta, M. A. Ansari, N. R. Mondal, B. V. Raghavaiah, Manisha Dubey, Shailendra Jain

Abstract:

This study was carried out to understand the effects of a static frequency converter (SFC) on a large machine. The SFC has the feature of four-quadrant operation; by virtue of this, it can be implemented to run a synchronous machine either as a motor or as an alternator. This dual-mode operation allows a single machine to start and run as a motor and then be converted to an alternator whenever required. One such dual-purpose machine is taken here for study. This machine is installed at a laboratory carrying out short circuit tests on high power electrical equipment. The SFC connected with this machine is broadly described in this paper. The same SFC has been modeled with the MATLAB/Simulink software. The data applied to this virtual model are the actual parameters of the SFC and synchronous machine. After running the model, the simulated machine voltage and current waveforms are validated against the real measurements. Processing of these waveforms is done through the Fast Fourier Transform (FFT), which reveals that the waveforms are not sinusoidal; rather, they contain a number of harmonics. These harmonics are a major cause of shaft voltage. It is known that the bearings of an electrical machine are vulnerable to current flow through them due to shaft voltage. A general discussion on the causes of shaft voltage in the perspective of this machine is presented in this paper.
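
A sketch of the FFT step on a stand-in waveform: a fundamental plus the 5th and 7th harmonics typical of six-pulse converters, with single-sided amplitudes read off per harmonic (the signal is synthetic, not the machine's measured current).

```python
import numpy as np

fs, f0 = 10000, 50                      # sampling rate and fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)

# Synthetic converter-fed current: fundamental + 5th and 7th harmonics.
i = (np.sin(2 * np.pi * f0 * t)
     + 0.20 * np.sin(2 * np.pi * 5 * f0 * t)
     + 0.14 * np.sin(2 * np.pi * 7 * f0 * t))

spectrum = np.abs(np.fft.rfft(i)) / (len(i) / 2)  # single-sided amplitudes
freqs = np.fft.rfftfreq(len(i), 1 / fs)
for k in (1, 5, 7):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"harmonic {k}: amplitude = {spectrum[idx]:.3f}")
```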

Keywords: Alternators, AC-DC power conversion, capacitive coupling, electric discharge machining, frequency converter, Fourier transforms, inductive coupling, simulation, Shaft voltage, synchronous machines, static excitation, thyristor.

219 Microencapsulation of Ascorbic Acid by Spray Drying: Influence of Process Conditions

Authors: Addion Nizori, Lan T.T. Bui, Darryl M. Small

Abstract:

Ascorbic acid (AA), commonly known as vitamin C, is essential for the normal functioning of the body and the maintenance of metabolic integrity. Among its various roles are acting as an antioxidant, serving as a cofactor in collagen formation and other reactions, reducing physical stress, and maintaining the immune system. Recent collaborative research between the Australian Defence Science and Technology Organisation (DSTO) in Scottsdale, Tasmania and RMIT University has sought to overcome the problems arising from the inherent instability of ascorbic acid during the processing and storage of foods, and has demonstrated the potential of microencapsulation by spray drying as a means to enhance retention. The current study focused on the influence of spray drying conditions on the properties of encapsulated ascorbic acid. The process was carried out according to a central composite design. The independent variables were inlet temperature (80-120 °C) and feed flow rate (7-14 mL/minute). Process yield, ascorbic acid loss, moisture content, water activity and particle size distribution were analysed as responses. Vitamin retention, moisture content, water activity and process yield were influenced positively by inlet air temperature and negatively by feed flow rate.
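
A sketch of the experimental design layer, assuming the stated ranges correspond to the axial (star) extremes of a standard two-factor central composite design with five center runs; the decoding convention is an assumption, not taken from the paper.

```python
import itertools
import numpy as np

def central_composite(k=2, alpha=np.sqrt(2), n_center=5):
    """Coded points of a central composite design: factorial corners,
    axial (star) points at +/- alpha, and replicated center runs."""
    corners = list(itertools.product([-1, 1], repeat=k))
    axial = [tuple(a if j == i else 0 for j in range(k))
             for i in range(k) for a in (-alpha, alpha)]
    center = [(0,) * k] * n_center
    return np.array(corners + axial + center)

alpha = np.sqrt(2)
coded = central_composite(alpha=alpha)

# Decode so the axial points hit the stated extremes:
# inlet temperature 80-120 C, feed flow 7-14 mL/min.
temps = 100 + (20 / alpha) * coded[:, 0]
flows = 10.5 + (3.5 / alpha) * coded[:, 1]
for T, q in zip(temps, flows):
    print(f"T = {T:6.1f} C, flow = {q:5.2f} mL/min")
```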

Keywords: Microencapsulation, spray drying, ascorbic acid.

218 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising

Authors: Jianwei Ma, Diriba Gemechu

Abstract:

In seismic data processing, attenuation of random noise is the basic step in improving the quality of data for the further application of seismic data in exploration and development in the gas and oil industries. The signal-to-noise ratio of the data also highly determines the quality of seismic data; this factor affects the reliability as well as the accuracy of the seismic signal during interpretation for different purposes in different companies. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving important features and information of the seismic signals, we introduce an anisotropic total fractional order denoising algorithm. The anisotropic total fractional order variation model, defined in terms of fractional order bounded variation, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised result is compared with F-X deconvolution and the non-local means denoising algorithm.
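
As a simplified illustration of the split Bregman machinery, the sketch below solves first-order anisotropic TV denoising in 1-D; the paper's fractional-order variation generalizes the difference operator used here, and the trace is synthetic.

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding: the closed-form solution of the l1 subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def split_bregman_tv_1d(f, mu=5.0, lam=1.0, n_iter=50):
    """Split Bregman for 1-D anisotropic TV denoising:
    min_u |Du|_1 + mu/2 ||u - f||^2, with the splitting d = Du."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D    # u-subproblem system matrix
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        d = shrink(D @ u + b, 1.0 / lam)  # l1 shrinkage step
        b = b + D @ u - d                 # Bregman variable update
    return u

# Noisy piecewise-constant trace standing in for one seismic data column.
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, -0.5, 0.5], 50)
noisy = clean + 0.2 * rng.normal(size=clean.size)
denoised = split_bregman_tv_1d(noisy)
print("residual std:", np.round((denoised - clean).std(), 3))
```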

Keywords: Anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, Split Bregman Algorithm.
