Search results for: loss distribution approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20270

15260 Modelling Sudden Deaths from Myocardial Infarction and Stroke

Authors: Y. S. Yusoff, G. Streftaris, H. R Waters

Abstract:

Death within 30 days is an important factor to examine, as there is a significant risk of death immediately following, or soon after, a Myocardial Infarction (MI) or stroke. In this paper, we model deaths within 30 days following an MI or stroke in the UK and examine how the probabilities of sudden death from MI or stroke changed over the period 1981-2000. We model the sudden deaths using a Generalized Linear Model (GLM), fitted with the R statistical package, under a Binomial distribution for the number of sudden deaths. We parameterize the model using the extensive and detailed data of the Framingham Heart Study, adjusted to match UK rates. The results show a reduction over time in sudden deaths following an MI, but no significant improvement for sudden deaths following a stroke.
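
As a rough illustration of the modelling step, a binomial GLM of this kind can be sketched in a few lines; the snippet below uses Python's statsmodels rather than the authors' R code, and the data file and column names (events, deaths30, year, age) are hypothetical stand-ins for the Framingham-derived data.

```python
# Hypothetical data layout: one row per (year, age group), with the number
# of 30-day sudden deaths and the total number of MI/stroke events.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("mi_stroke_events.csv")          # hypothetical file
df["survivors"] = df["events"] - df["deaths30"]

endog = df[["deaths30", "survivors"]]             # (successes, failures)
exog = sm.add_constant(df[["year", "age"]])       # linear predictor terms
model = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(model.summary())                            # check the trend in `year`
```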

Keywords: sudden deaths, myocardial infarction, stroke, ischemic heart disease

Procedia PDF Downloads 277
15259 Recycling of Sclareolide in the Crystallization Mother Liquid of Sclareolide by Adsorption and Chromatography

Authors: Xiang Li, Kui Chen, Bin Wu, Min Zhou

Abstract:

Sclareolide is made from sclareol by oxidative synthesis and subsequent crystallization, while the crystallization mother liquor still contains 15-30 wt% of sclareolide to be reclaimed. Because the sclareol feedstock is provided as a plant extract, many kinds of complex impurities exist in the mother liquor. Due to the difficulty of recycling sclareolide after solvent recovery, it is common practice for factories to discard the mother liquor, which not only results in a loss of sclareolide but also adds an extra environmental burden. In this paper, a process based on adsorption and elution is presented for recycling sclareolide from the mother liquor. After pretreatment of the crystallization mother liquor with HZ-845 resin to remove part of the impurities, the sclareolide is adsorbed by HZ-816 resin. The HZ-816 resin loaded with sclareolide is then eluted with an elution solvent. Finally, the eluent containing sclareolide is concentrated and fed into the crystallization step of the process. By adopting this recycle from the mother liquor, the total yield of sclareolide increases from 86% to 90% while a stable purity of the final sclareolide product is maintained.

Keywords: sclareolide, resin, adsorption, chromatography

Procedia PDF Downloads 218
15258 Analysis of Universal Mobile Telecommunications Service (UMTS) Planning Using High Altitude Platform Station (HAPS)

Authors: Yosika Dian Komala, Uke Kurniawan Usman, Yuyun Siti Rohmah

Abstract:

Universal Mobile Telecommunications Service (UMTS) is the enabling technology that meets the need for high-speed data services, with data rates of up to 2 Mbps. A terrestrial UMTS system has a coverage area of about 1-2 km, whereas a High Altitude Platform Station (HAPS) can host a macro cell that serves a much wider area. The design method for UMTS over HAPS is planning based on coverage and capacity, and the plan is simulated with Atoll 2.8.1 software. The cell radius for the coverage plan is determined using the free-space loss propagation model, while the capacity plan determines the average cell throughput from the Offered Bit Quantity (OBQ).
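
To illustrate the coverage-planning step, the free-space loss model can be inverted to obtain the largest serviceable cell radius from a link budget. The sketch below is a minimal example with illustrative numbers, not values from the paper.

```python
import math

def fspl_db(d_km: float, f_mhz: float) -> float:
    """Free-space path loss in dB at distance d_km and frequency f_mhz."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def max_radius_km(max_path_loss_db: float, f_mhz: float) -> float:
    """Invert the free-space model: largest radius whose loss fits the budget."""
    return 10 ** ((max_path_loss_db - 32.44 - 20 * math.log10(f_mhz)) / 20)

# Example: a UMTS carrier near 2100 MHz with a 130 dB maximum allowed path loss.
print(max_radius_km(130.0, 2100.0))   # roughly 36 km, a HAPS-scale macro cell
```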

Keywords: UMTS, HAPS, coverage planning, capacity planning, signal level, Ec/Io, overlapping zone, throughput

Procedia PDF Downloads 620
15257 Applications of Probabilistic Interpolation via Orthogonal Matrices

Authors: Dariusz Jacek Jakóbczak

Abstract:

Mathematics and computer science are interested in methods of 2D curve interpolation and extrapolation using a set of key points (knots). The proposed Method of Hurwitz-Radon Matrices (MHR) is one such method. This novel method is based on the family of Hurwitz-Radon (HR) matrices, whose columns are composed of orthogonal vectors. A two-dimensional curve is interpolated via different functions used as probability distribution functions: polynomial, sine, cosine, tangent, cotangent, logarithm, exponential, arcsine, arccosine, arctangent, arccotangent or power functions, as well as inverse functions. It is shown how to build the orthogonal matrix operator and how to use it in the process of curve reconstruction.
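
The full MHR operator construction is not reproduced here, but the following much-simplified Python sketch illustrates one idea the abstract describes: using a chosen function F on [0, 1] as a probability-distribution-like blending rule between two knots.

```python
# A much-simplified sketch of probabilistic blending between two knots; it
# illustrates the role of the distribution function F only, not the full
# Hurwitz-Radon matrix operator described in the paper.
import numpy as np

def blend(p1, p2, alphas, F=lambda a: a**2):
    """Interpolate between knots p1 and p2.

    alphas: samples in [0, 1]; F: a distribution-like function on [0, 1]
    (e.g. a power, sine or logarithm), which shapes the reconstructed curve.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    xs = alphas * p1[0] + (1 - alphas) * p2[0]
    ys = F(alphas) * p1[1] + (1 - F(alphas)) * p2[1]
    return np.column_stack([xs, ys])

curve = blend((0.0, 0.0), (1.0, 2.0), np.linspace(0, 1, 11))
print(curve)
```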

Keywords: 2D data interpolation, Hurwitz-Radon matrices, MHR method, probabilistic modeling, curve extrapolation

Procedia PDF Downloads 513
15256 The BL-5D Model: The Development of a Model of Instructional Design for Blended Learning Activities

Authors: Damian Gordon, Paul Doyle, Anna Becevel, Júlia Vilafranca Molero, Cinta Gascon, Arianna Vitiello, Tina Baloh

Abstract:

It has long been recognized that the creation of any teaching content can be enhanced if the development process follows a pre-defined approach, which is often referred to as an instructional design methodology. These methodologies typically define a number of stages, or phases, that an educator should undertake to help ensure the quality of the final teaching content that is developed. In this paper, we present an instructional design methodology that is focused specifically on the introduction of blended resources into a heretofore bricks-and-mortar course. To achieve this, research was undertaken concerning a range of models of instructional design, as well as literature covering some of the key challenges and “pain points” of blending. Following this, our model, the BL-5D model, is presented, which incorporates some key questions at each stage of this five-stage methodology to guide the development process. Finally, a discussion of some of the key themes and issues that have been uncovered in this work is presented, as well as a template for a blended learning case study that emerged from this approach.

Keywords: blended learning, challenges of blended learning, design methodologies, instructional design

Procedia PDF Downloads 95
15255 Modelling Agricultural Commodity Price Volatility with Markov-Switching Regression, Single Regime GARCH and Markov-Switching GARCH Models: Empirical Evidence from South Africa

Authors: Yegnanew A. Shiferaw

Abstract:

Background: Commodity price volatility originating from excessive commodity price fluctuation has been a global problem, especially after the recent financial crises. Volatility is a measure of risk or uncertainty in financial analysis, and it plays a vital role in risk management, portfolio management, and equity pricing. Objectives: The core objective of this paper is to examine the relationship between the prices of agricultural commodities and the oil price, gas price, coal price and exchange rate (USD/Rand). In addition, the paper fits an appropriate model that best describes the log-return price volatility and estimates Value-at-Risk and expected shortfall. Data and methods: The data used in this study are the daily returns of agricultural commodity prices from 2 January 2007 to 31 October 2016, covering white maize, yellow maize, wheat, sunflower, soya, corn, and sorghum. The paper applies the three-state Markov-switching (MS) regression, the standard single-regime GARCH and the two-regime Markov-switching GARCH (MS-GARCH) models. Results: To choose the best-fitting model, the log-likelihood function, Akaike information criterion (AIC), Bayesian information criterion (BIC) and deviance information criterion (DIC) are employed under three distributions for the innovations. The results indicate that: (i) the prices of agricultural commodities are significantly associated with the price of coal, the price of natural gas, the price of oil and the exchange rate; (ii) for all agricultural commodities except sunflower, k=3 gives higher log-likelihood values and lower AIC and BIC values, so the three-state MS regression model outperforms the two-state MS regression model; (iii) MS-GARCH(1,1) with generalized error distribution (ged) innovations performs best for white maize and yellow maize; MS-GARCH(1,1) with Student-t distribution (std) innovations performs better for sorghum; MS-gjrGARCH(1,1) with ged innovations performs better for wheat, sunflower and soya; and MS-GARCH(1,1) with std innovations performs better for corn. In conclusion, this paper provides a practical guide to modelling agricultural commodity prices with MS regression and MS-GARCH processes and can serve as a reference when modelling agricultural commodity price problems.
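
As a minimal sketch of the regime-switching step (the MS-GARCH estimation is not reproduced), a three-state Markov-switching regression can be fitted with statsmodels; the data file and column names below are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("commodity_returns.csv", index_col=0, parse_dates=True)
endog = df["white_maize"]                      # daily log returns
exog = df[["oil", "gas", "coal", "usd_zar"]]   # driver returns

mod = sm.tsa.MarkovRegression(
    endog, k_regimes=3, exog=exog, switching_variance=True
)
res = mod.fit()
print(res.summary())      # compare AIC/BIC across k_regimes choices
```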

Keywords: commodity prices, MS-GARCH model, MS regression model, South Africa, volatility

Procedia PDF Downloads 189
15254 Stator Short-Circuit Fault Diagnosis in Induction Motors Using the Extended Park’s Vector Approach through the Discrete Wavelet Transform

Authors: K. Yahia, A. Ghoggal, A. Titaouine, S. E. Zouzou, F. Benchabane

Abstract:

This paper deals with the problem of stator fault diagnosis in induction motors. Using the discrete wavelet transform (DWT) for the analysis of the current Park’s vector modulus (CPVM), inter-turn short-circuit faults can be diagnosed. The method is based on the decomposition of the CPVM signal, from which the wavelet approximation and detail coefficients are extracted. The energy evaluation of a detail of known bandwidth permits the definition of a fault severity factor (FSF). The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Simulation as well as experimental results show the effectiveness of the method.
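
A brief Python sketch of the described signal chain, assuming the three stator line currents have been sampled; the wavelet, decomposition level and detail band are illustrative choices rather than the paper's settings.

```python
import numpy as np
import pywt

def park_vector_modulus(ia, ib, ic):
    """Current Park's vector modulus (CPVM) from the three line currents."""
    i_alpha = np.sqrt(2 / 3) * ia - np.sqrt(1 / 6) * ib - np.sqrt(1 / 6) * ic
    i_beta = np.sqrt(1 / 2) * ib - np.sqrt(1 / 2) * ic
    return np.hypot(i_alpha, i_beta)

def fault_severity_factor(cpvm, wavelet="db8", level=5, band=3):
    """Energy of one detail band relative to the whole signal energy."""
    coeffs = pywt.wavedec(cpvm, wavelet, level=level)
    detail = coeffs[len(coeffs) - band]   # detail band covering the fault signature
    return np.sum(detail**2) / np.sum(cpvm**2)
```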

Keywords: induction motors (IMs), inter-turn short-circuit diagnosis, discrete wavelet transform (DWT), current Park’s vector modulus (CPVM)

Procedia PDF Downloads 547
15253 Parametric Analysis of Water Lily Shaped Split Ring Resonator Loaded Fractal Monopole Antenna for Multiband Applications

Authors: C. Elavarasi, T. Shanmuganantham

Abstract:

A coplanar waveguide (CPW) fed antenna comprising a split ring resonator (SRR) loaded fractal with a water lily shape is presented for multiband applications. The impedance matching of the antenna is determined by the number of Koch-curve fractal unit cells. The antenna is designed on an FR4 substrate with a permittivity of εr = 4.4 and a size of 14 x 16 x 1.6 mm³ to generate multiple resonant modes at 3.8 GHz (S band), 8.68 GHz (X band), 13.96 GHz (Ku band), and 19.74 GHz (K band), with a reflection coefficient better than -10 dB. Simulation results show that the antenna exhibits the desired voltage standing wave ratio (VSWR) level and radiation patterns across this wide frequency range. The fundamental parameters of the antenna, such as return loss, VSWR, and a good radiation pattern with reasonable gain, are obtained across the operating bands.

Keywords: fractal, metamaterial, split ring resonator, waterlily shape

Procedia PDF Downloads 258
15252 The Effect of Flue Gas Condensation on the Exergy Efficiency and Economic Performance of a Waste-To-Energy Plant

Authors: Francis Chinweuba Eboh, Tobias Richards

Abstract:

In this study, a waste-to-energy combined heat and power plant under construction was modelled and simulated with the Aspen Plus software. The base-case process plant was evaluated and compared with the same plant integrated with flue gas condensation (FGC) in order to determine the impact on exergy efficiency and economic feasibility, as well as the effect on overall system exergy losses and revenue generated in the investigated plant. The economic evaluations were carried out using vendor cost data from the Aspen Process Economic Analyzer. The results indicate that a 4% increase in exergy efficiency and a 29% reduction in the exergy loss in the flue gas were obtained when flue gas condensation was incorporated. Furthermore, with the integrated FGC, the net present value (NPV) and income generated by the base process plant increased by 29% and 10%, respectively, after 20 years of operation.

Keywords: economic feasibility, exergy efficiency, exergy losses, flue gas condensation, waste-to-energy

Procedia PDF Downloads 176
15251 Thick Data Techniques for Identifying Abnormality in Video Frames for Wireless Capsule Endoscopy

Authors: Jinan Fiaidhi, Sabah Mohammed, Petros Zezos

Abstract:

Capsule endoscopy (CE) is an established noninvasive diagnostic modality for investigating small bowel disease. CE has a pivotal role in assessing patients with suspected bleeding or identifying evidence of active Crohn's disease in the small bowel. However, CE produces lengthy videos of at least eighty thousand frames, at a rate of 2 frames per second. Gastroenterologists cannot dedicate 8 to 15 hours to reading the CE video frames to arrive at a diagnosis, which is why analyzing CE videos with modern artificial intelligence techniques becomes a necessity. However, machine learning, including deep learning, has failed to report robust results because of the lack of large samples with which to train its neural nets. In this paper, we describe a thick data approach that learns from a few anchor images. We use well-curated datasets like KVASIR and CrohnIPI to filter candidate frames that include interesting anomalies in any CE video. We identify candidate frames based on feature extraction, providing representative measures of the anomaly, such as its size and its color contrast with the image background, and later feed these features to a decision tree that can classify the candidate frames as showing a condition such as Crohn's disease. Our thick data approach detected Crohn's disease, based on the presence of ulcer areas in the candidate frames, with an accuracy of 89.9% for KVASIR and 83.3% for CrohnIPI. We are continuing our research to fine-tune our approach by adding more thick data methods to enhance diagnosis accuracy.
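
A minimal sketch of the final classification stage with scikit-learn, assuming the per-frame features (e.g., anomaly area and colour contrast) have already been extracted; the feature files are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# X: one row per candidate frame, e.g. [anomaly_area, contrast, redness]
# y: 1 if the frame shows a Crohn's lesion (labels from KVASIR / CrohnIPI)
X, y = np.load("frame_features.npy"), np.load("frame_labels.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```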

Keywords: thick data analytics, capsule endoscopy, Crohn’s disease, siamese neural network, decision tree

Procedia PDF Downloads 130
15250 The Problem of Suffering: Job, The Servant and Prophet of God

Authors: Barbara Pemberton

Abstract:

Now that people of all faiths are experiencing suffering due to many global issues, shared narratives may provide common ground in which true understanding of each other may take root. This paper will consider the all too common problem of suffering and address how adherents of the three great monotheistic religions seek understanding, and the appropriate believer's response, from the same story found within their respective sacred texts. Most scholars from each of these three traditions—Judaism, Christianity, and Islam—consider the writings of the Tanakh/Old Testament to at least contain divine revelation. While they may not agree on the extent of the revelation or the method of its delivery, they do share stories as well as a common desire to glean God's message for God's people from the pages of the text. One such shared story is that of Job, the servant of Yahweh, called Ayyub, the prophet of Allah, in the Qur'an. Job is described as a pious, righteous man who loses everything—family, possessions, and health—when his faith is tested. Three friends come to console him. Through it all, Job remains faithful to his God, who rewards him by restoring all that was lost. All three hermeneutic communities consider Job to be an archetype of human response to suffering, regarding Job's response to his situation as exemplary. The story of Job addresses more than the problem of evil; at stake in the story is Job's very relationship to his God. Some exegetes believe that Job was adapted into the Jewish milieu by a gifted redactor who used the original ancient tale as the “frame” for the biblical account (chapters 1, 2, and 42:7-17) and then enlarged the story with the center section of poetic dialogues, creating a complex work with numerous possible interpretations. Within the poetic center, Job goes so far as to question God, a response to which Jews relate, finding strength in dialogue—even in wrestling with God. Muslims embrace only the Job of the biblical narrative frame, as further identified through the Qur'an and the prophetic traditions, considering the center section an errant human addition not representative of a true prophet of Islam. The Qur'anic injunction against questioning God also renders the center theologically suspect. Christians also draw various responses from the story of Job. While many believers may agree with the Islamic perspective of God's ultimate sovereignty, others would join their Jewish neighbors in questioning God, anticipating not answers but rather an awareness of his presence, with peace and hope becoming a reality experienced through the indwelling presence of God's Holy Spirit. Related questions are as endless as the possible responses. This paper will consider a few of the many Jewish, Christian, and Islamic insights from the ancient story, in the hope that adherents within each tradition will use it to better understand the other faiths' approaches to suffering.

Keywords: suffering, Job, Qur'an, tanakh

Procedia PDF Downloads 164
15249 The AU Culture Platform Approach to Measure the Impact of Cultural Participation on Individuals

Authors: Sendy Ghirardi, Pau Rausell Köster

Abstract:

The European Commission increasingly pushes cultural policies towards social outcomes, and local and regional authorities also call for culture-driven strategies for local development and prosperity; the measurement of cultural participation therefore becomes ever more significant for evidence-based policy-making processes. Cultural participation involves various kinds of social and economic spillovers that combine social and economic objectives of value creation, including social sustainability and respect for human values. Traditionally, from the economic perspective, cultural consumption is measured by the value of financial transactions in purchasing, subscribing to, or renting cultural equipment and content, addressing the market value of cultural products and services. The main sources of data are the household spending survey and the merchandise trade survey, among others. However, what characterizes cultural consumption is that it is linked with the hedonistic and affective dimension rather than the utilitarian one. In fact, nowadays, more and more attention is being paid to the social and psychological dimensions of culture. The aim of this work is to present a comprehensive approach to measuring the impacts of cultural participation and cultural users' behaviour, combining socio-psychological and economic approaches. The model combines contingent valuation techniques with the analysis of individual characteristics and perceptions of the cultural experiences to evaluate the cognitive, aesthetic, emotive and social impacts of cultural participation. To investigate this comprehensive approach to measuring the impact of cultural events on individuals, the research was designed on the basis of prior theoretical development: a thorough literature review was carried out to develop the theoretical model applied in the web platform that measures the impacts of cultural experience on individuals. The developed framework aims to become a democratic tool that cultural or policy institutions can adopt for evaluating their services, through the use of an interactive platform that produces big data benefiting academia, cultural management and policy. AU Culture is a prototype application that can be used on mobile phones or any other digital platform. The development of the AU Culture Platform has been funded by the Valencian Innovation Agency (Government of the Region of Valencia), and it is part of the Horizon 2020 project MESOC.

Keywords: comprehensive approach, cultural participation, economic dimension, socio-psychological dimension

Procedia PDF Downloads 104
15248 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques

Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy

Abstract:

Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify based on their density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. In this work, an experimental bi-component particle interaction study is conducted using magnetite (higher density) and silica (lower density) in different proportions processed in the spiral concentrator. Observation of the separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The 5th turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, there is a decline in the concentration of coarser particles, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone consists primarily of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of the segregation occurs in the inner and middle zones, while finer magnetite and silica particles predominantly accumulate in the outer zones with the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach incorporating the RSM turbulence model, with the discrete phase model (DPM) employed for particle tracking, thereby capturing the segregation of magnetite and silica along the spiral trough.

Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model

Procedia PDF Downloads 52
15247 Integration of Microarray Data into a Genome-Scale Metabolic Model to Study Flux Distribution after Gene Knockout

Authors: Mona Heydari, Ehsan Motamedian, Seyed Abbas Shojaosadati

Abstract:

Prediction of perturbations after genetic manipulation (especially gene knockout) is one of the important challenges in systems biology. In this paper, a new algorithm is introduced that integrates microarray data into a metabolic model. The algorithm was used to study the change in cell phenotype after knockout of the gss gene in Escherichia coli BW25113. Implementation of the algorithm indicated that the gene deletion resulted in greater activation of the metabolic network: growth yield was higher, and up- and down-regulated genes were identified for the mutant in comparison with the wild-type strain.
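
A minimal sketch of the knockout-and-FBA step using COBRApy; the SBML file name and the gene identifier are placeholders, and the paper's microarray-integration algorithm itself is not shown.

```python
import cobra

model = cobra.io.read_sbml_model("e_coli_BW25113.xml")  # hypothetical file
wild_type = model.optimize()

with model:                                   # changes are reverted on exit
    model.genes.get_by_id("gss").knock_out()  # illustrative gene identifier
    mutant = model.optimize()

print("wild-type growth:", wild_type.objective_value)
print("mutant growth:   ", mutant.objective_value)
```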

Keywords: metabolic network, gene knockout, flux balance analysis, microarray data, integration

Procedia PDF Downloads 567
15246 Blister Formation Mechanisms in Hot Rolling

Authors: Rebecca Dewfall, Mark Coleman, Vladimir Basabe

Abstract:

Oxide scale growth is an inevitable byproduct of the high-temperature processing of steel. Blistering is a phenomenon that occurs during oxide growth, where high temperatures result in the swelling of the surface scale, producing a bubble-like feature. Blisters can subsequently become embedded in the steel substrate during hot rolling in the finishing mill. This rolled-in scale defect causes havoc within industry, not only through wear on machinery but also through loss of customer satisfaction, poor surface finish, loss of material, and lost profit. Even though blistering is a highly prevalent issue, there is still much that is not known or understood. The classic iron oxidation system is a complex multiphase system formed of wustite, magnetite, and hematite, producing multi-layered scales. Each phase has independent properties such as thermal coefficients, growth rate, and mechanical properties. Furthermore, each additional alloying element has a different affinity for oxygen and a different mobility in the oxide phases, so that oxide morphologies are specific to the alloy chemistry. Blister regimes can therefore be unique to each steel grade, resulting in a diverse range of formation mechanisms. Laboratory conditions were selected to simulate industrial hot rolling, with temperature ranges approximating the formation of secondary and tertiary scales in the finishing mills. Samples with the composition 0.15 wt% C, 0.1 wt% Si, 0.86 wt% Mn, 0.036 wt% Al, and 0.028 wt% Cr were oxidised in a thermo-gravimetric analyser (TGA) with an air velocity of 10 litres min⁻¹ at temperatures of 800°C, 850°C, 900°C, 1000°C, 1100°C, and 1200°C, respectively. Samples were held at temperature in an argon atmosphere for 10 minutes, then oxidised in air for 600 s, 60 s, 30 s, 15 s, and 4 s, respectively. The oxide morphology and the blisters were characterised using EBSD, WDX, nanoindentation, FIB, and FEG-SEM imaging. Blistering was found to involve both a nucleation and a growth process. During nucleation, the scale detaches from the substrate and blisters after a very short period, roughly 10 s. The steel substrate inside the blister is then exposed and further oxidised in the reducing atmosphere of the blister; however, the atmosphere within the blister is highly dependent upon the porosity of the blister crown. The blister crown was found to be consistently 35-40 µm thick for all heating regimes, which supports the theory that the blister inflates and the oxide then grows underneath. Upon heating, two modes of blistering were identified. In Mode 1, the stresses produced by oxide growth increase with increasing oxide thickness; therefore, the incubation time for blister formation is shortened by increasing temperature. In Mode 2, an increase in temperature results in an oxide with high ductility and high porosity, and the high oxide ductility and/or porosity accommodates the intrinsic stresses from oxide growth. Mode 2 is thus the inverse of Mode 1, and the incubation time increases with temperature. A new phenomenon was also observed whereby blisters formed exclusively on cooling at temperatures above the Mode 2 range.

Keywords: FEG-SEM, nucleation, oxide morphology, surface defect

Procedia PDF Downloads 128
15245 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation

Authors: Samuel Ahamefula Mba

Abstract:

Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insights into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of the work is to develop a numerical framework to describe and analyse fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, and establishes a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work necessitates the derivation of the Poisson equation for pressure from the Navier-Stokes equation and the use of Chebyshev finite-difference techniques to resolve it effectively. For the mathematical analysis, bounded domains with smooth solutions and non-periodic boundary conditions are considered. A hybrid computational approach combining direct numerical simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
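
For reference, the derivation referred to in the abstract follows the standard route for incompressible, constant-density flow: taking the divergence of the momentum equation and applying the continuity constraint eliminates the time derivative and leaves a Poisson equation for the pressure.

```latex
% Momentum equation (incompressible, constant density):
%   \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
%       = -\tfrac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u}
% Taking the divergence and using \nabla\cdot\mathbf{u} = 0:
\nabla^{2} p
    = -\rho\,\nabla\cdot\left[(\mathbf{u}\cdot\nabla)\mathbf{u}\right]
    = -\rho\,\frac{\partial u_i}{\partial x_j}\,\frac{\partial u_j}{\partial x_i}
```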

Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large Eddy simulation with wall models, direct numerical simulation

Procedia PDF Downloads 78
15244 Flocking Swarm of Robots Using Artificial Innate Immune System

Authors: Muneeb Ahmad, Ali Raza

Abstract:

A computational method inspired by the immune system (IS) is presented, leveraging the characteristics it shares with swarm intelligence: robustness, fault tolerance, scalability, and adaptability. The method aims to showcase flocking behaviors in a swarm of robots (SR). The innate part of the IS offers a variety of reactive and probabilistic cell functions, alongside its self-regulation mechanism, which have been translated to enable swarming behaviors. Although the research focuses specifically on flocking behaviors in a variety of simulated environments, using e-puck robots in a physics-based simulator (CoppeliaSim), the artificial innate immune system (AIIS) can exhibit other swarm behaviors as well. The effectiveness of the immuno-inspired approach has been established through extensive experimentation on scalability and adaptability, using standard swarm benchmarks as well as the immunological regulatory functions (i.e., dendritic cell maturity and inflammation). The AIIS-based approach has proved to be a scalable and adaptive solution for emulating the flocking behavior of SR.

Keywords: artificial innate immune system, flocking swarm, immune system, swarm intelligence

Procedia PDF Downloads 85
15243 Software-Defined Networking: A New Approach to Fifth Generation Networks: Security Issues and Challenges Ahead

Authors: Behrooz Daneshmand

Abstract:

Software Defined Networking (SDN) is designed to meet the future needs of 5G mobile networks. The SDN architecture offers a new solution that involves separating the control plane from the data plane, which are usually paired together. Network functions traditionally performed on specific hardware can now be abstracted and virtualized on any device, and a centralized software-based administration approach based on a central controller facilitates the development of modern applications and services. These design principles pave the way for a more flexible, faster, and more dynamic network under software control compared with a conventional network. We believe SDN provides new research opportunities for security, and it can significantly affect network security research in many different ways. Accordingly, the SDN architecture enables networks to actively monitor traffic and analyze threats to facilitate security policy modification and security service insertion. The separation of the data and control planes, however, opens security challenges, such as man-in-the-middle (MITM) attacks, denial-of-service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to each layer of SDN: the application layer, the southbound and northbound interfaces, the controller layer, and the data layer. From a security point of view, the components that make up the SDN architecture have a number of vulnerabilities, which may be exploited by attackers to perform malicious activities and thereby affect the network and its services. Attacks on software-defined networks are, unfortunately, already a reality. In a nutshell, this paper highlights architectural weaknesses and develops attack vectors at each layer, leading to conclusions about further progress in identifying the consequences of attacks and proposing mitigation strategies.

Keywords: software-defined networking, security, SDN, 5G/IMT-2020

Procedia PDF Downloads 84
15242 Exploring the Use of Universal Design for Learning to Support Deaf Learners in Lesotho Secondary Schools: English Teachers' Voice

Authors: Ntloyalefu Justinah, Fumane Khanare

Abstract:

English learning has been found to be one of the prevalent areas of difficulty for Deaf learners. Studies indicate that this challenge experienced by Deaf learners is an upsetting concern globally, attributed to various factors such as the way English is taught at schools and teachers' lack of skills and knowledge, and it therefore impacts negatively on their academic performance. Despite any difficulty in learning it, this language is nowadays considered the key tool to an educational and occupational career, especially in Lesotho. This paper therefore intends to contribute to the existing literature by providing the views of Lesotho English teachers, focusing on how effectively Universal Design for Learning (UDL) can be implemented to enhance the academic performance of Deaf learners in the context of the English language classroom. This study sought to explore the use of UDL to support Deaf learners in Lesotho secondary schools. The study is informed by an interpretive paradigm and situated within a qualitative research approach. Ten participating English teachers from two inclusive schools were purposefully selected and interviewed by telephone to generate data for this study. The data were thematically analysed. The findings indicated that even though UDL is identified as highly proficient and promotes flexibility in teaching methods, teachers reflect limited knowledge of the UDL approach. The findings further showed that UDL ensures education for all learners, including marginalised groups such as learners with disabilities, through different teaching strategies. This means the findings signify the effective use of UDL for better performance in the English language by Deaf learners (DLs). This aligns with literature showing that mobilising English teachers as assets helps DLs to be engaged and to have control in their communities by defining and solving problems using their own resources and connections to other networks for asset exchange. The study therefore concludes that although teachers assume they are knowledgeable about the definition of UDL, they have limited practice of the approach; thus, they need to be equipped with techniques and skills to support the performance of DLs by using the UDL approach in their English teaching. The researchers recommend raising awareness of UDL principles within the Ministry of Education and Training, teacher-training universities and teacher-training colleges, so that UDL can be included in their curricula and teachers can be properly trained to apply it effectively in their teaching.

Keywords: deaf learners, Lesotho, support learning, universal design for learning

Procedia PDF Downloads 89
15241 Assessment of Procurement-Demand of Milk Plant Using Quality Control Tools: A Case Study

Authors: Jagdeep Singh, Prem Singh

Abstract:

Milk is considered an essential and complete food. The present study was conducted at the Milk Plant, Mohali, with particular reference to the procurement section, where the cash inflow was at its maximum, with the objective of achieving higher productivity and reducing the wastage of milk. It was observed that from January 2014 to March 2014 the average procurement of milk was 4,19,361 litres per month, at a procurement cost of Rs 35 per litre. The total cost of procurement was thereby equal to Rs 1 crore 46 lakh per month, but there was a mismatch between the procurement and production of milk, which led to an average loss of Rs 12,94,405 per month. To solve the procurement-production problem, quality control tools such as brainstorming, flow charts, cause-and-effect diagrams and Pareto analysis were applied wherever applicable. With the successful implementation of these quality control tools, an average saving of Rs 4,59,445 per month was achieved.

Keywords: milk, procurement-demand, quality control tools

Procedia PDF Downloads 510
15240 Effective Communication with the Czech Customers 50+ in the Financial Market

Authors: K. Matušínská, H. Starzyczná, M. Stoklasa

Abstract:

The paper deals with finding and describing effective marketing communication forms relating to the segment 50+ in the financial market in the Czech Republic. The segment 50+ can be seen as having great marketing potential in the future, but unfortunately Czech financial institutions have not yet reacted sufficiently to this fact and have not prepared appropriate marketing programs for this customer segment. Demographic aging is a fundamental characteristic of the current European population evolution, but the prospect of further population aging is more noticeable in the Czech Republic. This paper is based on data from one part of a primary marketing research study. The paper determines the basic problem areas, gives a definition of marketing communication in the financial market, and defines the primary research problem, hypotheses and research methodology. Finally, a suitable marketing communication approach to the selected sub-segment aged 50-60 years is proposed according to the marketing research findings.

Keywords: population aging in the Czech Republic, segment 50+, financial services, marketing communication, marketing research, marketing communication approach

Procedia PDF Downloads 424
15239 Investigation of Maximalist Approaches in Furniture Design

Authors: Emine Yuksel, Murat Kiliç, Onur Ülker

Abstract:

Although minimalism has long been present in the field of interior design, it has also provoked a wide range of reactions. The overly simple use of minimalism, with its feeling of emptiness in space and furniture design, came to be found extremely boring, and as a reaction to minimalism the movement of maximalism emerged. A more extravagant, splendid, magnificent and comfortable design approach took its place, favouring the greatest, the largest and the extreme. Thus, the "less is more" philosophy of minimalism was answered by "less is a bore", giving rise to a new interpretation in the field of interior design. While maximalism recalls the Victorian, Rococo, Arts and Crafts and Neoclassical styles in interior design, it draws attention to furniture designs that cover all areas of a space all in one. In this study, we examine the effect of the maximalist approach, born as a reaction to minimalism, on furniture. First, how maximalism emerged and its philosophy are explained, and the literature is surveyed and investigated. As a research method, the study investigates work undertaken by pioneering interior space designers and architects. The findings of this study are evaluated in the conclusion section.

Keywords: furniture design, maximalism, minimalism, texture

Procedia PDF Downloads 305
15238 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, odor is the most evocative and the least understood; odor testing has been mysterious, and odor data fabled, to most practitioners. The problem of recognition and classification of odor is important to solve. The ability to smell and predict whether an artifact is of further use or has become undesirable for consumption, and the translation of this problem into a model, are worth consideration. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color, and if incorporated into a machine it would be extremely useful. For cataloguing the odor of peas, trees and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are incapable of making effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems in set-ups where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either for modeling data directly or as an intermediate step in forming a probability density function. The models Linear Discriminant Analysis and the Naive Bayes classifier have been used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of using generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is an electronic nose; this device is designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in the form of the performance measures accuracy, precision and recall, and they show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
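
A minimal sketch of the comparison between the two classifiers named above, using scikit-learn on electronic-nose feature vectors; the feature files are hypothetical stand-ins for the cashew dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X = np.load("enose_features.npy")   # sensor responses, one row per sample
y = np.load("enose_labels.npy")     # odor class per sample

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("Naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```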

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 366
15237 Investigation of Mechanical and Tribological Property of Graphene Reinforced SS-316L Matrix Composite Prepared by Selective Laser Melting

Authors: Ajay Mandal, Jitendar Kumar Tiwari, N. Sathish, A. K. Srivastava

Abstract:

A fundamental investigation is performed on the development of a graphene (Gr) reinforced stainless steel 316L (SS 316L) metal matrix composite via selective laser melting (SLM) in order to improve the specific strength and wear resistance of SS 316L. Firstly, SS 316L powder and graphene were mixed in a fixed ratio using low-energy planetary ball milling. The milled powder was then subjected to the SLM process to fabricate composite samples at a laser power of 320 W and an exposure time of 100 µs. The prepared composite was mechanically tested (hardness and tensile tests) at ambient temperature, and the results indicate that the properties of the composite increased significantly with the addition of 0.2 wt.% Gr. Increases of about 25% (from 194 to 242 HV) in hardness and 70% (from 502 to 850 MPa) in yield strength were obtained. Raman mapping and XRD were performed to examine the distribution of Gr in the matrix and its effect on carbide formation, respectively. The Raman maps show a uniform distribution of graphene inside the matrix. An electron backscatter diffraction (EBSD) map of the prepared composite was analyzed under FESEM in order to understand the microstructure and grain orientation. Due to the thermal gradient, elongated grains were observed along the building direction, and the grains became finer with the addition of Gr. Most mechanical components are subjected to several types of wear; it is therefore very necessary to improve the wear properties of the component, and hence, apart from strength and hardness, the tribological properties of the composite were also measured under dry sliding conditions. The solid lubrication provided by Gr plays an important role during sliding, reducing the wear rate of the composite by up to 58%. The surface roughness of the worn surface, measured by 3D surface profilometry, was also reduced by up to 70%. Finally, it can be concluded that SLM is an efficient method for fabricating cutting-edge metal matrix nano-composites with Gr-like reinforcements, which are very difficult to fabricate through conventional manufacturing techniques. The prepared composite has superior mechanical and tribological properties and can be used for a wide variety of engineering applications. However, due to the limited literature in this domain, more experimental work, such as thermal property analysis, needs to be performed and is part of an ongoing study.

Keywords: selective laser melting, graphene, composite, mechanical property, tribological property

Procedia PDF Downloads 125
15236 Controlled Synthesis of CdSe Quantum Dots via Microwave-Enhanced Process: A Green Approach for Mass Production

Authors: Delele Worku Ayele, Bing-Joe Hwang

Abstract:

A method that does not employ hot injection techniques has been developed for the size-tunable synthesis of high-quality CdSe quantum dots (QDs) with a zinc blende structure. In this environmentally benign synthetic route, which uses relatively less toxic precursors, solvents, and capping ligands, CdSe QDs that absorb visible light are obtained. The size of the as-prepared CdSe QDs and, thus, their optical properties can be manipulated by changing the microwave reaction conditions. The QDs are characterized by XRD, TEM, UV-vis, FTIR, time-resolved fluorescence spectroscopy, and fluorescence spectrophotometry. In this approach, the reaction is conducted in open air and at a much lower temperature than in hot injection techniques. The use of microwaves in this process allows for a highly reproducible and effective synthesis protocol that is fully adaptable for mass production and can be easily employed to synthesize a variety of semiconductor QDs with the desired properties. The possible application of the as-prepared CdSe QDs has been also assessed using deposition on TiO2 films.

Keywords: average lifetime, CdSe QDs, microwave (MW), mass production, oleic acid, Na2SeSO3

Procedia PDF Downloads 300
15235 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficients of the object. It is known that CT images are inherently more prone to artefacts due to the image formation process, in which a large number of independent detectors are involved and are assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. It is therefore desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. A denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
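
A much-reduced sketch of one denoising auto-encoder layer in PyTorch; the paper's stacked architecture, residual-driven dropout and Total-Variation decomposition are not reproduced, and the sizes are illustrative.

```python
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, n_in=4096, n_hidden=1024):
        super().__init__()
        # deterministic non-linear mapping to a hidden representation
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        # map back to a reconstruction of the same size as the input
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoEncoder()
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent
loss_fn = nn.MSELoss()                              # squared reconstruction error

clean = torch.rand(32, 4096)                   # flattened intrinsic-image patches
noisy = clean + 0.1 * torch.randn_like(clean)  # corrupted input
opt.zero_grad()
loss = loss_fn(model(noisy), clean)            # reconstruct the clean signal
loss.backward()
opt.step()
```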

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 178
15234 Recognition of Voice Commands of Mentor Robot in Noisy Environment Using Hidden Markov Model

Authors: Khenfer Koummich Fatma, Hendel Fatiha, Mesbahi Larbi

Abstract:

This paper presents an approach based on Hidden Markov Models (HMMs) using the HTK tools. The goal is to create a human-machine interface with a voice recognition system that allows the operator to teleoperate a mentor robot to execute specific tasks such as rotate, raise, close, etc. This system should take into account different levels of environmental noise. The approach has been applied to isolated words representing the robot commands, pronounced in two languages: French and Arabic. The obtained recognition rate is the same for both languages, Arabic and French, for the noise-free words. However, there is a slight difference in favor of the French speech when Gaussian white noise is added with a Signal-to-Noise Ratio (SNR) equal to 30 dB; in this case, the Arabic speech recognition rate is 69%, and the French speech recognition rate is 80%. This can be explained by the phonetic context of each language when noise is added.
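
A minimal sketch of isolated-word recognition with one Gaussian HMM per command; hmmlearn and librosa stand in here for the HTK toolchain used in the paper, and the audio file names are hypothetical.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc(path, sr=16000, n_mfcc=13):
    """MFCC feature matrix (frames x coefficients) for one utterance."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_word_model(paths, n_states=5):
    """Fit one Gaussian HMM on all training examples of a word."""
    feats = [mfcc(p) for p in paths]
    X, lengths = np.vstack(feats), [f.shape[0] for f in feats]
    return hmm.GaussianHMM(n_components=n_states, covariance_type="diag").fit(X, lengths)

# Hypothetical training sets: a few recordings per robot command.
training_sets = {
    "rotate": ["rotate_01.wav", "rotate_02.wav"],
    "raise": ["raise_01.wav", "raise_02.wav"],
    "close": ["close_01.wav", "close_02.wav"],
}
models = {w: train_word_model(p) for w, p in training_sets.items()}

def recognize(path):
    """Pick the command whose HMM gives the highest log-likelihood."""
    feats = mfcc(path)
    return max(models, key=lambda w: models[w].score(feats))
```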

Keywords: Arabic speech recognition, Hidden Markov Model (HMM), HTK, noise, TIMIT, voice command

Procedia PDF Downloads 358
15233 Nanofluid based on Zinc Oxide/Ferric Oxide Nanocomposite as Additive for Geothermal Drilling Fluids

Authors: Anwaar O. Ali, Mahmoud Fathy Mubarak, Mahmoud Ibrahim Abdou, Hector Cano Esteban, Amany A. Aboulrous

Abstract:

Corrosion resistance and lubrication are crucial characteristics required of geothermal drilling fluids. In this study, a ZnO/Fe₂O₃ nanocomposite was fabricated and incorporated into the structure of cetyltrimethylammonium bromide (CTAB). Several physicochemical techniques were utilized to analyze and describe the synthesized nanomaterials. The surface morphology of the composite was assessed through scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDAX). The corrosion inhibition capabilities of these materials were explored across various corrosive environments, with weight loss and electrochemical methods used to determine the corrosion inhibition activity of the prepared nanomaterials. The results demonstrate a high level of protection achieved by the composite. Additionally, the lubricant coefficient and extreme-pressure properties were evaluated.

Keywords: nanofluid, corrosion, geothermal drilling fluids, ZnO/Fe2O3

Procedia PDF Downloads 52
15232 Application of Life Cycle Assessment “LCA” Approach for a Sustainable Building Design under Specific Climate Conditions

Authors: Djeffal Asma, Zemmouri Noureddine

Abstract:

In order for building designers to be able to balance environmental concerns with other performance requirements, they need clear and concise information. For certain decisions during the design process, qualitative guidance, such as design checklists or guidelines, may not be sufficient for evaluating the environmental benefits of different building materials, products and designs. In this case, quantitative information, such as that generated through a life cycle assessment, provides the most value. LCA provides a systematic approach to evaluating the environmental impacts of a product or system over its entire life. In the case of buildings, the life cycle includes the extraction of raw materials; the manufacturing, transporting and installing of building components or products; and the operating and maintaining of the building. By integrating LCA into the building design process, designers can evaluate the life cycle impacts of building designs, materials, components and systems and choose the combinations that reduce the building's life cycle environmental impact. This article attempts to give an overview of the integration of the LCA methodology in the context of building design and focuses on the use of this methodology for environmental considerations concerning process design and optimization. A multiple case study was conducted in order to assess the benefits of LCA as a decision-making aid during the first stages of building design under the specific climate conditions of the north-east region of Algeria. It is clear that the LCA methodology can help to assess and reduce the impact of a building design and its components on the environment, even if the implementation process is rather long and complicated and lacks a global approach that includes human factors. It is also demonstrated that using LCA for multi-objective optimization of the building process will certainly facilitate improvements in design and decision making for both new designs and retrofit projects.

Keywords: life cycle assessment, buildings, sustainability, elementary schools, environmental impacts

Procedia PDF Downloads 531
15231 Economic Benefits in Community-Based Forest Management from the Users' Perspective in Community Forestry, Nepal

Authors: Sovit Pujari

Abstract:

In developing countries like Nepal, the community-based forest management approach has often been glorified as one of the best forest management alternatives for maximizing forest benefits. Though the approach has succeeded in constructing local-level institutions and conserving forest biodiversity, the question of how the local communities perceive the forest benefits has remained largely unexamined by researchers and policy makers. The paper aims to explore the understanding of forest benefits from the perspective of the local communities who use the forests, in terms of institutional stability, equity and livelihood opportunity, and ecological stability. The paper reveals that the local communities have a mixed understanding of the forest benefits. The institutional and ecological activities carried out by the local communities indicate that they have a good understanding of the forest benefits. However, inequality in sharing the forest benefits, a low pricing strategy with its negative consequences for the valuation of forest products, and limited livelihood opportunities indicate a poor understanding.

Keywords: community based forest management, low pricing strategy, forest benefits, livelihood opportunities, Nepal

Procedia PDF Downloads 330