Search results for: Simulation Model

2980 Integrating Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new fast simplification method is presented. The method realizes Karnaugh maps with a large number of variables. In order to accelerate its operation, a new approach for fast detection of groups of ones is presented, implemented in the frequency domain. The search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design for neural networks. Neural networks are used because they are fault tolerant and can therefore recognize signals even in the presence of noise or distortion, which is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural networks (MNNs) that divide the input space into several homogenous regions. The approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.

Keywords: Boolean functions, simplification, Karnaugh map, implementation of logic functions, modular neural networks.
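
To make the frequency-domain trick concrete, here is a minimal Python sketch (not the authors' code) of cross correlation computed via the FFT, used to locate a "group of ones" in a Boolean vector; the signal contents and the template width are invented for illustration.

```python
import numpy as np

def xcorr_fft(data, template):
    """Cross-correlate `template` with `data` via the frequency domain.

    Uses the identity corr = IFFT(FFT(data) * conj(FFT(template))),
    zero-padding both signals to a common length to avoid wrap-around.
    """
    n = len(data) + len(template) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two for speed
    spec = np.fft.rfft(data, nfft) * np.conj(np.fft.rfft(template, nfft))
    return np.fft.irfft(spec, nfft)[:n]

# Illustrative use: locate a group-of-ones pattern inside a Boolean vector.
data = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0], dtype=float)
template = np.ones(3)                          # a "group of ones" of width 3
scores = xcorr_fft(data, template)
print(int(np.argmax(scores)))                  # peak index marks a detected group
```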

2979 Digital Twin of Real Electrical Distribution System with Real-Time Recursive Load Flow Calculation and State Estimation

Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone

Abstract:

Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions about the future status, to the Distribution System Operator (DSO), producers and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the developed tools inside the application, tested on a real EDS for grid observability, Smart Recursive Load Flow (SRLF) calculation and state estimation of loads in MV feeders.

Keywords: Digital Twin, Distribution System Operator, Electrical Distribution System, Smart Grid Controller, Supervisory Control and Data Acquisition System, Smart Recursive Load Flow.
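
The abstract does not spell out the SRLF algorithm, so as a generic stand-in the sketch below runs a backward/forward sweep load flow, a common recursive scheme for radial distribution feeders; the feeder topology, impedances and loads are invented.

```python
import numpy as np

# Minimal backward/forward sweep load flow for a 3-bus radial feeder.
# Only a generic stand-in for the (unspecified) SRLF method; impedances,
# loads and the slack voltage below are invented.
Z = np.array([0.02 + 0.04j, 0.03 + 0.05j])     # series impedance per branch (pu)
S_load = np.array([0.5 + 0.2j, 0.3 + 0.1j])    # complex load at buses 1 and 2 (pu)
V = np.ones(3, dtype=complex)                   # bus 0 is the slack at 1.0 pu

for _ in range(20):                             # fixed-point iteration
    I_load = np.conj(S_load / V[1:])            # backward sweep: load currents
    I_branch = np.cumsum(I_load[::-1])[::-1]    # branch currents from feeder end
    for k in range(2):                          # forward sweep: update voltages
        V[k + 1] = V[k] - Z[k] * I_branch[k]

print(np.abs(V))                                # converged voltage magnitudes (pu)
```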

2978 Quality Fed-Batch Bioprocess Control: A Case Study

Authors: Mihai Caramihai, Irina Severin

Abstract:

Bioprocesses are considered difficult to control because their dynamic behavior is highly nonlinear and time-varying, in particular when they are operated in fed-batch mode. The research objective of this study was to develop an appropriate control method for a complex bioprocess and to implement it on a laboratory plant. Hence, an intelligent control structure has been designed in order to produce biomass and to maximize the specific growth rate.

Keywords: Fed-batch bioprocess, mass-balance model, fuzzy control.
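
As a hedged illustration of the kind of intelligent controller described, the sketch below evaluates two invented Mamdani-style fuzzy rules mapping a measured growth rate to a feed rate; the membership functions and rule outputs are illustrative, not the authors' design.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def feed_rate(growth_rate):
    """Two invented Mamdani rules:
      IF growth is LOW  THEN feed HIGH (1.0)
      IF growth is HIGH THEN feed LOW  (0.2)
    Defuzzified by a weighted average of the rule outputs."""
    w_low = tri(growth_rate, -0.1, 0.0, 0.25)   # membership in "LOW growth"
    w_high = tri(growth_rate, 0.15, 0.4, 0.6)   # membership in "HIGH growth"
    if w_low + w_high == 0.0:
        return 0.5                               # fallback when no rule fires
    return (w_low * 1.0 + w_high * 0.2) / (w_low + w_high)

print(feed_rate(0.1), feed_rate(0.35))          # more feed at low growth, less at high
```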

2977 Analysis of Linked in Series Servers with Blocking, Priority Feedback Service and Threshold Policy

Authors: Walenty Oniszczuk

Abstract:

The use of buffer thresholds, blocking and adequate service strategies are well-known techniques for traffic congestion control in computer networks. This motivates the study of series queues with blocking, feedback (service under the Head-of-Line (HoL) priority discipline) and finite-capacity buffers with thresholds. In this paper, the external traffic is modelled using the Poisson process and the service times are modelled using the exponential distribution. We consider a three-station network with two finite buffers, for which a set of thresholds (tm1 and tm2) is defined. This computer network behaves as follows. A task which finishes its service at station B is sent back to station A for re-processing with probability o. When the number of tasks in the second buffer exceeds the threshold tm2 and the number of tasks in the first buffer is less than tm1, the fed-back task is served under the HoL priority discipline. In the opposite case, a "no two priority services in succession" procedure (preventing a possible overflow in the first buffer) is applied to fed-back tasks. Using an open Markovian queuing schema with blocking, priority feedback service and thresholds, a closed-form, cost-effective analytical solution is obtained. The model of servers linked in series is very accurate: it is derived directly from a two-dimensional state graph and a set of steady-state equations, followed by calculations of the main measures of effectiveness. Consequently, efficient expressions of low computational cost are determined. Based on numerical experiments and the collected results, we conclude that the proposed model with blocking, feedback and thresholds can provide accurate performance estimates for networks linked in series.

Keywords: Blocking, Congestion control, Feedback, Markov chains, Performance evaluation, Threshold-based networks.
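
The closed-form solution itself is specific to the paper, but the underlying machinery, solving a Markov chain's steady-state equations, can be shown on a toy birth-death queue:

```python
import numpy as np

# Steady-state of a toy M/M/1/3 queue (states = 0..3 tasks in the buffer).
# The paper's model is a three-station network with thresholds; this only
# shows the basic machinery: solve pi Q = 0 together with sum(pi) = 1.
lam, mu, K = 0.8, 1.0, 3
Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    if n < K:
        Q[n, n + 1] = lam                       # arrival
    if n > 0:
        Q[n, n - 1] = mu                        # service completion
    Q[n, n] = -Q[n].sum()                       # generator rows sum to zero

A = np.vstack([Q.T, np.ones(K + 1)])            # append normalization equation
b = np.zeros(K + 2); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi, pi @ np.arange(K + 1))                # distribution and mean queue length
```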

2976 Conformation Prediction of Human Plasmin and Docking on Gold Nanoparticle

Authors: Wen-Shyong Tzou, Chih-Ching Huang, Chin-Hwa Hu, Ying-Tsang Lo, Tun-Wen Pai, Chia-Yin Chiang, Chung-Hao Li, Hong-Jyuan Jian

Abstract:

Plasmin plays an important role in the human circulatory system owing to its catalytic ability in fibrinolysis. The immediate injection of plasmin in stroke patients has intrigued many scientists to design vectors that can transport plasmin to the desired location in the human body. Here we predict the structure of human plasmin and investigate the interaction of plasmin with a gold nanoparticle. Because the crystal structure of plasminogen has been solved, we deleted the N-terminal domain (Pan-apple domain) of plasminogen and generated a mimic of the active form of this enzyme (plasmin). We conducted a simulated annealing process on plasmin and discovered that a very large conformational change occurs: Kringle domains 1, 4 and 5 were observed to leave their original locations relative to the main body of the enzyme, and the original doughnut shape of the enzyme was transformed into a V-shape by the opening of its two arms. This observation of conformational change is consistent with experimental results from neutron scattering and centrifugation. We subsequently docked plasmin on a simulated gold surface to predict their interaction. The V-shaped plasmin can utilize its Kringle domains and catalytic domain to contact the gold surface. Our findings not only reveal the flexibility of the plasmin structure but also provide a guide for the design of plasmin-gold nanoparticles.

Keywords: Docking, gold nanoparticle, molecular simulation, plasmin.

2975 A Novel Application of Network Equivalencing Method in Time Domain to Precise Calculation of Dead Time in Power Transmission Lines

Authors: J. Moshtagh, L. Eslami

Abstract:

Various studies have shown that about 90% of single line-to-ground faults occurring on high-voltage transmission lines are transient in nature. This type of fault is cleared by a temporary outage (by the single-phase auto-reclosure). The interval between the opening and reclosing of the faulted phase's circuit breakers is named the "dead time", which varies over several hundred milliseconds. For the adjustment of traditional single-phase auto-reclosures, which usually are not intelligent, it is necessary to calculate the dead time precisely in the off-line condition. If the dead time used in the adjustment of the single-phase auto-reclosure is less than the real dead time, the reclosing of circuit breakers threatens the power system seriously. In this paper, therefore, a novel approach for the precise calculation of dead time in power transmission lines, based on network equivalencing in the time domain, is presented. This approach has considerably higher precision in comparison with the traditional method based on the Thevenin equivalent circuit. For comparison between the proposed approach and the traditional method, a comprehensive simulation with EMTP-ATP is performed on an extensive power network.

Keywords: Dead Time, Network Equivalencing, High Voltage Transmission Lines, Single Phase Auto-Reclosure.
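
For readers unfamiliar with the baseline being improved upon, the sketch below computes the classical Thevenin equivalent at a bus from an invented two-bus phasor model; the paper's time-domain equivalencing is not reproduced here.

```python
import numpy as np

# Classical Thevenin reduction at a fault bus, from an invented nodal model:
# a 1.0 pu source behind 0.1j, a 0.2j line to bus 2, and a 0.3j shunt there.
Y = np.array([[1/0.1j + 1/0.2j, -1/0.2j],
              [-1/0.2j, 1/0.2j + 1/0.3j]])      # 2-bus admittance matrix
I = np.array([1.0/0.1j, 0.0])                    # Norton injection of the source
V_open = np.linalg.solve(Y, I)                   # open-circuit bus voltages
Z_th = np.linalg.inv(Y)[1, 1]                    # Thevenin impedance at bus 2
print(V_open[1], Z_th)                           # 0.5 pu source, 0.15j impedance
```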

2974 Some Properties of IF Rough Relational Algebraic Operators in Medical Databases

Authors: Chhaya Gangwal, R. N. Bhaumik, Shishir Kumar

Abstract:

Some properties of Intuitionistic Fuzzy (IF) rough relational algebraic operators under an IF rough relational data model are investigated and illustrated using diabetes and heart disease databases. These properties are important and desirable for processing queries in an effective and efficient manner.

Keywords: IF Set, Rough Set, IF Rough Relational Database, IF Rough Relational Operators.
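
For background, an intuitionistic fuzzy element carries a membership degree mu and a non-membership degree nu with mu + nu <= 1, and the standard union/intersection operators act element-wise, as in this toy sketch (values invented); the paper's relational algebra builds on such operations.

```python
# An intuitionistic fuzzy (IF) element is a pair (mu, nu) with mu + nu <= 1.
# Standard IF union takes max on mu and min on nu; intersection the reverse.
def if_union(x, y):
    return (max(x[0], y[0]), min(x[1], y[1]))

def if_intersection(x, y):
    return (min(x[0], y[0]), max(x[1], y[1]))

a = (0.7, 0.2)      # e.g. membership/non-membership of "high blood sugar"
b = (0.4, 0.5)
print(if_union(a, b), if_intersection(a, b))    # (0.7, 0.2) and (0.4, 0.5)
```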

2973 Why Are Entrepreneurs Resistant to E-tools?

Authors: D. Ščeulovs, E. Gaile-Sarkane

Abstract:

Latvia ranks fourth in the world in broadband internet speed. The total number of internet users in Latvia exceeds 70% of its population. The number of active mailboxes of the local internet e-mail service Inbox.lv accounts for 68% of the population and 97.6% of the total number of internet users. The Latvian portal Draugiem.lv is a phenomenon of social media, because 58.4% of the population and 83.5% of internet users use it. A majority of Latvian company profiles are available on social networks, the most popular being Twitter.com. These and other figures show that consumers and companies are actively using the Internet.

However, when the authors analyzed in a number of studies how enterprises employ the e-environment, namely e-environment tools, they arrived at conclusions that are not as flattering as the aforementioned statistics. There is an obvious contradiction between the statistical data and the actual studies. As a result, the authors have posed a question: Why are entrepreneurs resistant to e-tools? In order to answer this question, the authors have addressed the Technology Acceptance Model (TAM). The authors analyzed each phase and determined several factors affecting the use of the e-environment, reaching the main conclusion that entrepreneurs do not have a sufficient level of e-literacy (digital literacy).

The authors employ well-established quantitative and qualitative methods of research: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc.

The theoretical and methodological background of the research is formed by scientific research and publications, material from the mass media and professional literature, statistical information from legal institutions, as well as information collected by the authors during the survey.

Keywords: E-environment, e-environment tools, technology acceptance model, factors.

2972 Investigations on the Influence of Optimized Charge Air Cooling for a Diesel Passenger Car

Authors: Christian Doppler, Gernot Hirschl, Gerhard Zsiga

Abstract:

Starting in 2020, an EU-wide CO2 limit of 95 g/km is scheduled for the average of an OEM's passenger car fleet. Taking that into consideration, additional improvement measures of the Diesel cycle are necessary in order to reduce fuel consumption and emissions while boosting, or at least keeping, performance values at the same time. The present article deals with the possibilities of an optimized air/water charge air cooler, also called an iCAC (indirect Charge Air Cooler), for a Diesel passenger car under extreme boundary conditions. In this context, the precise objective was to show the impact of improved intercooling on the engine working process (fuel consumption and NOx emissions). Several extreme boundaries - e.g. varying ambient temperatures or mountainous routes - that will become very important in the near future regarding RDE (Real Driving Emissions) were the subject of the investigation. With the introduction of RDE in 2017 (EU6c measure), the controversial NEDC (New European Driving Cycle) will belong to the past, and OEMs will have to avoid harmful emissions in any conceivable real-life situation. This is certainly going to lead to optimization measures in the powertrain, which in turn is going to make the implementation of iCACs, presently used solely in the premium class, more and more attractive for compact-class cars. The investigations showed a benefit in fuel consumption (FC) between 1 and 3% for the iCAC under real-world conditions.

Keywords: Air/Water Charge Air Cooler, Co-Simulation, Diesel Working Process, EURO VI, Fuel Consumption.

2971 Experimental and Theoretical Investigation of Rough Rice Drying in Infrared-assisted Hot Air Dryer Using Artificial Neural Network

Authors: D. Zare, H. Naderi, A. A. Jafari

Abstract:

The drying characteristics of rough rice (Lenjan variety) with an initial moisture content of 25% dry basis (db) were studied in a hot air dryer assisted by infrared heating. Three inlet air temperatures (30, 40 and 50°C), four infrared radiation intensities (0, 0.2, 0.4 and 0.6 W/cm²) and three inlet air speeds (0.1, 0.15 and 0.2 m/s) were studied. The bending strength of the brown rice kernel, the percentage of cracked kernels and the drying time were measured and evaluated. The results showed that increasing the inlet air temperature and the infrared radiation intensity resulted in a decrease in drying time. High bending strength and a low percentage of cracked kernels were obtained when paddy was dried in the infrared-assisted hot air dryer. The differences between these factors and their interactive effects were significant (p<0.01). An intensity level of 0.2 W/cm² was found to be optimum for radiation drying. Furthermore, in the present study, the application of an Artificial Neural Network (ANN) for predicting the moisture content during drying (the output parameter for ANN modeling) was investigated. Infrared radiation intensity, drying air temperature, inlet air speed and drying time were considered as input parameters for the model. An ANN model with two hidden layers of 8 and 14 neurons was selected for studying the influence of transfer functions and training algorithms. The results revealed that a network with the tansig (hyperbolic tangent sigmoid) transfer function and the trainlm (Levenberg-Marquardt) back-propagation algorithm made the most accurate predictions for the paddy drying system. The mean square error (MSE) was calculated, and it was found that the random errors were within an acceptable range of ±5%, with a coefficient of determination (R²) of 99%.

Keywords: Rough rice, Infrared-hot air, Artificial Neural Network
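
A rough Python analogue of the selected network (two hidden layers of 8 and 14 tanh neurons) is sketched below on synthetic drying data; scikit-learn has no Levenberg-Marquardt solver, so L-BFGS stands in for trainlm, and the drying curve is invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Inputs: (IR intensity, air temperature, air speed, drying time);
# output: moisture content. Training data are synthetic, not the paper's.
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 30.0, 0.10, 0.0], [0.6, 50.0, 0.20, 120.0], size=(200, 4))
y = 25.0 * np.exp(-0.02 * (1 + X[:, 0]) * X[:, 3])  # fake exponential drying curve

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 14), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[0.2, 40.0, 0.15, 60.0]]))  # predicted moisture (% db)
```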

2970 Development of EPID-based Real-Time Dose Verification for Dynamic IMRT

Authors: Todsaporn Fuangrod, Daryl J. O'Connor, Boyd MC McCurdy, Peter B. Greer

Abstract:

An electronic portal imaging device (EPID) has become a method of patient-specific IMRT dose verification for radiotherapy. Research studies have focused on pre- and post-treatment verification; however, there are currently no interventional procedures using EPID dosimetry that measure the dose in real time, as a mechanism to ensure that overdoses do not occur and that underdoses are detected as soon as practically possible. As a result, an EPID-based real-time dose verification system for dynamic IMRT was developed and implemented with MATLAB/Simulink. The EPID image acquisition was set to continuous acquisition mode at 1.4 images per second. The system defined the time-constraint gap, or execution gap, at the image acquisition time, so that every calculation must be completed before the next image capture is completed. In addition, the γ-evaluation method was used for dose comparison, with two types of comparison processes: individual-image and cumulative-dose comparison monitoring. The outputs of the system are the γ-map, the percentage of γ<1, and the mean γ versus time, all in real time. Two strategies were used to test the system: an error detection test and a clinical data test. The system can monitor the actual dose delivery compared with the treatment plan data or a previous treatment dose delivery, which means a radiation therapist is able to switch off the machine when an error is detected.

Keywords: real-time dose verification, EPID dosimetry, simulation, dynamic IMRT
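
For readers unfamiliar with the γ-evaluation, the sketch below computes a simplistic 1-D global gamma index on a toy profile; the paper's 2-D real-time pipeline, resolution and tolerances are not reproduced.

```python
import numpy as np

def gamma_index(dose_eval, dose_ref, dx, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma evaluation on a common grid.

    gamma(r) = min over r' of sqrt( (d(r')-D(r))^2/dose_tol^2
                                  + (r'-r)^2/dist_tol^2 );
    a point passes when gamma < 1. `dx` is the pixel spacing in mm and
    dose_tol is a fraction of the reference maximum (global normalisation).
    """
    dd = dose_tol * dose_ref.max()
    x = np.arange(len(dose_ref)) * dx
    gammas = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        term = ((dose_eval - dose_ref[i]) / dd) ** 2 + ((x - x[i]) / dist_tol) ** 2
        gammas[i] = np.sqrt(term.min())
    return gammas

ref = np.exp(-((np.arange(100) - 50.0) ** 2) / 300.0)    # toy beam profile
meas = np.roll(ref, 1) * 1.02                             # shifted, 2% hotter
g = gamma_index(meas, ref, dx=1.0)
print((g < 1).mean())                                     # pass rate, as in a gamma-map
```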

2969 The Application of Real Options to Capital Budgeting

Authors: George Yungchih Wang

Abstract:

Real options theory suggests that managerial flexibility embedded within irreversible investments can account for a significant value in project valuation. Although the argument has become the dominant focus of capital investment theory over decades, recent survey literature in capital budgeting indicates that corporate practitioners still do not explicitly apply real options in investment decisions. In this paper, we explore how real options decision criteria can be transformed into equivalent capital budgeting criteria under the consideration of uncertainty, assuming that the underlying stochastic process follows a geometric Brownian motion (GBM), a mixed diffusion-jump (MX), or a mean-reverting process (MR). These equivalent valuation techniques can be readily decomposed into conventional investment rules and "option impacts", the latter of which describe the impacts on optimal investment rules when the option value is considered. Based on numerical analysis and Monte Carlo simulation, three major findings are derived. First, it is shown that real options can be successfully integrated into the mindset of conventional capital budgeting. Second, the inclusion of option impacts tends to delay investment; the delay effect is the most significant under a GBM process and the least significant under an MR process. Third, it is optimal to adopt the new capital budgeting criteria in investment decision-making, and adopting a suboptimal investment rule without considering real options could lead to a substantial loss in value.

Keywords: Real options, capital budgeting, geometric Brownian motion, mixed diffusion-jump, mean-reverting process.
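
A minimal Monte Carlo sketch of the GBM case is given below; it prices only a simplified invest-now-versus-wait-to-T comparison rather than the paper's optimal-stopping criteria, and all parameter values are invented.

```python
import numpy as np

# Monte Carlo sketch of the option to delay an irreversible project whose
# value V follows a GBM: dV = alpha*V dt + sigma*V dW. Parameters are
# illustrative; the paper also treats jump-diffusion and mean reversion.
rng = np.random.default_rng(1)
V0, I, alpha, sigma, r, T = 100.0, 110.0, 0.02, 0.25, 0.05, 5.0
n_paths, n_steps = 100_000, 60
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
paths = V0 * np.exp(np.cumsum((alpha - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))

npv_now = max(V0 - I, 0.0)                       # conventional NPV rule
option = np.exp(-r * T) * np.maximum(paths[:, -1] - I, 0.0)  # invest at T only
print(npv_now, option.mean())                    # the "option impact" is the gap
```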

2968 LAYMOD: A Layered and Modular Platform for CAx Collaboration Management and Supporting Product Data Integration Based on the STEP Standard

Authors: Omid F. Valilai, Mahmoud Houshmand

Abstract:

Nowadays, companies strive to survive in a competitive global environment. To speed up product development and modifications, it is suggested to adopt a collaborative product development approach. However, despite the advantages of new IT improvements, many CAx systems still work separately and locally. Collaborative design and manufacture requires a product information model that supports the related CAx product data models. To solve this problem many solutions have been proposed, of which the most successful is adopting the STEP standard as a product data model to develop a collaborative CAx platform. However, several obstacles usually slow down the implementation of the STEP standard in collaborative data exchange, management and integration, and should be considered: the evolution of the STEP Application Protocols (APs) over time, the huge number of STEP APs and conformance classes, the high costs of implementation, the costly process of converting older CAx software files to the STEP neutral file format, and a general lack of STEP knowledge. In this paper the requirements for a successful collaborative CAx system are discussed. The STEP standard's capability for product data integration and its shortcomings, as well as the dominant platforms for supporting CAx collaboration management and product data integration, are reviewed. Finally, a platform named LAYMOD is proposed to fulfil the requirements of a CAx collaborative environment and to integrate the product data. It is a layered platform that enables global collaboration among different CAx software packages/developers. It also adopts the STEP modular architecture and XML data structures to enable collaboration between CAx software packages as well as to overcome the STEP standard's limitations. The architecture and procedures of the LAYMOD platform to manage collaboration and avoid conflicts in product data integration are introduced.

Keywords: CAx, collaboration management, STEP application modules, STEP standard, XML data structures

2967 Fast Painting with Different Colors Using Cross Correlation in the Frequency Domain

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new technique for fast painting with different colors is presented. The idea of painting relies on applying masks with different colors to the background. Fast painting is achieved by applying these masks in the frequency domain instead of the spatial (time) domain. New colors can be generated automatically as a result of the cross correlation operation. This idea has been applied successfully for faster detection of specific data (faces, objects, patterns, and codes) using neural algorithms. Here, instead of performing cross correlation between the input data (e.g., an image or a stream of sequential data) and the weights of neural networks, the cross correlation is performed between the colored masks and the background. Furthermore, this approach is developed to reduce the number of computation steps required by the painting operation. The principle of the divide-and-conquer strategy is applied through background decomposition: each background is divided into small sub-backgrounds, and then each sub-background is processed separately using a single fast painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of fast painting algorithms. In contrast to using only a fast painting algorithm, the speed-up ratio increases with the size of the background when combining the fast painting algorithm with background decomposition. Simulation results show that painting in the frequency domain is faster than painting in the spatial domain.

Keywords: Fast Painting, Cross Correlation, Frequency Domain, Parallel Processing
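
The sketch below illustrates the two ingredients, frequency-domain mask application and background decomposition, using an averaging mask as a stand-in for a colour mask; in practice the block seams would need overlap handling, and each block could be painted by a parallel worker.

```python
import numpy as np
from scipy.signal import fftconvolve

# Apply a mask to the background by frequency-domain convolution, and split
# the background into sub-backgrounds that could be painted in parallel.
# The 3x3 mask and the background contents are invented.
background = np.random.default_rng(2).random((256, 256))
mask = np.full((3, 3), 1.0 / 9.0)               # a simple averaging "colour" mask

def paint(block):
    return fftconvolve(block, mask, mode="same")  # FFT-based, not sliding-window

# Divide and conquer: four sub-backgrounds, processed separately (seams
# between blocks would need overlap handling in a real implementation).
h, w = background.shape
blocks = [background[i:i + h // 2, j:j + w // 2]
          for i in (0, h // 2) for j in (0, w // 2)]
painted = [paint(b) for b in blocks]
top, bottom = np.hstack(painted[:2]), np.hstack(painted[2:])
print(np.vstack([top, bottom]).shape)            # (256, 256) reassembled result
```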

2966 A Mixed-Methods Approach to Developing and Evaluating an SME Business Support Model for Innovation in Rural England

Authors: Steve Fish, Chris Lambert

Abstract:

Cumbria is a geo-political county in Northwest England within which the Lake District National Park, a UNESCO World Heritage Site, is located. Whilst the area has a formidable reputation for natural beauty and historic assets, its innovation ecosystem is described as 'patchy' for a number of reasons. The county is one of the largest in England by area and is sparsely populated. This paper describes the needs, development and delivery of an SME business-support programme funded by the European Regional Development Fund, Lancaster University and the University of Cumbria. The Cumbria Innovations Platform (CUSP) Project has been designed to respond to the nuanced needs of SMEs in this locale, whilst promoting the adoption of research and innovation. CUSP utilizes a funnel method to support rural businesses with access to university innovation intervention. CUSP has been built on a three-tier model: Communicate, Collaborate and Create. The paper describes this project in detail and presents results in terms of output indicators achieved, a beneficiary telephone survey and wider economic forecasts. From a pragmatic point of view, the paper provides the experiences and reflections of the people who are delivering and evaluating knowledge exchange. The authors discuss some of the benefits, challenges and implications for both policy makers and practitioners. Finally, the paper aims to serve as an invitation to others who may consider adopting a similar method of university-industry collaboration in their own region.

Keywords: Regional business support, rural business support, university-industry collaboration, collaborative R&D, SMEs, knowledge exchange.

2965 An ACO Based Algorithm for Distribution Networks Including Dispersed Generations

Authors: B. Bahmani Firouzi, T. Niknam, M. Nayeripour

Abstract:

With the power system's movement toward restructuring, along with factors such as environmental pollution, the problems of transmission expansion, and advances in the construction technology of small generation units, it is expected that small units such as wind turbines, fuel cells and photovoltaics, which most of the time connect to distribution networks, will play a very essential role in the electric power industry. With the increasing use of small generation units, the management of distribution networks should be reviewed. The aim of this paper is to present a new method for the optimal management of active and reactive power in distribution networks with regard to the costs pertaining to the various types of dispersed generations, capacitors and the cost of electric energy obtained from the network. In other words, this method endeavours to select the optimal sources of active and reactive power generation and controlling equipment, such as dispersed generations, capacitors, under-load tap-changer transformers and substations, in a way that, firstly, their related costs are minimized and, secondly, technical and physical constraints are respected. Because the optimal management of distribution networks is an optimization problem with continuous and discrete variables, a new evolutionary method based on the Ant Colony Algorithm has been applied. The method is tested on two cases containing 23 and 34 buses, and the simulation results are shown in later sections.

Keywords: Distributed Generation, Optimal Operation Management of distribution networks, Ant Colony Optimization (ACO).
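
A generic ant colony loop, the search engine named in the paper, is sketched below on an invented discrete problem (one option per bus); the actual cost model with dispersed generations, capacitors and tap changers is not reproduced.

```python
import numpy as np

# Generic ant colony optimization over discrete choices. The "network" is a
# toy: pick one option per bus to minimize an invented additive cost;
# pheromone reinforcement drives the search toward good assignments.
rng = np.random.default_rng(3)
n_buses, n_options, n_ants, rho = 5, 4, 20, 0.1
cost_table = rng.random((n_buses, n_options))    # invented per-choice costs
tau = np.ones((n_buses, n_options))              # pheromone trails

best_cost, best_sol = np.inf, None
for _ in range(100):
    for _ant in range(n_ants):
        probs = tau / tau.sum(axis=1, keepdims=True)
        sol = np.array([rng.choice(n_options, p=probs[b]) for b in range(n_buses)])
        cost = cost_table[np.arange(n_buses), sol].sum()
        if cost < best_cost:
            best_cost, best_sol = cost, sol
    tau *= (1 - rho)                             # evaporation
    tau[np.arange(n_buses), best_sol] += 1.0     # reinforce the best solution
print(best_cost, cost_table.min(axis=1).sum())   # ACO result vs true optimum
```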

2964 DC Bus Voltage Regulator for Renewable Energy Based Microgrid Application

Authors: Bakari M. M. Mwinyiwiwa

Abstract:

Renewable energy based microgrids are being considered to provide electricity for the expanding energy demand in the grid distribution network and in grid-isolated areas. The technical challenges associated with their operation and controls are immense. Electricity generation by renewable energy sources is of a stochastic nature, so there is a demand for regulation of the voltage output in order to satisfy standard loads' requirements. In a renewable energy based microgrid, the energy sources give AC or DC voltages of stochastically variable magnitude. AC voltage regulation of micro and mini sources poses practical challenges as well as prohibitive costs. It is therefore practically and economically viable to convert the voltage outputs from stochastic AC and DC voltage sources to a constant DC voltage to satisfy various DC loads, including inverters which ultimately feed AC loads. This paper presents results obtained from a SEPIC converter based DC bus voltage regulator as a case study for a renewable energy microgrid application. Real-time simulation results show that, upon an appropriate choice of controller parameters for the control of the SEPIC converter, the output DC bus voltage can be kept constant regardless of wide voltage variations of the source. This feature is particularly important in situations where multiple renewable sources are to be integrated to supply a microgrid in grid-connected or isolated modes of operation.

Keywords: DC Voltage Regulator, microgrid, multisource, Renewable Energy, SEPIC Converter.
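
As a toy illustration of the control task, the sketch below regulates a DC bus with a discrete PI loop, reducing the SEPIC to its ideal static gain D/(1-D) and a first-order lag; gains, the time constant and the source step are invented, not the paper's real-time design.

```python
import numpy as np

# Toy DC bus regulation: a discrete PI controller drives a SEPIC reduced to
# its ideal conversion ratio D/(1-D) plus a first-order lag. The 5 ms lag,
# the gains and the source step are invented.
kp, ki, dt, tau = 0.05, 2.0, 1e-4, 5e-3
v_ref, v_bus, integ = 48.0, 0.0, 0.0
for k in range(5000):
    v_src = 60.0 if k < 2500 else 70.0           # step change in the source
    err = v_ref - v_bus
    integ += err * dt                            # integral action
    duty = float(np.clip(kp * err + ki * integ, 0.05, 0.90))
    v_target = v_src * duty / (1.0 - duty)       # ideal SEPIC conversion ratio
    v_bus += (v_target - v_bus) * dt / tau       # first-order bus response
print(round(v_bus, 2))                           # stays near the 48 V reference
```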

2963 Modelling of a Biomechanical Vertebral System for Seat Ejection in Aircrafts Using Lumped Mass Approach

Authors: R. Unnikrishnan, K. Shankar

Abstract:

In high-speed fighter aircraft, seat ejection is designed mainly for the safety of the pilot in case of an emergency. The strong windblast due to the high velocity of flight is one main difficulty in clearing the tail of the aircraft, and the excessive G-forces generated immobilize the pilot and hinder escape. In most cases, seats are ejected out of the aircraft by explosives or by rocket motors attached to the bottom of the seat. Ejection forces act primarily in the vertical direction, with the objective of attaining the maximum possible velocity in a specified period of time. The safe ejection parameters are studied to estimate the critical time of ejection for various geometries and velocities of flight. An equivalent analytical two-dimensional biomechanical model of the human spine has been built, consisting of vertebrae and intervertebral discs, using a lumped-mass approach. The 24 vertebrae of the cervical, thoracic and lumbar regions, together with the head mass and the pelvis, are modelled as 26 rigid structures, and the intervertebral discs are treated as 25 flexible joint structures. The rigid structures are modelled as mass elements and the flexible joints as spring and damper elements. The motions are restricted to the mid-sagittal plane to form a 26-degree-of-freedom system. The equations of motion are derived for the translational movement of the spinal column. An ejection force with a linearly increasing acceleration profile is applied as a vertical base excitation to the pelvis, and the dynamic vibrational response of each vertebra in the time domain is estimated.

Keywords: Biomechanical model, lumped mass, seat ejection, vibrational response.
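
A scaled-down sketch of such a lumped-mass chain is given below: five masses joined by spring-dampers, base-excited by a linearly increasing acceleration. All parameter values are invented, and the paper's model has 26 bodies and 25 joints.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Chain of masses (vertebrae) joined by spring-dampers (discs), written in
# coordinates relative to the base, so the base excitation appears as an
# inertial forcing term -a_base(t) on every mass.
n, m, k, c = 5, 1.0, 5e4, 50.0                  # bodies, mass, stiffness, damping

def base_accel(t):
    return 2000.0 * t                            # linearly increasing profile

def rhs(t, y):
    x, v = y[:n], y[n:]
    a = np.zeros(n)
    xb, vb = 0.0, 0.0                            # base point in the relative frame
    for i in range(n):
        below_x = x[i - 1] if i > 0 else xb      # neighbour toward the pelvis
        below_v = v[i - 1] if i > 0 else vb
        f = -k * (x[i] - below_x) - c * (v[i] - below_v)
        if i < n - 1:                            # coupling to the body above
            f += k * (x[i + 1] - x[i]) + c * (v[i + 1] - v[i])
        a[i] = f / m - base_accel(t)             # inertial term from base motion
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (0.0, 0.2), np.zeros(2 * n), max_step=1e-4)
print(sol.y[:n, -1])                             # relative displacement of each mass
```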

2962 Profile Controlled Gold Nanostructures Fabricated by Nanosphere Lithography for Localized Surface Plasmon Resonance

Authors: Xiaodong Zhou, Nan Zhang

Abstract:

Localized surface plasmon resonance (LSPR) is the coherent oscillation of conduction electrons confined in noble metal nanoparticles excited by electromagnetic radiation, and nanosphere lithography (NSL) is one of the most cost-effective methods to fabricate metal nanostructures for LSPR. NSL can be categorized into two major groups: dispersed NSL and closely packed NSL. In recent years, gold nanocrescents and gold nanoholes with vertical sidewalls fabricated by dispersed NSL, and silver nanotriangles and gold nanocaps on silica nanospheres fabricated by closely packed NSL, have been reported for LSPR biosensing. This paper introduces several novel gold nanostructures fabricated by NSL for LSPR applications, including 3D nanostructures obtained by evaporating gold obliquely onto dispersed nanospheres, nanoholes with slanted sidewalls, and patchy nanoparticles on closely packed nanospheres, all of which provide satisfactory sensitivity for LSPR sensing. Since the LSPR spectrum is very sensitive to the shape of the metal nanostructures, formulas are derived and software is developed for calculating the profiles of the metal nanostructures obtainable by NSL, for different nanosphere masks under different fabrication conditions. The simulated profiles coincide well with the profiles of the fabricated gold nanostructures observed under a scanning electron microscope (SEM) and an atomic force microscope (AFM), which proves that the software is a useful tool for the process design of different LSPR nanostructures.

Keywords: Nanosphere lithography, localized surface plasmon resonance, biosensor, simulation.

2961 UML Modeling for Instruction Pipeline Design

Authors: Vipin Saxena, Deepa Raj

Abstract:

The Unified Modeling Language (UML) is one of the important modeling languages used for the visual representation of a research problem. In the present paper, a UML model is designed for the instruction pipeline, which is used for the evaluation of the instructions of software programs. The class and sequence diagrams are designed, and performance is evaluated for the instructions of a sample program through a case study.

Keywords: UML, Instruction Pipeline, Class Diagram & Sequence Diagram.

2960 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modeling and Solving

Authors: Yasin Tadayonrad, Alassane Ballé Ndiaye

Abstract:

Most supply chain networks have many nodes, from the suppliers' side to the customers' side, and each node sends/receives raw materials/products from/to other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network, by which all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity in each source/destination node at each time slot (e.g., per week/day/hour). Because of the different characteristics of different products/groups of products, the capacity of each node might differ for each group of products. In most supply chain networks (especially in the fast-moving consumer goods (FMCG) industry), different planners/planning teams work separately in different nodes to determine the loading/unloading timeslots in source/destination nodes to send/receive the shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering the lead times), and the working days of each node. The model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm is proposed as a way of improving the solution method, helping to apply the model to larger data sets from real business cases, including more nodes and shipments.

Keywords: Supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming.
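
Since the paper solves its model with Python-MIP, a toy version of the formulation is sketched below: assign shipments to loading timeslots under a slot capacity, minimizing lateness against due slots. The data and the single-capacity simplification are invented.

```python
from mip import Model, xsum, minimize, BINARY

# Toy scheduling MIP: each shipment gets exactly one loading slot, at most
# `cap` loads per slot, minimizing total lateness beyond each due slot.
n_ship, n_slots, cap = 4, 5, 1
due = [1, 2, 2, 4]                               # latest on-time slot per shipment

m = Model()
x = [[m.add_var(var_type=BINARY) for t in range(n_slots)] for s in range(n_ship)]
for s in range(n_ship):
    m += xsum(x[s][t] for t in range(n_slots)) == 1          # ship exactly once
for t in range(n_slots):
    m += xsum(x[s][t] for s in range(n_ship)) <= cap         # slot capacity
m.objective = minimize(xsum(max(t - due[s], 0) * x[s][t]
                            for s in range(n_ship) for t in range(n_slots)))
m.optimize()
schedule = [next(t for t in range(n_slots) if x[s][t].x >= 0.99) for s in range(n_ship)]
print(schedule, m.objective_value)               # chosen slots and total lateness
```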

2959 Discrete Element Modeling of the Effect of Particle Shape on Creep Behavior of Rockfills

Authors: Yunjia Wang, Zhihong Zhao, Erxiang Song

Abstract:

Rockfills are widely used in civil engineering, such as in dams, railways, and airport foundations in mountain areas. A significant long-term post-construction settlement may affect the serviceability or even the safety of rockfill infrastructures. The creep behavior of rockfills is influenced by a number of factors, such as particle size, strength and shape, water condition and stress level. However, the effect of particle shape on rockfill creep remains poorly understood and deserves careful investigation. The particle-based discrete element method (DEM) was used to simulate the creep behavior of rockfills under different boundary conditions. Both angular and rounded particles were considered in this numerical study, in order to investigate the influence of particle shape. The preliminary results showed that angular particles experience more breakage and larger creep strains under one-dimensional compression than rounded particles. On the contrary, larger creep strains were observed in the rounded specimens in the direct shear test. The mechanism responsible for this difference is that the possibility of key particles existing is higher among rounded particles than among angular particles. The above simulations demonstrate that the influence of particle shape on the creep behavior of rockfills can be properly simulated by DEM. The DEM simulation method may facilitate our understanding of the deformation properties of rockfill materials.

Keywords: Rockfills, creep behavior, particle crushing, discrete element method, boundary conditions.

2958 Target Detection using Adaptive Progressive Thresholding Based Shifted Phase-Encoded Fringe-Adjusted Joint Transform Correlator

Authors: Inder K. Purohit, M. Nazrul Islam, K. Vijayan Asari, Mohammad A. Karim

Abstract:

A new target detection technique is presented in this paper for the identification of small boats in coastal surveillance. The proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene to separate any objects present in the scene from the background. The preprocessing step results in an image containing only the foreground objects, such as boats, trees and other cluttered regions, and hence reduces the search region for the correlation step significantly. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform correlator (SPFJTC), which produces a single, delta-like correlation peak for a potential target present in the input scene. A post-processing step uses a peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results are presented to show that the proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.

Keywords: Adaptive progressive thresholding, fringe adjusted filters, image segmentation, joint transform correlation, synthetic discriminant function
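
The APT scheme itself is not detailed in the abstract; as a stand-in, the sketch below applies a classic iterative threshold (Ridler-Calvard style) to separate a synthetic bright object from the background before any correlation step.

```python
import numpy as np

# Iterative thresholding: alternate between classifying pixels and moving
# the threshold to the midpoint of the two class means. The "scene" is a
# synthetic sea background with one bright boat-like patch.
def iterative_threshold(img, eps=0.5):
    t = img.mean()                               # initial guess
    while True:
        fg, bg = img[img > t], img[img <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())    # midpoint of class means
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

rng = np.random.default_rng(4)
scene = rng.normal(50, 10, (64, 64))             # dark sea background
scene[20:30, 20:40] += 120                       # a bright boat-like object
t = iterative_threshold(scene)
print(t, (scene > t).sum())                      # threshold and foreground pixel count
```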

2957 Bioprocess Intelligent Control: A Case Study

Authors: Mihai Caramihai, Ana A. Chirvase, Irina Severin

Abstract:

Bioprocesses are considered difficult to control because their dynamic behavior is highly nonlinear and time-varying, in particular when they are operated in fed-batch mode. The research objective of this study was to develop an appropriate control method for a complex bioprocess and to implement it on a laboratory plant. Hence, an intelligent control structure has been designed in order to produce biomass and to maximize the specific growth rate.

Keywords: Fed-batch bioprocess, mass-balance model, fuzzy control.

2956 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given other predictors x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
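
The sketch below illustrates only the augmented-latent construction [zP, zN] on synthetic data, using a logistic-regression score as a loose stand-in for the posterior-derived zP; the trained one-to-one flow between y and z is omitted entirely.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Build an augmented latent z = [zP, zN]: zP is a 1-D supervised component
# from an elementary predictive model (here a logistic-regression score),
# and zN is an independent Gaussian block for x-unrelated variation.
# Data are synthetic; this is not the paper's flow model.
rng = np.random.default_rng(5)
labels = rng.integers(0, 2, 300)                          # the predictor x
y = rng.normal(0.0, 1.0, (300, 20)) + labels[:, None] * 1.5   # high-dim response
clf = LogisticRegression().fit(y, labels)

z_p = clf.decision_function(y)[:, None]          # supervised dimension reduction
z_n = rng.standard_normal((300, 19))             # x-unrelated Gaussian block
z = np.hstack([z_p, z_n])                        # augmented latent [zP, zN]
print(z.shape, np.corrcoef(z_p.ravel(), labels)[0, 1])  # zP tracks the label
```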

2955 Spacecraft Neural Network Control System Design using FPGA

Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah

Abstract:

Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products for space technologies. A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device that owns reprogrammable properties and robust flexibility. For neural network based instrument prototypes in real-time applications, conventional specific VLSI neural chip design suffers from limitations in time and cost. With low-precision artificial neural network design, FPGAs offer higher speed and smaller size for real-time applications than VLSI and DSP chips. Hence, many researchers have made great efforts towards the realization of neural networks (NNs) using the FPGA technique. In this paper, a brief introduction to ANNs and the FPGA technique is given. Also, VHDL code has been proposed to implement ANNs and to present simulation results with floating-point arithmetic. Synthesis results for the ANN controller are developed using Precision RTL. The proposed VHDL implementation creates a flexible, fast method and a high degree of parallelism for implementing ANNs. The implementation of the multi-layer NN using lookup tables (LUTs) reduces the resource utilization and the execution time.

Keywords: Spacecraft, neural network, FPGA, VHDL.
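
A software sketch of the LUT idea is shown below: the sigmoid activation is replaced by a precomputed lookup table over a quantized input range, as one would burn into FPGA logic; the table size and input range are illustrative choices, not the paper's.

```python
import numpy as np

# Replace the sigmoid with a small lookup table (LUT) over [-8, 8], the
# software analogue of a LUT-based activation in FPGA logic.
LUT_BITS = 8
lut_in = np.linspace(-8.0, 8.0, 2 ** LUT_BITS)            # quantized input grid
lut_out = 1.0 / (1.0 + np.exp(-lut_in))                   # precomputed sigmoid

def sigmoid_lut(x):
    idx = np.clip(((x + 8.0) / 16.0 * (2 ** LUT_BITS - 1)).astype(int),
                  0, 2 ** LUT_BITS - 1)                    # index into the table
    return lut_out[idx]

def layer(x, W, b):
    return sigmoid_lut(W @ x + b)                          # one NN layer, LUT activation

rng = np.random.default_rng(6)
x = rng.normal(size=4)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
print(layer(x, W1, b1))                                    # activations from the LUT
print(np.abs(layer(x, W1, b1) - 1/(1+np.exp(-(W1@x+b1)))).max())  # quantization error
```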

2954 Churn Prediction: Does Technology Matter?

Authors: John Hadden, Ashutosh Tiwari, Rajkumar Roy, Dymitr Ruta

Abstract:

The aim of this paper is to identify the most suitable model for churn prediction based on three different techniques. The paper identifies the variables that affect churn with reference to customer complaints data and provides a comparative analysis of neural networks, regression trees and regression in terms of their capabilities for predicting customer churn.

Keywords: Churn, Decision Trees, Neural Networks, Regression.
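
A minimal scikit-learn analogue of the three-way comparison is sketched below on synthetic data; the authors' complaint-data features and preprocessing are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Compare the three model families from the paper (regression, tree, neural
# network) on a synthetic stand-in for churn data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=5),
              MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)):
    print(type(model).__name__, model.fit(Xtr, ytr).score(Xte, yte))
```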

2953 Modeling of Radiative Heat Transfer in 2D Complex Heat Recuperator of Biomass Pyrolysis Furnace: A Study of Baffles Shadow and Soot Volume Fraction Effects

Authors: Mohamed Ammar Abbassi, Kamel Guedri, Mohamed Naceur Borjini, Kamel Halouani, Belkacem Zeghmati

Abstract:

The radiative heat transfer problem is investigated numerically for a 2D complex-geometry biomass pyrolysis reactor composed of two pyrolysis chambers and a heat recuperator. The fumes are a mixture of carbon dioxide and water vapor charged with absorbing and scattering particles and soot. In order to increase the gases' residence time and the heat transfer, the heat recuperator has a complex geometry and is provided with many inclined, vertical and horizontal baffles of finite thickness, which are diffuse and grey. The Finite Volume Method (FVM) is applied to study radiative heat transfer, and the blocked-off region procedure is used to treat the geometrical irregularities. Eight cases are considered in order to demonstrate the effect of adding baffles to the walls of the heat recuperator and of the pyrolysis chambers, and then to choose the best case giving the maximum heat flux transferred to the biomass in the pyrolysis chambers. The ray effect due to the presence of baffles is studied and demonstrated to have a crucial effect on the radiative heat flux on the walls of the pyrolysis chambers. The shadow effect caused by the presence of the baffles is also studied. Non-grey radiative heat transfer is studied for the actual configuration, using the Weighted Sum of Grey Gases (WSGG) model of Kim and Song as the non-grey model. The effect of the soot volume fraction on the non-grey radiative heat flux is investigated and discussed.

Keywords: Baffles, Blocked-off region procedure, FVM, Heat recuperation, Radiative heat transfer, Shadow effect.

2952 Analyzing Façade Scenarios and Daylight Levels in the Reid Building: A Reflective Case Study on the Designed Daylight under Overcast Sky

Authors: Eman Mayah, Raid Hanna

Abstract:

This study presents the use of daylight in the case study of the Reid Building at the Glasgow School of Art in the city of Glasgow, UK. In Nordic countries, daylight is one of the main considerations within building design, especially in the face of long, lightless winters. A shortage of daylight, contributing to dark and gloomy conditions, necessitates designs that incorporate strong daylight performance. As such, the building in question is designed to capture natural light for varying needs, with studios located on the north and south façades. The study's approach presents an analysis of different façade scenarios, where daylight from the north is observed, analyzed and compared with daylight from the south façade for various design studios in the building. The findings are then correlated with the daylight levels obtained from a daylight simulation program (Autodesk Ecotect Analysis) for the investigated studios. The study finds a dramatic difference in daylight character and levels between the north and south façades, where orientation, obstructions and the designed façade fenestrations have major effects on the findings. The study concludes that some of the studios positioned on the north façade do not have the desirable quality of diffused northern light, due to obstruction by outside buildings, the area and volume of each studio, and the shadow effect of the mezzanine floor designed into the studios.

Keywords: Daylight levels, educational building, façade fenestration, overcast weather.

2951 Elastic-Plastic Contact Analysis of Single Layer Solid Rough Surface Model using FEM

Authors: A. Megalingam, M. M. Mayuram

Abstract:

Evaluation of the contact pressure and the surface and subsurface contact stresses is essential to understanding the functional response of surface coatings; the contact behavior mainly depends on the surface roughness, the material properties, the layer thickness and the manner of loading. Contact parameter evaluation of real rough-surface contacts mostly relies on statistical single-asperity contact approaches. In this work, a three-dimensional layered solid with a rough surface in contact with a rigid flat is modeled and analyzed using the finite element method. The rough surface of the layered solid is generated by an FFT approach. The generated rough surface is exported to the finite element based ANSYS package, in which bottom-up solid modeling is employed to create a deformable solid model with a layered rough surface on top. The discretization and contact analysis are carried out using the same ANSYS package. The elastic, elastoplastic and plastic deformations are continuous in the present finite element method, unlike in many other contact models. The ratio of the layer's Young's modulus to its yield strength is varied in the present work to observe its effect on the contact parameters, while keeping the surface roughness and the substrate material properties constant. The contacting asperities attain elastic, elastoplastic and plastic states continuously, and asperity interaction phenomena are inherently included. The resulting contact parameters show that neighboring-asperity interaction and the ratio of the layer's Young's modulus to its yield strength influence the bulk deformation and consequently affect the interface strength.

Keywords: Asperity interaction, finite element method, rough surface contact, single layered solid
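
The FFT approach to generating a random rough surface can be sketched as below: white noise is filtered in the frequency domain with a Gaussian spectral filter and rescaled to a target RMS roughness; the grid size, correlation length and roughness values are illustrative.

```python
import numpy as np

# Generate an isotropic random rough surface by filtering white noise in
# the frequency domain (Gaussian autocorrelation), then scale to a target
# RMS height. All numeric choices below are illustrative.
rng = np.random.default_rng(7)
N, clx, sigma = 256, 8.0, 0.5                    # points, correlation length, RMS
noise = rng.standard_normal((N, N))

fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-(np.pi * clx) ** 2 * (FX ** 2 + FY ** 2))   # Gaussian spectral filter
z = np.real(np.fft.ifft2(np.fft.fft2(noise) * H))
z *= sigma / z.std()                             # enforce the target RMS height
print(z.std(), z.shape)                          # ~0.5, (256, 256)
```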
