Search results for: Immersed Boundary Method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8528

4628 The Significance of Awareness about Gender Diversity for the Future of Work: A Multi-Method Study of Organizational Structures and Policies Considering Trans and Gender Diversity

Authors: Robin C. Ladwig

Abstract:

The future of work is becoming less predictable, which requires increasing adaptability of organizations to social and work changes. Society is transforming regarding gender identity, in the sense that more people come forward to identify as trans and gender diverse (TGD). Organizations that lack inclusive organizational structures are ill-equipped to provide a safe and encouraging work environment. This qualitative multi-method research about TGD inclusivity in the workplace explores the enablers of and barriers to TGD individuals engaging satisfactorily in the work environment and organizational culture. Furthermore, these TGD insights are analyzed in terms of organizational implications and awareness from a leadership and management perspective. The semi-structured online interviews with TGD individuals and the photo-elicitation open-ended questionnaire addressed to leadership and management in diversity, career development, and human resources were analyzed with a critical grounded theory approach. The findings demonstrate the significance of TGD voices, the support of leadership and management, as well as the synergy between voices and leadership. Hence, the study indicates practical implications such as the revision of exclusive language used in policies, data collection, or communication, and the reconsideration of organizational decision-making by leaders to include TGD voices.

Keywords: Future of work, occupational identity, organizational decision-making, trans and gender diverse identity.

Downloads: 465
4627 A Microcontroller Implementation of Constrained Model Predictive Control

Authors: Amira Kheriji Abbes, Faouzi Bouani, Mekki Ksouri

Abstract:

Model Predictive Control (MPC) is an established control technique in a wide range of process industries. The reason for this success is its ability to handle multivariable systems and systems having input, output or state constraints. Nevertheless, compared to the PID controller, the implementation of MPC in miniaturized devices like Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very small scale, due to its complexity of implementation and its computation time requirement. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. In this work, we take advantage of recent advances in this area in the deployment of one of the most studied and applied control techniques in industrial engineering. In this paper, we propose an efficient firmware for the implementation of constrained MPC on the STM32 microcontroller using the interior point method. Indeed, the performance study shows good execution speed and low computational burden. These results encourage the development of predictive control algorithms to be programmed for industrial standard processes. The PID anti-windup controller was also implemented on the STM32 in order to make a performance comparison with the MPC. The main features of the proposed constrained MPC framework are illustrated through two examples.
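
For illustration, the sketch below sets up a single constrained MPC step as a quadratic program on a desktop machine. It is not the STM32 firmware or the interior point routine described in the abstract; the plant matrices, horizon, weights and input bound are assumed values, and cvxpy stands in for the embedded solver.

```python
# Illustrative sketch only: one constrained MPC step for a toy 2-state plant,
# solved as a QP with cvxpy. The STM32 firmware in the paper uses a dedicated
# interior point routine instead.
import numpy as np
import cvxpy as cp

# Hypothetical discrete-time plant x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
N = 10                      # prediction horizon
x0 = np.array([1.0, 0.0])   # current state
x_ref = np.array([0.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1] - x_ref) + 0.01 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0]    # input constraint

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])    # only u[0] is applied (receding horizon)
```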

Keywords: Embedded software, microcontroller, constrained Model Predictive Control, interior point method, PID anti-windup, Keil tool, C/Cµ language.

Downloads: 2798
4626 Study of Punching Shear of Steel Fiber Reinforced Self Compacting Concrete Slabs by Nonlinear Analysis

Authors: Khaled S. Ragab

Abstract:

This paper deals with the behavior and punching shear capacity of flat slabs produced from steel fiber reinforced self compacting concrete (SFRSCC), using the nonlinear finite element method. Nonlinear finite element analysis of nine slab specimens was carried out using the ANSYS software. A general description of the finite element method and the theoretical modeling of concrete and reinforcement are presented. The nonlinear finite element analysis program ANSYS is utilized owing to its capabilities to predict either the response of reinforced concrete slabs in the post-elastic range or the ultimate strength of flat slabs produced from steel fiber reinforced self compacting concrete (SFRSCC). In order to verify the analytical model used in this research against the experimental data, the finite element analyses were performed; then a parametric study of the effects of the flexural reinforcement ratio, the upper reinforcement ratio, and the volume fraction of steel fibers was investigated. A comparison between the experimental results and those predicted by the existing models is presented. Results and conclusions that may be useful for designers have been drawn and presented.

Keywords: Nonlinear FEM, Punching shear behavior, Flat slabs and Steel fiber reinforced self compacting concrete (SFRSCC).

Downloads: 4257
4625 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate the way the network is managed, measured, debugged and controlled dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive methods. An active measurement method injects test packets into the network in order to monitor their behaviour (the ping tool is an example). Meanwhile, a passive measurement method monitors the traffic for the purpose of deriving measurement values. The measurement methods, both active and passive, are useful for the collection of traffic statistics and for monitoring the network traffic. Although there has been work focusing on measuring traffic statistics in an SDN environment, it was only meant for measuring packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes in a certain time, and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests. Thus, this work is more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and the Wireshark interface. The statistics displayed on the CLI and Wireshark interfaces include the type of protocol, the number of bytes and the number of packets, among others. Besides, this module shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated from and to hosts. In order to carry out this work effectively, our Python module sends a statistics request message to the switch, requesting its current port and flow statistics every five seconds, while the switch replies with the required information in a message called the statistics reply message. Thus, the POX controller is notified and updated with any changes that could happen in the entire network within a very short time. Therefore, the aim of this study is to prepare a list of the important statistics elements that are collected from the whole network, to be used for further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, like Distributed Denial of Service (DDoS).
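
As a rough illustration of such a polling module, the sketch below assumes POX's OpenFlow 1.0 statistics API (ofp_stats_request, FlowStatsReceived) and the five-second interval mentioned above; the web/non-web split on TCP port 80 and the logging format are illustrative choices, not the authors' actual module.

```python
# Minimal sketch of a POX module that polls flow and port statistics
# every five seconds, assuming POX's OpenFlow 1.0 stats API.
from pox.core import core
import pox.openflow.libopenflow_01 as of
from pox.lib.recoco import Timer

log = core.getLogger()

def _poll_stats():
    # Ask every connected switch for its current flow and port statistics.
    for connection in core.openflow.connections:
        connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))
        connection.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_flow_stats(event):
    # Separate web traffic (TCP destination port 80) from the rest and log a summary.
    web_bytes = non_web_bytes = 0
    for f in event.stats:
        if f.match.nw_proto == 6 and f.match.tp_dst == 80:
            web_bytes += f.byte_count
        else:
            non_web_bytes += f.byte_count
    log.info("flows=%d web_bytes=%d non_web_bytes=%d",
             len(event.stats), web_bytes, non_web_bytes)

def launch():
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
    Timer(5, _poll_stats, recurring=True)   # statistics request every 5 seconds
```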

Keywords: Mininet, OpenFlow, POX controller, SDN.

Downloads: 2971
4624 Surface Flattening Assisted with 3D Mannequin Based On Minimum Energy

Authors: Shih-Wen Hsiao, Rong-Qi Chen, Chien-Yu Lin

Abstract:

The topic of surface flattening plays a vital role in the field of computer-aided design and manufacture. Surface flattening enables the production of 2D patterns, and it can be used in design and manufacturing for developing a 3D surface onto a 2D platform, especially in fashion design. This study describes surface flattening based on minimum energy methods according to the properties of different fabrics. Firstly, using the geometric features of a 3D surface, the less deformed area can be flattened onto a 2D platform by geodesics. Then, the strain energy that has accumulated in the mesh can be stably released by an approximate implicit method and a revised error function. In some cases, cutting the mesh to further release the energy is a common way to fix the situation and enhance the accuracy of the surface flattening, and this makes the obtained 2D pattern naturally generate significant cracks. When this methodology is applied to a 3D mannequin constructed with feature lines, it enhances the level of computer-aided fashion design. Besides, when different fabrics are applied in fashion design, it is necessary to revise the shape of a 2D pattern according to the properties of the fabric. With this model, the outline of 2D patterns can be revised by distributing the strain energy, with different results according to different fabric properties. Finally, this research uses some common design cases to illustrate and verify the feasibility of this methodology.

Keywords: Surface flattening, Strain energy, Minimum energy, approximate implicit method, Fashion design.

Downloads: 2599
4623 Space Vector Pulse Width Modulation Technique Based Design and Simulation of a Three-Phase Voltage Source Converter Systems

Authors: Farhan Beg

Abstract:

A Space Vector based Pulse Width Modulation control technique for the three-phase PWM converter is proposed in this paper. The proposed control scheme is based on a synchronous reference frame model. High performance and efficiency are obtained with regard to the DC bus voltage and the power factor considerations of the PWM rectifier, thus leading to low losses. MATLAB/SIMULINK is used as the platform for the simulations, and a SIMULINK model is presented in the paper. The results show that the proposed model demonstrates better performance and properties compared to the traditional SPWM method, and that the method improves the dynamic performance of the closed loop drastically. In the carrier-based comparison used by SPWM, the sine signal is the reference waveform and the triangle waveform is the carrier waveform. When the value of the sine signal is larger than that of the triangle signal, the pulse output goes high; when the triangular signal is higher than the sine signal, the pulse output goes low. The SPWM output is changed by changing the modulation index and the frequency used in the system to produce more pulses per period. The more pulses produced, the lower the harmonic content of the output voltage and the higher the resolution.
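
The carrier comparison described above can be sketched in a few lines; the fundamental and carrier frequencies and the modulation index below are illustrative values only.

```python
# Toy sketch of carrier-based sinusoidal PWM: the gate signal is high
# whenever the sine reference exceeds the triangular carrier.
import numpy as np
from scipy import signal

f_ref = 50.0        # reference (fundamental) frequency in Hz
f_carrier = 1500.0  # triangular carrier frequency in Hz
m = 0.8             # modulation index (assumed value)

t = np.linspace(0.0, 0.04, 20000)                                 # two fundamental periods
reference = m * np.sin(2 * np.pi * f_ref * t)                     # sine reference
carrier = signal.sawtooth(2 * np.pi * f_carrier * t, width=0.5)   # triangle carrier in [-1, 1]

pwm = (reference > carrier).astype(float)                         # 1 = pulse high, 0 = pulse low
print("duty cycle over the window: %.2f" % pwm.mean())
```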

Keywords: Power Factor, SVPWM, PWM rectifier, SPWM.

Downloads: 4023
4622 Variable vs. Fixed Window Width Code Correlation Reference Waveform Receivers for Multipath Mitigation in Global Navigation Satellite Systems with Binary Offset Carrier and Multiplexed Binary Offset Carrier Signals

Authors: Fahad Alhussein, Huaping Liu

Abstract:

This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window width, for binary offset carrier and multiplexed binary offset carrier signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the distortion on the discriminator caused by multipath is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained with the incoming signal) and the local discriminator (generated with a local copy of the signal). The variable window width has shown better performance compared to the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is perfectly suitable for implementation in low-cost receivers.

Keywords: Correlation reference waveform receivers, binary offset carrier, multiplexed binary offset carrier, global navigation satellite systems.

Downloads: 483
4621 Development of Integrated GIS Interface for Characteristics of Regional Daily Flow

Authors: Ju Young Lee, Jung-Seok Yang, Jaeyoung Choi

Abstract:

This paper primarily intends to develop a GIS interface for estimating sequences of stream-flows at ungauged stations based on known flows at gauged stations. The integrated GIS interface is composed of three major steps. The first step, the statistical analysis of precipitation characteristics, is the procedure for building a multiple linear regression equation to obtain the long-term mean daily flow at ungauged stations. The independent variables in the regression equation are the mean daily flow and the drainage area. Traditionally, mean flow data are generated by using the Thiessen polygon method. However, the method for obtaining mean flow data can be selected by the user, such as Kriging, IDW (Inverse Distance Weighting) and Spline methods, as well as other traditional methods. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs at gauged stations. Then, the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain watershed/topographic characteristics, which are the most important factors that govern stream-flows. In summary, the simulated daily flow time series are compared with the observed time series. The results obtained using the integrated GIS interface are closely similar and fit each other well. Also, the relationship between the topographic/watershed characteristics and the stream flow time series is highly correlated.
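
As an example of one of the user-selectable interpolation options mentioned above, the following is a minimal Inverse Distance Weighting sketch; the station coordinates and flow values are made up for illustration.

```python
# Small sketch of Inverse Distance Weighting (IDW): estimate the mean daily
# flow at an ungauged point from surrounding gauged stations. Coordinates
# and flow values below are purely illustrative.
import numpy as np

def idw(xy_gauged, values, xy_target, power=2.0):
    d = np.linalg.norm(xy_gauged - xy_target, axis=1)
    if np.any(d == 0):                 # target coincides with a gauge
        return values[d == 0][0]
    w = 1.0 / d ** power               # inverse-distance weights
    return np.sum(w * values) / np.sum(w)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [7.0, 6.0]])   # km
mean_flow = np.array([12.5, 9.8, 15.2, 11.0])                            # m^3/s
print(idw(stations, mean_flow, np.array([4.0, 3.0])))
```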

Keywords: Integrated GIS interface, spatial interpolation algorithm, FDC.

Downloads: 1510
4620 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Authors: Jaeyoung Lee

Abstract:

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary with time and season. Image segmentation using deep learning, which has recently evolved rapidly, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades performance in embedded processor environments equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSPs), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, in order to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA. By increasing the number of parallel branches, the lack of information caused by fixing the number of channels is resolved. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA is fixed, normal convolution may be more efficient than depthwise separable convolution, depending on the memory access overhead. Thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using the extended atrous spatial pyramid pooling (ASPP). The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high-quality contexts using the restored shape, without global average pooling paths, since that layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), which is the highest recognition rate among embedded networks on the Cityscapes validation set.

Keywords: Edge network, embedded network, MMA, matrix multiplication accelerator and semantic segmentation network.

Downloads: 467
4619 Silicon-To-Silicon Anodic Bonding via Intermediate Borosilicate Layer for Passive Flow Control Valves

Authors: Luc Conti, Dimitry Dumont-Fillon, Harald van Lintel, Eric Chappel

Abstract:

Flow control valves comprise a silicon flexible membrane that deflects against a substrate, usually made of glass, containing pillars, an outlet hole, and anti-stiction features. However, there is a strong interest in using silicon instead of glass as substrate material, as it would simplify the process flow by allowing the use of well controlled anisotropic etching. Moreover, specific devices demanding a bending of the substrate would also benefit from the inherent outstanding mechanical strength of monocrystalline silicon. Unfortunately, direct Si-Si bonding is not easily achieved with highly structured wafers since residual stress may prevent the good adhesion between wafers. Using a thermoplastic polymer, such as parylene, as intermediate layer is not well adapted to this design as the wafer-to-wafer alignment is critical. An alternative anodic bonding method using an intermediate borosilicate layer has been successfully tested. This layer has been deposited onto the silicon substrate. The bonding recipe has been adapted to account for the presence of the SOI buried oxide and intermediate glass layer in order not to exceed the breakdown voltage. Flow control valves dedicated to infusion of viscous fluids at very high pressure have been made and characterized. The results are compared to previous data obtained using the standard anodic bonding method.

Keywords: Anodic bonding, evaporated glass, microfluidic valve, drug delivery.

Downloads: 855
4618 Influence of Selected Finishing Technologies on the Roughness Parameters of Stainless Steel Manufactured by Selective Laser Melting Method

Authors: J. Hajnys, M. Pagac, J. Petru, P. Stefek, J. Mesicek, J. Kratochvil

Abstract:

The new progressive method of 3D metal printing, SLM (Selective Laser Melting), is increasingly expanding into normal operation. As a result, greater demands are placed on the surface quality of the parts produced in this way. The article deals with research on selected finishing methods (tumbling, face milling, sandblasting, shot peening and brushing) and their impact on the final surface roughness. The 20 x 20 x 7 mm specimens produced using SLM additive technology on the Renishaw AM400 were subjected to testing of these finishing methods with various parameter settings. The surface roughness parameters Sa and Sz were chosen as the evaluation criteria, and the profile parameters Ra and Rz were used as additional measurements. Optical measurement of surface roughness was performed on an Alicona Infinite Focus 5. An experiment conducted to optimize the surface roughness revealed, as expected, that the best roughness parameters were achieved through a face milling operation. Tumbling is particularly suitable for 3D printed components, as tumbling media are able to reach even complex shapes and, after changing to polishing bodies, achieve a high surface gloss. Surface quality after tumbling depends on the process time. Other methods with satisfactory results are shot peening and tumbling, which should be the focus of further research.

Keywords: Additive manufacturing, selective laser melting, surface roughness, stainless steel.

Downloads: 973
4617 Probabilistic Method of Wind Generation Placement for Congestion Management

Authors: S. Z. Moussavi, A. Badri, F. Rastegar Kashkooli

Abstract:

Wind farms (WFs) with a high level of penetration are being established in power systems worldwide more rapidly than other renewable resources. The Independent System Operator (ISO), as a policy maker, should propose appropriate places for WF installation in order to maximize the benefits for the investors. There is also a possibility of congestion relief using the new installation of WFs, which should be taken into account by the ISO when proposing the locations for WF installation. In this context, an efficient wind farm (WF) placement method is proposed in order to reduce burdens on congested lines. Since the wind speed is a random variable and load forecasts also contain uncertainties, probabilistic approaches are used for this type of study. The AC probabilistic optimal power flow (P-OPF) is formulated and solved using Monte Carlo Simulations (MCS). In order to reduce computation time, point estimate methods (PEM) are introduced as an efficient alternative to the time-demanding MCS. Subsequently, the optimal WF placement is determined using generation shift distribution factors (GSDF), considering a new parameter entitled the wind availability factor (WAF). In order to obtain more realistic results, N-1 contingency analysis is employed to find the optimal size of the WF by means of line outage distribution factors (LODF). The IEEE 30-bus test system is used to show and compare the accuracy of the proposed methodology.

Keywords: Probabilistic optimal power flow, wind power, point estimate methods, congestion management.

Downloads: 1888
4616 Optimum Surface Roughness Prediction in Face Milling of High Silicon Stainless Steel

Authors: M. Farahnakian, M.R. Razfar, S. Elhami-Joosheghan

Abstract:

This paper presents an approach for the determination of the optimal cutting parameters (spindle speed, feed rate, depth of cut and engagement) leading to minimum surface roughness in face milling of high silicon stainless steel, by coupling a neural network (NN) and the Electromagnetism-like Algorithm (EM). In this regard, the advantages of statistical experimental design techniques, experimental measurements, artificial neural networks, and the Electromagnetism-like optimization method are exploited in an integrated manner. To this end, numerous experiments on this stainless steel were conducted to obtain surface roughness values. A predictive model for surface roughness was created by using a backpropagation neural network, and then the optimization problem was solved by using EM optimization. Additional experiments were performed to validate the optimum surface roughness value predicted by the EM algorithm. It is clearly seen that a good agreement is observed between the values predicted by the EM algorithm coupled with the feedforward neural network and the experimental measurements. The obtained results show that the EM algorithm coupled with a backpropagation neural network is an efficient and accurate method for approaching the global minimum of surface roughness in face milling.

Keywords: Cutting parameters, face milling, surface roughness, artificial neural network, Electromagnetism-like algorithm.

Downloads: 2586
4615 Qualitative and Quantitative Characterization of Generated Waste in Nouri Petrochemical Complex, Assaluyeh, Iran

Authors: L. Heidari, M. Jalili Ghazizade

Abstract:

In recent years, different petrochemical complexes have been established to produce aromatic compounds. Among them, Nouri Petrochemical Complex (NPC) is the largest producer of aromatic raw materials in the world, and it is located in the south of Iran. Environmental concerns have been raised in this region due to the generation of different types of solid waste in the process of aromatics production, and subsequently, industrial waste characterization has been thoroughly considered. The aim of this study is the qualitative and quantitative characterization of the industrial waste generated in the aromatics production process and the determination of the best method for industrial waste management. For this purpose, all industrial waste generated during the production process was determined using a checklist. Four main industrial wastes were identified as follows: spent industrial soil, spent catalyst, spent molecular sieves and spent N-formyl morpholine (NFM) solvent. The amounts of heavy metals and organic compounds in these wastes were further measured in order to identify the nature and toxicity of such dangerous compounds. The industrial wastes were then classified based on the laboratory analysis results as well as on different international lists for hazardous waste identification, such as those of the EPA, UNEP and the Basel Convention. Finally, the best method of waste disposal is selected based on environmental, economic and technical aspects.

Keywords: Spent industrial soil, spent molecular sieve, spent N-formyl morpholine (NFM) solvent.

Downloads: 849
4614 Application of Neural Network in User Authentication for Smart Home System

Authors: A. Joseph, D.B.L. Bong, D.A.A. Mat

Abstract:

Security has been an important issue and concern in smart home systems. Smart home networks consist of a wide range of wired or wireless devices, so there is a possibility that illegal access to some restricted data or devices may happen. Password-based authentication is widely used to identify authorized users, because this method is cheap, easy and quite accurate. In this paper, a neural network is trained to store the passwords instead of using a verification table. This method is useful in solving security problems that happen in some authentication systems. The conventional way to train the network using Backpropagation (BPN) requires a long training time. Hence, a faster training algorithm, Resilient Backpropagation (RPROP), is embedded into the MLP neural network to accelerate the training process. For the data part, 200 sets of user IDs and passwords were created and encoded into binary as the input. The simulation was carried out to evaluate the performance for different numbers of hidden neurons and combinations of transfer functions. The mean square error (MSE), training time and number of epochs are used to determine the network performance. From the results obtained, using Tansig and Purelin in the hidden and output layers with 250 hidden neurons gave the best performance. As a result, a password-based user authentication system for smart homes using a neural network has been developed successfully.
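
A conceptual sketch of the idea of storing credentials in a trained network is shown below; it uses scikit-learn's MLP with generated user ID/password pairs rather than the MATLAB RPROP/Tansig-Purelin setup of the paper, and the encoding length and network size are assumptions.

```python
# Conceptual sketch: encode user-ID/password pairs as binary vectors and
# train an MLP to recognise valid pairs, instead of using a verification table.
import numpy as np
from sklearn.neural_network import MLPClassifier

def to_bits(text, length=16):
    """ASCII-encode a string into a fixed-length bit vector (8 bits per char)."""
    bits = []
    for ch in text.ljust(length)[:length]:
        bits.extend(int(b) for b in format(ord(ch), "08b"))
    return bits

rng = np.random.default_rng(0)
valid = [("user%03d" % i, "pass%03d" % i) for i in range(200)]                    # 200 valid pairs
invalid = [("user%03d" % i, "pass%03d" % rng.integers(200, 400)) for i in range(200)]

X = [to_bits(u) + to_bits(p) for u, p in valid + invalid]
y = [1] * len(valid) + [0] * len(invalid)

clf = MLPClassifier(hidden_layer_sizes=(250,), activation="tanh",
                    max_iter=2000, random_state=0).fit(X, y)
print("prediction for (user007, pass007):", clf.predict([to_bits("user007") + to_bits("pass007")]))
print("prediction for (user007, pass321):", clf.predict([to_bits("user007") + to_bits("pass321")]))
```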

Keywords: Neural Network, User Authentication, Smart Home, Security

Downloads: 2040
4613 Chaotic Properties of Hemodynamic Response in Functional Near Infrared Spectroscopic Measurement of Brain Activity

Authors: Ni Ni Soe, Masahiro Nakagawa

Abstract:

Functional near infrared spectroscopy (fNIRS) is a practical non-invasive optical technique to detect characteristics of the hemoglobin density dynamics in response to functional activation of the cerebral cortex. In this paper, fNIRS measurements were made in the area of the motor cortex from the C4 position according to the international 10-20 system. Three subjects, aged 23-30 years, participated in the experiment. The aim of this paper was to evaluate the effects of different motor activation tasks on the hemoglobin density dynamics of the fNIRS signal. The chaotic concept based on deterministic dynamics is an important feature in biological signal analysis. This paper employs chaotic properties, a novel method of nonlinear analysis, to analyze and quantify the chaotic behavior in the time series of the hemoglobin dynamics for the various motor imagery tasks in the fNIRS signal. Usually, hemoglobin density in the human brain cortex is found to change slowly in time, and an inevitable noise caused by various factors is included in the signal. So, the principal component analysis method (PCA) is utilized to remove the high frequency components. The phase space is reconstructed, and the Lyapunov spectrum and Lyapunov dimensions are evaluated. From the experimental results, it can be concluded that the signals measured by fNIRS are chaotic.

Keywords: Chaos, hemoglobin, Lyapunov spectrum, motor imagery, near infrared spectroscopy (NIRS), principal component analysis (PCA).

Downloads: 1727
4612 An Optimal Load Shedding Approach for Distribution Networks with DGs considering Capacity Deficiency Modelling of Bulked Power Supply

Authors: A. R. Malekpour, A.R. Seifi

Abstract:

This paper discusses a genetic algorithm (GA) based optimal load shedding that can be applied to electrical distribution networks with and without dispersed generators (DG). The proposed method also has the ability to consider constant and variable capacity deficiency caused by unscheduled outages in the bulked generation and transmission system of the bulked power supply. The genetic algorithm (GA) is employed to search for the optimal load shedding strategy in distribution networks considering DGs, for two cases of constant and variable modelling of the bulked power supply of distribution networks. Electrical power distribution systems have a radial network and unidirectional power flows. With the advent of dispersed generation, the electrical distribution system has a locally looped network and bidirectional power flows. Therefore, DG installed in electrical distribution systems can cause operational problems and impact existing operational schemes. The introduction of DGs in electrical distribution systems has introduced many new issues at the operational and planning levels, and load shedding, as one of these operational issues, is no exception. The objective is to minimize the sum of the curtailed load and the system losses within the framework of the system operational and security constraints. The proposed method is tested on a radial distribution system with 33 load points for more practical applications.
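
To make the GA idea concrete, here is a toy sketch in which a binary chromosome selects which load points to shed and the fitness penalizes plans that do not cover an assumed capacity deficiency; the load values, deficiency and GA settings are illustrative and unrelated to the 33-load-point test system of the paper.

```python
# Toy sketch of GA-based load shedding: each bit of a chromosome decides
# whether the corresponding load point is shed. Fitness = curtailed load
# plus a penalty when the shed amount does not cover the capacity deficiency.
import random

LOADS = [60, 45, 30, 90, 75, 20, 55, 40]   # kW at each load point (illustrative)
DEFICIENCY = 150                            # assumed capacity deficiency in kW

def fitness(chrom):
    shed = sum(l for l, bit in zip(LOADS, chrom) if bit)
    penalty = 10 * max(0, DEFICIENCY - shed)   # infeasible if too little is shed
    return shed + penalty                      # lower is better

def evolve(pop_size=40, generations=100, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                       # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(LOADS))          # one-point crossover
            child = [1 - g if random.random() < p_mut else g   # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("shed plan:", best,
      "curtailed load:", sum(l for l, b in zip(LOADS, best) if b), "kW")
```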

Keywords: DG, Load shedding, Optimization, Capacity Deficiency Modelling.

Downloads: 1739
4611 Twitter Sentiment Analysis during the Lockdown in New Zealand

Authors: Smah Doeban Almotiri

Abstract:

One of the most common fields of natural language processing (NLP) is sentiment analysis. The feeling conveyed in a text can be successfully mined for various events using sentiment analysis. Twitter is viewed as a reliable data source for sentiment analytics studies, since people have been using social media to receive and exchange different types of data on a broad scale during the COVID-19 pandemic. The processing of such data may aid in making critical decisions on how to keep the situation under control. The aim of this research is to look at how sentiment states differed in a single geographic region during the lockdown at two different times. A total of 1,162 tweets related to the COVID-19 pandemic lockdown were analyzed using the keyword hashtags (lockdown, COVID-19); the first sample of tweets was from March 23, 2020, until April 23, 2020, and the second sample, for the following year, was from March 1, 2021, until April 4, 2021. Natural language processing (NLP), which is a form of artificial intelligence, was used in this research to calculate the sentiment value of all of the tweets by using the AFINN lexicon sentiment analysis method. The findings revealed that the sentiment condition at both times during the region's lockdown was positive in the samples of this study, which are unique to the specific geographical area of New Zealand. This research suggests applying a machine learning sentiment method such as Crystal Feel and extending the sample by using more tweets over a longer period of time.
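
A toy AFINN-style scoring sketch follows; the word list is a tiny hand-made stand-in for the real AFINN lexicon used in the study, and the example tweets are invented.

```python
# Toy AFINN-style scoring: each word carries an integer valence (-5..+5)
# and a tweet's score is the sum over its words.
import re

LEXICON = {"good": 3, "great": 3, "safe": 1, "happy": 3, "love": 3,
           "bad": -3, "sad": -2, "fear": -2, "crisis": -3, "death": -2}

def afinn_style_score(tweet):
    words = re.findall(r"[a-z']+", tweet.lower())
    return sum(LEXICON.get(w, 0) for w in words)

tweets = ["Staying home but feeling safe and happy during lockdown",
          "This lockdown is a crisis, so sad"]
for t in tweets:
    score = afinn_style_score(t)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{score:+d}  {label:8s} {t}")
```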

Keywords: Sentiment analysis, Twitter analysis, lockdown, COVID-19, AFINN, NodeJS.

Downloads: 585
4610 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be the appropriate choice for the interpolation of hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of moments weighted with probabilities (probability weighted moments, PWM), although this has often shown difficulty of convergence, or rather, convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood, in this case, is more demanding than in more usual estimation situations. The reasons for this lie in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates obtained with indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
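
Since the Wakeby distribution is defined through its quantile function, a minimal sketch of that function and of inverse-transform sampling from it is given below; the standard five-parameter form x(F) = ξ + (α/β)[1 − (1 − F)^β] − (γ/δ)[1 − (1 − F)^(−δ)] is assumed, and the parameter values are illustrative, not fitted to any precipitation record.

```python
# Minimal sketch of the 5-parameter Wakeby distribution via its quantile
# function x(F) = xi + (a/b)*(1-(1-F)**b) - (c/d)*(1-(1-F)**(-d)).
# Parameter values below are illustrative, not fitted to any record.
import numpy as np

def wakeby_quantile(F, xi, a, b, c, d):
    F = np.asarray(F, dtype=float)
    return xi + (a / b) * (1.0 - (1.0 - F) ** b) - (c / d) * (1.0 - (1.0 - F) ** (-d))

params = dict(xi=0.0, a=5.0, b=0.5, c=1.0, d=0.3)   # d > 0 gives a heavy upper tail

# Inverse-transform sampling: plug uniform variates into the quantile function.
rng = np.random.default_rng(1)
sample = wakeby_quantile(rng.uniform(size=10000), **params)
print("simulated precipitation-like sample: mean %.2f, 99th pct %.2f"
      % (sample.mean(), np.percentile(sample, 99)))
```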

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

Downloads: 675
4609 A Proposed Hybrid Color Image Compression Based on Fractal Coding with Quadtree and Discrete Cosine Transform

Authors: Shimal Das, Dibyendu Ghoshal

Abstract:

Fractal-based digital image compression is a specific technique in the field of color image compression. The method is best suited for images with irregular shapes, like snow blobs, clouds, flames of fire and tree leaves, and relies on the fact that parts of an image often resemble other parts of the same image. This technique has drawn much attention in recent years because of the very high compression ratio that can be achieved. Hybrid schemes incorporating fractal compression and speedup techniques have achieved high compression ratios compared to pure fractal compression. Fractal image compression is a lossy compression method in which the self-similarity of an image is used. This technique provides a high compression ratio, low encoding time and a fast decoding process. In this paper, fractal compression with a quadtree and the DCT is proposed to compress color images. The proposed hybrid scheme requires four phases to compress a color image. First, the image is segmented and the Discrete Cosine Transform is applied to each block of the segmented image. Second, the block values are scanned in a zigzag manner so that the zero coefficients can be handled efficiently. Third, the resulting image is partitioned into fractals by the quadtree approach. Fourth, the image is compressed using the run-length encoding technique.
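
A small sketch of the zigzag scan and run-length encoding phases on a single 8x8 DCT block is given below; the block content, the crude quantisation step and the use of scipy's DCT are illustrative choices, not the authors' encoder.

```python
# Sketch of the zigzag scan and run-length encoding phases applied to one
# 8x8 DCT block. The block content is random; a real encoder would use
# quantised DCT coefficients of an image tile.
import numpy as np
from scipy.fftpack import dct

def zigzag_order(n=8):
    # Visit anti-diagonals in alternating direction (the classic zigzag path).
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def run_length_encode(seq):
    out, run = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            run += 1
        else:
            out.append((prev, run))
            run = 1
    out.append((seq[-1], run))
    return out

block = np.random.default_rng(0).integers(0, 255, (8, 8)).astype(float)
coeffs = dct(dct(block.T, norm="ortho").T, norm="ortho")    # 2-D DCT
quantised = np.round(coeffs / 16).astype(int)               # crude quantisation
zigzag = [quantised[i, j] for i, j in zigzag_order()]
print(run_length_encode(zigzag))
```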

Keywords: Fractal coding, Discrete Cosine Transform, Iterated Function System (IFS), Affine Transformation, Run length encoding.

Downloads: 1571
4608 Tool Wear of Aluminum/Chromium/Tungsten-Based-Coated Cemented Carbide Tools in Cutting Sintered Steel

Authors: Tadahiro Wada, Hiroyuki Hanyu

Abstract:

In this study, to clarify the effectiveness of aluminum/chromium/tungsten-based-coated tools for cutting sintered steel, tool wear was experimentally investigated. The sintered steel was turned with (Al60,Cr25,W15)N-, (Al60,Cr25,W15)(C,N)- and (Al64,Cr28,W8)(C,N)-coated cemented carbide tools prepared by the physical vapor deposition (PVD) method. Moreover, the tool wear of the aluminum/chromium/tungsten-based-coated tools was compared with that of an (Al,Cr)N-coated tool. Furthermore, to clarify the tool wear mechanism of the aluminum/chromium/tungsten coating films in cutting sintered steel, Scanning Electron Microscope observation and Energy Dispersive X-ray Spectroscopy mapping analysis were conducted on the abraded surface. The following results were obtained: (1) The wear progress of the (Al64,Cr28,W8)(C,N)-coated tool was the slowest among those of the five coated tools. (2) Adding carbon (C) to the aluminum/chromium/tungsten-based coating film was effective in improving the wear resistance. (3) The main wear mechanism of the (Al60,Cr25,W15)N, (Al60,Cr25,W15)(C,N) and (Al64,Cr28,W8)(C,N) coating films was abrasive wear.

Keywords: Cutting, physical vapor deposition coating method, tool wear, tool wear mechanism, sintered steel.

Downloads: 1663
4607 Hazard Identification and Sensitivity of Potential Resource of Emergency Water Supply

Authors: A. Bumbová, M. Čáslavský, F. Božek, J. Dvořák, E. Bakoš

Abstract:

The paper presents a case study of the hazard identification and sensitivity of a potential resource of emergency water supply, as part of the application of a methodology for classifying resources of drinking water for the emergency supply of the population. The case study has been carried out on a selected resource of emergency water supply in one region of the Czech Republic. The hazard identification and sensitivity analysis of the potential resource of emergency water supply are based on a unique procedure and on developed general registers of selected types of hazards and sensitivities. The registers have been developed with the help of the “Fault Tree Analysis” method in combination with the “What if” method. The identified hazards for the assessed resource include hailstorms and torrential rains, drought, soil erosion, accidents of farm machinery, and agricultural production. The developed registers of hazards and vulnerabilities, together with a semi-quantitative assessment of hazards for individual parts of the hydrological structure and technological elements of the presented drilled wells, are the basis for a semi-quantitative risk assessment of the potential resource of emergency supply of the population and the subsequent classification of such a resource within the system of crisis planning.

Keywords: Hazard identification, register of hazards, sensitivity identification, register of sensitivity, emergency water supply, state of crisis, resource of emergency water supply, ground water.

Downloads: 1827
4606 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm

Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn

Abstract:

Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining and pattern recognition, employed to overcome the well-known phenomenon of the curse of dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As for the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers have been employed to compare the proposed approach with several existing methods over twenty-two datasets, including nine high-dimensional and large ones, from the UCI repository. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.

Keywords: Binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct.

Downloads: 731
4605 Quality Service Standard of Food and Beverage Service Staff in Hotel

Authors: Thanasit Suksutdhi

Abstract:

This survey research aims to study the service quality standard of food and beverage service staff in the hotel business by studying the service standards of three sample hotels: Siam Kempinski Hotel Bangkok, Four Seasons Resort Chiang Mai, and Banyan Tree Phuket. In order to find the international service standard of food and beverage service, triangulated research, i.e. quantitative, qualitative, and survey methods, was employed. In this research, questionnaires and in-depth interviews were used to obtain information on the sequences and methods of service. There were three parts of modified questionnaires to measure service quality and guest satisfaction, including service facilities, attentiveness, responsibility, reliability, and circumspection. This study used simple random sampling to derive subjects; the return rate of the questionnaires was 70%, or 280 questionnaires. Data were analyzed with SPSS to find the arithmetic mean, SD and percentage, with comparison by t-test and one-way ANOVA. The results revealed that the service quality of the three hotels was at the international level, which could create high satisfaction among international customers. Recommendations for implementation were to maintain the areas of good service quality and to improve some dimensions of service quality, such as reliability. Training in service standards, product knowledge, and new technology for employees should be provided. Furthermore, in order to develop the service quality of the industry, training collaboration between hotel organizations and educational institutions in food and beverage service should be considered.

Keywords: Service standard, food and beverage department, sequence of service, service method.

Downloads: 7804
4604 Development of EPID-based Real time Dose Verification for Dynamic IMRT

Authors: Todsaporn Fuangrod, Daryl J. O'Connor, Boyd MC McCurdy, Peter B. Greer

Abstract:

An electronic portal imaging device (EPID) has become a method of patient-specific IMRT dose verification for radiotherapy. Research studies have focused on pre- and post-treatment verification; however, there are currently no interventional procedures using EPID dosimetry that measure the dose in real time as a mechanism to ensure that overdoses do not occur and that underdoses are detected as soon as is practically possible. As a result, an EPID-based real-time dose verification system for dynamic IMRT was developed and implemented with MATLAB/Simulink. The EPID image acquisition was set to continuous acquisition mode at 1.4 images per second. The system defined the time constraint gap, or execution gap, at the image acquisition time, so that every calculation must be completed before the next image capture is completed. In addition, the γ-evaluation method was used for dose comparison, with two types of comparison processes monitored: individual image and cumulative dose comparison. The outputs of the system are the γ-map, the percentage of points with γ < 1, and the mean γ versus time, all in real time. Two strategies were used to test the system: an error detection test and a clinical data test. The system can monitor the actual dose delivery compared with the treatment plan data or a previous treatment dose delivery, which means a radiation therapist is able to switch off the machine when an error is detected.
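
A simplified, brute-force sketch of a global γ-evaluation between two small 2-D dose maps is shown below; the 3%/3 mm criteria, grid spacing and toy dose planes are assumptions, and clinical EPID software is far more optimized than this.

```python
# Simplified brute-force gamma evaluation of an evaluated dose map against
# a reference map (global 3%/3 mm criteria assumed). Suitable only for the
# small grids of this sketch.
import numpy as np

def gamma_map(ref, ev, spacing_mm=1.0, dose_crit=0.03, dta_mm=3.0):
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx] * spacing_mm
    dd = dose_crit * ref.max()                      # global dose criterion
    g = np.empty_like(ref, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = (yy - iy * spacing_mm) ** 2 + (xx - ix * spacing_mm) ** 2
            dose2 = (ev - ref[iy, ix]) ** 2
            g[iy, ix] = np.sqrt(np.min(dist2 / dta_mm**2 + dose2 / dd**2))
    return g

rng = np.random.default_rng(2)
reference = np.outer(np.hanning(32), np.hanning(32)) * 2.0           # toy dose plane
evaluated = reference * (1.0 + 0.02 * rng.standard_normal(reference.shape))

g = gamma_map(reference, evaluated)
print("gamma pass rate (gamma < 1): %.1f%%" % (100 * np.mean(g < 1)))
print("mean gamma: %.3f" % g.mean())
```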

Keywords: real-time dose verification, EPID dosimetry, simulation, dynamic IMRT

Downloads: 2188
4603 Optimal Current Control of Externally Excited Synchronous Machines in Automotive Traction Drive Applications

Authors: Oliver Haala, Bernhard Wagner, Maximilian Hofmann, Martin Marz

Abstract:

The excellent suitability of the externally excited synchronous machine (EESM) for automotive traction drive applications is justified by its high efficiency over the whole operating range and the high availability of its materials. Usually, maximum efficiency is obtained by modelling each single loss and minimizing the sum of all losses. As a result, the quality of the optimization highly depends on the precision of the model. Moreover, it requires accurate knowledge of the saturation-dependent machine inductances. Therefore, the present contribution proposes a method to minimize the overall losses of a salient pole EESM and its inverter in steady state operation based on measurement data only. Since this method does not require any manufacturer data, it is well suited for an automated measurement data evaluation and inverter parametrization. The field oriented control (FOC) of an EESM provides three current components, i.e. three degrees of freedom (DOF). An analytic minimization of the copper losses in the stator and the rotor (assuming constant inductances) is performed and serves as a first approximation of how to choose the optimal current reference values. After a numeric offline minimization of the overall losses based on measurement data, the results are compared to a control strategy that satisfies cos(ϕ) = 1.

Keywords: Current control, efficiency, externally excited synchronous machine, optimization.

Downloads: 4395
4602 Distributed Coordination of Connected and Automated Vehicles at Multiple Interconnected Intersections

Authors: Zhiyuan Du, Baisravan Hom Chaudhuri, Pierluigi Pisu

Abstract:

In connected vehicle systems where wireless communication is available among the involved vehicles and intersection controllers, it is possible to design an intersection coordination strategy that leads the connected and automated vehicles (CAVs) to travel through road intersections without conventional traffic light control. In this paper, we present a distributed coordination strategy for CAVs at multiple interconnected intersections that aims at improving system fuel efficiency and system mobility. We present a distributed control solution in which, at the higher level, the intersection controllers calculate the desired average road velocity and optimally assign reference velocities to each vehicle. At the lower level, every vehicle is considered to use model predictive control (MPC) to track the reference velocity obtained from the higher level controller. The proposed method has been implemented on a simulation-based case with a network of two interconnected intersections. Additionally, the effects of mixed vehicle types on the coordination strategy have been explored. Simulation results indicate the improvement in vehicle fuel efficiency and traffic mobility achieved by the proposed method.

Keywords: Connected vehicles, automated vehicles, intersection coordination systems, multiple interconnected intersections, model predictive control.

Downloads: 1848
4601 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand

Authors: Chukiat Chaiboonsri, Satawat Wannapan

Abstract:

This paper applies an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables statistically analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All of the data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the result of the Generalized Method of Moments model (GMM), based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the result of the Maximum Entropy Bootstrapping approach (MEboot), based on a process that attempts to deal with imperfect information and to reduce uncertainty in the data observations (asymmetrical data). In addition, the tourism leakages were investigated by a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed from the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.

Keywords: Thailand tourism, maximum entropy bootstrapping approach, macroeconomic model, asymmetric information.

Downloads: 1264
4600 Construction of Attitude Reference Benchmark for Test of Star Sensor Based on Precise Timing

Authors: Tingting Lu, Yonghai Wang, Haiyong Wang, Jiaqi Liu

Abstract:

To satisfy the need for outfield tests of star sensors, a method is put forward to construct a reference attitude benchmark. Firstly, its basic principle is introduced. Then, all the separate conversion matrices are deduced, which include: the conversion matrix responsible for the transformation from the Earth Centered Inertial frame i to the Earth-centered Earth-fixed frame w according to the time of an atomic clock, the conversion matrix from frame w to the geographic frame t, and the matrix from frame t to the platform frame p, so that the attitude matrix of the benchmark platform relative to frame i can be obtained using all three matrices as the multiplicative factors. Next, the attitude matrix of the star sensor relative to frame i is obtained once the mounting matrix from frame p to the star sensor frame s is calibrated, and the reference attitude angles for star sensor outfield tests can be calculated from the transformation from frame i to frame s. Finally, a computer program is written to solve the reference attitudes, and the error curves are drawn for the three-axis attitude angles, whose maximum absolute error is just 0.25″. The analysis of each loop and the final simulation results show that the method of acquiring the absolute reference attitude by precise timing is feasible for star sensor outfield tests.
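
A minimal numpy sketch of chaining the conversion matrices i → w → t → p → s is given below; the linear Earth-rotation model driven by the atomic-clock time, the site coordinates, and the identity platform and mounting matrices are all placeholder assumptions, not the calibrated quantities of the paper.

```python
# Minimal sketch of the matrix chain used to build the reference attitude:
# frame i (inertial) -> w (Earth-fixed) -> t (geographic) -> p (platform),
# then the calibrated mounting matrix p -> s gives the star-sensor attitude.
import numpy as np

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[ c,  s, 0.0],
                     [-s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def ecef_to_enu(lat, lon):
    """Rotation from the Earth-fixed frame w to a local geographic (ENU) frame t."""
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([[-so,       co,      0.0],
                     [-sl * co, -sl * so, cl ],
                     [ cl * co,  cl * so, sl ]])

omega_earth = 7.2921150e-5                 # Earth rotation rate, rad/s
t_atomic = 3600.0                          # seconds since the reference epoch (assumed)
theta = omega_earth * t_atomic             # crude rotation angle (no precession/nutation)

M_iw = rot_z(theta)                                          # inertial -> Earth-fixed
M_wt = ecef_to_enu(np.deg2rad(40.0), np.deg2rad(116.0))      # Earth-fixed -> geographic
M_tp = np.eye(3)                                             # geographic -> platform (levelled platform assumed)
M_ps = np.eye(3)                                             # mounting matrix p -> s (placeholder)

A_is = M_ps @ M_tp @ M_wt @ M_iw           # star-sensor attitude relative to frame i
print(np.round(A_is, 4))
```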

Keywords: Atomic time, attitude determination, coordinate conversion, inertial coordinate system, star sensor.

Downloads: 1200
4599 Design of Low Power and High Speed Digital IIR Filter in 45nm with Optimized CSA for Digital Signal Processing Applications

Authors: G. Ramana Murthy, C. Senthilpari, P. Velrajkumar, Lim Tien Sze

Abstract:

In this paper, a design methodology to implement a low-power and high-speed 2nd order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the design of the recursive section of the IIR filter, in order to reduce the propagation delay. Furthermore, high-level algorithms designed for the optimization of the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-add architecture, and it is analyzed by using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms the MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. It is observed from the experimental results that the proposed 6T based design method can find better IIR filter designs in terms of power and delay than those obtained by using efficient general multipliers.
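
To illustrate the shift-add replacement of constant multiplications mentioned above, the sketch below runs a 2nd order recursive section whose coefficients are chosen as sums of powers of two; the coefficient values and the Q15-style scaling are illustrative, not the filter of the paper.

```python
# Illustrative sketch of replacing constant multiplications with shifts and
# adds in a 2nd order recursive (IIR) section. Coefficients are chosen as
# sums of powers of two: b0 = 0.375 = 2^-2 + 2^-3, a1 = 0.625 = 2^-1 + 2^-3,
# a2 = -0.25 = -2^-2, so each product becomes a couple of shifts and adds.
def mul_0_375(v): return (v >> 2) + (v >> 3)     # v * 0.375
def mul_0_625(v): return (v >> 1) + (v >> 3)     # v * 0.625
def mul_m0_25(v): return -(v >> 2)               # v * -0.25

def iir_shift_add(samples):
    """y[n] = 0.375*x[n] + 0.625*y[n-1] - 0.25*y[n-2], multiplier-free."""
    y1 = y2 = 0
    out = []
    for x in samples:
        y = mul_0_375(x) + mul_0_625(y1) + mul_m0_25(y2)
        y2, y1 = y1, y
        out.append(y)
    return out

# Q15-style integer step input (32767 ~ 1.0); the filter settles towards
# 0.375 / (1 - 0.625 + 0.25) * 32767 = 0.6 * 32767, i.e. roughly 19660.
step = [32767] * 30
print(iir_shift_add(step)[-1])
```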

Keywords: CSA Full Adder, Delay unit, IIR filter, Low-Power, PDP, Parametric Analysis, Propagation Delay, Throughput, VLSI.

Downloads: 3815