Search results for: Continuous Genetic Algorithm (C-GA)
401 Wastewater Treatment in Moving-Bed Biofilm Reactor operated by Flow Reversal Intermittent Aeration System
Authors: B. K. Kim, D. Chang, D. J. Son, D. W. Kim, J. K. Choi, H. J. Yeon, C. Y. Yoon, Y. Fan, S. Y. Lim, K. H. Hong
Abstract:
The intermittent aeration process can easily be applied to an existing activated sludge system, is highly reliable against loading changes, and can be operated in a relatively simple way. Since the moving-bed biofilm reactor treats pollutants with microorganisms attached to and retained on media, its process efficiency can be higher than that of a suspended-growth biological treatment process, and it can reduce sludge return. In this study, the existing intermittent aeration process with alternating flow, originally applied to the oxidation ditch, is applied to a continuous flow stirred tank reactor (CFSTR) to combine the advantages of both processes, with the aim of developing a process that greatly reduces sludge return from the clarifier and secures reliable treated water quality by adding moving media. The process is well suited as u-environment infrastructure in a future u-City and is expected to accelerate the implementation of the u-Eco city in conjunction with city-based services. The laboratory-scale system was operated at an HRT of 8 hours, excluding the final clarifier, and showed removal efficiencies of 97.7%, 73.1% and 9.4% for organic matter, TN and TP, respectively, with a 4-hour operating cycle at a system SRT of 10 days. After the media were added, the phosphorus removal efficiency remained at a similar level to that before the addition, but the nitrogen removal efficiency improved by 7~10%. In addition, the solids, which were maintained at an MLSS of 1200~1400 at 25% media packing, were all attached onto the media, so no sludge entered the clarifier and sludge return is no longer needed.
Keywords: Municipal wastewater treatment, Biological nutrient removal, Alternating flow intermittent aeration system, Reversal flow intermittent aeration system, Moving-bed biofilm reactor, CFSTR, u-City, u-Eco city.
400 Massively-Parallel Bit-Serial Neural Networks for Fast Epilepsy Diagnosis: A Feasibility Study
Authors: Si Mon Kueh, Tom J. Kazmierski
Abstract:
About 1% of the world's population suffers from the hidden disability known as epilepsy, and major developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched that use artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelisation through dedicated hardware using bit-serial processing. The design of a bit-serial Neural Processing Element (NPE) is presented which implements the functionality of a complete neuron with variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier, which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment suited to the needs of a single epileptic patient in his or her daily activities to predict the occurrence of impending tonic-clonic seizures.
Keywords: Artificial Neural Networks, bit-serial neural processor, FPGA, Neural Processing Element.
399 Partnering with Stakeholders to Secure Digitization of Water
Authors: Sindhu Govardhan, Kenneth G. Crowther
Abstract:
Modernisation of the water sector is leading to increased connectivity and integration of emerging technologies with traditional ones, creating new security risks. The convergence of Information Technology (IT) with Operational Technology (OT) results in solutions that are spread across larger geographic areas, increasingly consist of interconnected Industrial Internet of Things (IIoT) devices and software, rely on the integration of legacy with modern technologies, and use complex supply chain components, leading to complex architectures and communication paths. The result is that multiple parties collectively own and operate these emergent technologies, threat actors find new paths to exploit, and traditional cybersecurity controls are inadequate. Our approach is to explicitly identify and draw the data flows that cross trust boundaries between the owners and operators of the various aspects of these emerging and interconnected technologies. On these data flows, we layer potential attack vectors to create a frame of reference for evaluating possible risks against connected technologies. Finally, we identify where existing controls, mitigations, and other remediations exist across industry partners (e.g., suppliers, product vendors, integrators, water utilities, and regulators). From these, we are able to understand potential gaps in security, the roles in the supply chain that are most likely to effectively remediate those security gaps, and test cases to evaluate and strengthen security across these partners. This informs a “shared responsibility” solution that recognises that security is multi-layered and requires collaboration to be successful. This shared responsibility security framework improves visibility, understanding, and control across the entire supply chain, particularly for those water utilities that are accountable for safe and continuous operations.
Keywords: Cyber security, shared responsibility, IIoT, threat modelling.
398 Impact of Quality Assurance Mechanisms on the Work Efficiency of Staff in the Educational Space of Georgia
Authors: B. Gechbaia, K. Goletiani, G. Gabedava, N. Mikeltadze
Abstract:
At this stage, Georgia is a country actively involved in the European integration process, for which the primary priority is effective integration into the European education system. The modern Georgian higher education system is in the process of establishing a new sociocultural reality, whose main priorities are determined by the quality system as a continuous cycle of planning, implementation, checking and acting. Obviously, in this situation, the issue of the management of educational institutions comes to the foreground, since the proper planning and implementation of personnel management processes is one of the main determinants of an institution's performance. At the same time, one of the most important factors is the psychological comfort of the personnel, ensuring their protection and the efficiency of the stress management policy.
The purpose of this research is to determine how strong the relationship is between the psychological comfort of the personnel and the efficiency of the quality system in the institution, that is, how the quality assurance mechanisms of educational institutions affect the stability of personnel and the prevention and management of stressful situations. The research was carried out within the framework of the Internal Grant Project «The Role of Organizational Culture in the Process of Settlement of Management of Stress and Conflict, Georgian Reality and European Experience» of the Batumi Navigation Teaching University, based on the analysis of survey results from target groups. The small-scale research we conducted revealed that the introduction of a quality assurance system and its active implementation increased the quality of management of Georgian educational institutions, raised the level of universal engagement in internal and external processes and, as a result, improved the quality of education as well as the social and psychological comfort indicators of the society.
Keywords: Quality assurance, effective management, stability of personnel, psychological comfort, stress management.
397 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance
Authors: Emad Alenany, M. Adel El-Baz
Abstract:
In this paper, the flow of different classes of patients into a hospital is modelled and analysed using the queueing network analyzer (QNA) algorithm and discrete-event simulation. The input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The modelled patient flows closely match the real flows of a hospital in Egypt. Based on the analysis of waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients with a lower performance target, requires the same capacity while improving performance for the selected group. In addition, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among the other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first-come-first-served policy.
Keywords: Queueing network, discrete-event simulation, health applications, SPT.
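As a rough illustration of the policy comparison above (not the QNA model or the hospital data), the sketch below simulates a single hypothetical service station with Poisson arrivals and compares the average waiting time under first-come-first-served and shortest-processing-time sequencing; all workload parameters are assumed.

```python
import random

random.seed(42)

def make_jobs(n=5000, arrival_rate=1.0, mean_service=0.8):
    """Hypothetical workload: Poisson arrivals, exponential service times."""
    t, jobs = 0.0, []
    for _ in range(n):
        t += random.expovariate(arrival_rate)
        jobs.append((t, random.expovariate(1.0 / mean_service)))
    return jobs

def average_wait(jobs, policy="FCFS"):
    """Single-server queue under FCFS or SPT (shortest processing time)."""
    pending = sorted(jobs)              # jobs ordered by arrival time
    queue, waits, clock, i = [], [], 0.0, 0
    while i < len(pending) or queue:
        # admit everything that has arrived by the current clock
        while i < len(pending) and pending[i][0] <= clock:
            queue.append(pending[i]); i += 1
        if not queue:                   # server idle: jump to the next arrival
            clock = pending[i][0]
            continue
        if policy == "SPT":
            queue.sort(key=lambda j: j[1])   # serve the shortest waiting job first
        arrival, service = queue.pop(0)
        waits.append(clock - arrival)
        clock += service
    return sum(waits) / len(waits)

jobs = make_jobs()
print("FCFS mean wait:", round(average_wait(jobs), 3))
print("SPT  mean wait:", round(average_wait(jobs, "SPT"), 3))
```

On a moderately loaded station such as this one, SPT typically shows a clear reduction in mean waiting time relative to FCFS, which is the qualitative effect reported in the abstract.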
396 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling
Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow
Abstract:
Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The dynr package provides a suite of fast and accessible functions for estimating and visualizing the results of fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic checks. We used dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When the number of iterations was determined based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
Keywords: Dynamic modeling, missing data, multiple imputation, physiological measures.
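For readers outside R, the following Python sketch illustrates the multiple-imputation idea behind dynr.mi(), using scikit-learn's IterativeImputer as a rough analogue of MICE; the data set, the missingness pattern and the number of imputations are hypothetical, and the dynamic-systems estimation and pooling steps are not shown.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Hypothetical ILD: 200 time points of two physiological measures and affect valence.
data = rng.multivariate_normal([0, 0, 0],
                               [[1.0, 0.5, 0.3],
                                [0.5, 1.0, 0.4],
                                [0.3, 0.4, 1.0]], size=200)
mask = rng.random(data.shape) < 0.15          # 15% of entries go missing
incomplete = data.copy()
incomplete[mask] = np.nan

# Chained-equations imputation; drawing from the posterior yields m distinct
# imputed data sets, whose model estimates would then be pooled (Rubin's rules).
imputations = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(incomplete)
    for m in range(5)
]
print(len(imputations), imputations[0].shape)
```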
395 Surface and Bulk Magnetization Behavior of Isolated Ferromagnetic NiFe Nanowires
Authors: Musaab Salman Sultan
Abstract:
The surface and bulk magnetization behavior of template-released, isolated ferromagnetic Ni60Fe40 nanowires of relatively thick diameter (~200 nm), deposited from a dilute suspension onto pre-patterned insulating chips, has been investigated experimentally using highly sensitive Magneto-Optical Kerr Effect (MOKE) magnetometry and Magneto-Resistance (MR) measurements, respectively. The MR data were consistent with the theoretical predictions of the anisotropic magneto-resistance (AMR) effect. The MR measurements, at all angles of investigation, showed large features and a series of non-monotonic, continuous small features in the resistance profiles. The switching fields extracted from these features and from the MOKE loops were compared with each other and with the switching fields reported in the literature for the same analytical techniques applied to nanowires of similar composition and dimensions. A large difference between the MOKE and MR measurements was noticed. The disparity between the MOKE and MR results is attributed to the difference in the micro-magnetic structure of the surface and the bulk of such ferromagnetic nanowires. This result was confirmed by micro-magnetic simulations of individual NiFe nanowires with cylindrical and rectangular cross-sections and the same diameter/thickness as the experimental wires, using the Object Oriented Micro-magnetic Framework (OOMMF) package; the simulated loops showed different switching events, indicating that such wires pass through different magnetic states during reversal and that the micro-magnetic spin structure during switching is complicated. These results further support the difference between surface and bulk magnetization behavior in these nanowires. This work suggests that a combination of MOKE and MR measurements is required to fully understand the magnetization behavior of such relatively thick, isolated, cylindrical ferromagnetic nanowires.
Keywords: MOKE magnetometry, MR measurements, OOMMF package, micro-magnetic simulations, ferromagnetic nanowires, surface magnetic properties.
394 A Secure Semi-Fragile Watermarking Scheme for Authentication and Recovery of Images Based On Wavelet Transform
Authors: Rafiullah Chamlawi, Asifullah Khan, Adnan Idris, Zahid Munir
Abstract:
Authentication of multimedia content has gained much attention in recent times. In this paper, we propose a secure semi-fragile watermarking scheme with a choice of two watermarks to be embedded. The technique operates in the integer wavelet domain and makes use of semi-fragile watermarks to achieve better robustness. A self-recovery algorithm is employed that hides the image digest in some wavelet subbands in order to detect possible malevolent object manipulation of the image (object replacement and/or deletion). The semi-fragility makes the scheme tolerant of JPEG lossy compression down to a quality factor of 70%, while locating the tampered area accurately. In addition, the system ensures more security because the embedded watermarks are protected with private keys. The computational complexity is reduced by using a parameterized integer wavelet transform. Experimental results show that the proposed scheme guarantees the safety of the watermark, image recovery, and accurate location of the tampered area.
Keywords: Integer Wavelet Transform (IWT), Discrete Cosine Transform (DCT), JPEG Compression, Authentication and Self-Recovery.
393 Medical Image Fusion Based On Redundant Wavelet Transform and Morphological Processing
Authors: P. S. Gomathi, B. Kalaavathi
Abstract:
Image fusion is the process in which complementary information from multiple images is integrated to provide a composite image that contains more information than the original input images. Medical image fusion extracts useful information from multimodality medical images and provides additional information to the doctor for better diagnosis of diseases. This paper presents a wavelet-based medical image fusion algorithm applied to different multimodality medical images. In order to fuse the medical images, the images are decomposed using the Redundant Wavelet Transform (RWT). The high-frequency coefficients are convolved with a morphological operator followed by the maximum-selection (MS) rule, and the low-frequency coefficients are processed by the MS rule. The fused image is obtained by the inverse RWT. Quantitative measures including mean, standard deviation, average gradient, spatial frequency, and edge-based similarity measures are used for evaluating the fused images. The performance of the proposed method is compared with pixel averaging, PCA, and DWT fusion methods. When compared with these conventional methods, the proposed framework provides better performance for the analysis of multimodality medical images.
Keywords: Discrete Wavelet Transform (DWT), Image Fusion, Morphological Processing, Redundant Wavelet Transform (RWT).
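A minimal sketch of the fusion rule described above, assuming the PyWavelets stationary wavelet transform as a stand-in for the RWT; the morphological processing of the high-frequency bands is omitted and the two registered input images are synthetic.

```python
import numpy as np
import pywt

def fuse_swt(img1, img2, wavelet="db2"):
    """One-level stationary (redundant) wavelet fusion of two registered images:
    all sub-bands are fused by maximum-absolute selection and the result is
    reconstructed with the inverse transform."""
    (a1, (h1, v1, d1)), = pywt.swt2(img1, wavelet, level=1)
    (a2, (h2, v2, d2)), = pywt.swt2(img2, wavelet, level=1)
    pick = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    fused = [(pick(a1, a2), (pick(h1, h2), pick(v1, v2), pick(d1, d2)))]
    return pywt.iswt2(fused, wavelet)

rng = np.random.default_rng(0)
ct_like = rng.random((64, 64))      # stand-ins for two registered modalities
mri_like = rng.random((64, 64))
print(fuse_swt(ct_like, mri_like).shape)   # same shape as the inputs
```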
392 Statistics over Lyapunov Exponents for Feature Extraction: Electroencephalographic Changes Detection Case
Authors: Elif Derya UBEYLI, Inan GULER
Abstract:
A new approach, based on the consideration that electroencephalogram (EEG) signals are chaotic signals, is presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents in order to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for the detection of electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs of the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential for detecting electroencephalographic changes.
Keywords: Chaotic signal, Electroencephalogram (EEG) signals, Feature extraction/selection, Lyapunov exponents
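A minimal sketch of the dimensionality-reduction step described above: given Lyapunov exponents already estimated for an EEG segment by some external routine (assumed here), a handful of summary statistics forms the compact feature vector fed to the classifier.

```python
import numpy as np

def lyapunov_feature_vector(exponents):
    """Reduce a set of estimated Lyapunov exponents to summary statistics.

    `exponents` is assumed to be a 1-D array of exponents computed per EEG
    segment by an external estimator (not shown here)."""
    e = np.asarray(exponents, dtype=float)
    return np.array([
        e.max(),     # largest exponent (degree of chaos)
        e.min(),
        e.mean(),
        e.std(),
    ])

# Hypothetical exponents for one EEG segment
features = lyapunov_feature_vector([0.12, 0.03, -0.05, -0.21, 0.08])
print(features)
```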
391 Reliability Analysis of Computer Centre at Yobe State University Using LRU Algorithm
Authors: V. V. Singh, Yusuf Ibrahim Gwanda, Rajesh Prasad
Abstract:
In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: one database mail server, one redundant server, and one for sharing with the client computers in the CC (called the local server). Considering the different possible operating states of the CC, an analysis has been carried out to evaluate various popular measures of reliability such as availability, reliability, mean time to failure (MTTF), and profit due to the operation of the system. The system can ultimately fail due to failure of the router, failure of the redundant server before the mail server is repaired, or switch failure. The system can also partially fail when the local server fails. Failed devices are restored according to the Least Recently Used (LRU) technique. The system can also fail entirely due to a cooling failure of the server room, an electricity failure, or some natural calamity such as an earthquake, fire or tsunami. All failure rates are assumed to be constant and to follow an exponential time distribution, while repair follows two types of distributions: general and the Gumbel-Hougaard family copula distribution.
Keywords: Reliability, availability, Gumbel-Hougaard family copula, MTTF, internet data center.
390 Harmonizing Spatial Plans: A Methodology to Integrate Sustainable Mobility and Energy Plans to Promote Resilient City Planning
Authors: B. Sanchez, D. Zambrana-Vasquez, J. Fresner, C. Krenn, F. Morea, L. Mercatelli
Abstract:
Local administrations face established sustainable development targets from different disciplines at the heart of different city departments. Nevertheless, some of these targets, such as CO2 reduction, relate to two or more disciplines, as is the case for sustainable mobility and energy plans (SUMP and SECAP/SEAP). This opens up the possibility for different city departments to cooperate efficiently and to create and develop harmonized spatial plans, using available resources and together achieving more ambitious goals in cities. The steps of the harmonization process developed here result in the identification of areas in which to achieve common strategic objectives. Harmonization, in other words, helps different departments in local authorities work together and optimize the use of resources by sharing the same vision, involving key stakeholders, and promoting common data assessment. A methodology to promote resilient city planning via the harmonization of sustainable mobility and energy plans is presented in this paper. In order to validate the proposed methodology, a representative city engaged in an innovation process in efficient spatial planning is used as a case study. The harmonization process for sustainable mobility and energy plans covers identifying matching targets between the different fields, developing spatial plans with dual benefits, and defining common indicators that guarantee the continuous improvement of the harmonized plans. The proposed methodology supports local administrations in consistent spatial planning, considering both energy efficiency and sustainable mobility, so that municipalities can use their human and economic resources efficiently. This guarantees an efficient upgrade of land use plans integrating energy and mobility aspects in order to achieve sustainability targets, as well as to improve the wellbeing of citizens.
Keywords: Harmonized planning, spatial planning, sustainable energy, sustainable mobility, SECAP, SUMP.
389 A Real Time Ultra-Wideband Location System for Smart Healthcare
Authors: Mingyang Sun, Guozheng Yan, Dasheng Liu, Lei Yang
Abstract:
Driven by the demand for intelligent monitoring in rehabilitation centers and hospitals, a high-accuracy real-time location system based on UWB (ultra-wideband) technology is proposed. The system measures the precise location of a specific person, traces his or her movement, and visualizes the trajectory on screen for doctors or administrators. Doctors can therefore view the position of a patient at any time and find them immediately and precisely when an emergency happens. In the design process, different algorithms were discussed and their errors analyzed. In addition, we discuss a simple but effective way of correcting the antenna delay error. By choosing the best algorithm and correcting errors with the corresponding methods, the system attained good accuracy. Experiments indicated that the ranging error of the system is lower than 7 cm, the locating error is lower than 20 cm, and the refresh rate exceeds 5 updates per second. In future work, by embedding the system in wearable IoT (Internet of Things) devices, it could provide not only physical parameters but also the activity status of the patient, which would greatly help doctors in delivering healthcare.
Keywords: Intelligent monitoring, IoT devices, real-time location, smart healthcare, ultra-wideband technology.
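The abstract does not give the positioning algorithm, so the sketch below shows one common choice: least-squares multilateration from UWB range measurements to fixed anchors. The anchor layout, true position and noise level are assumed for illustration only.

```python
import numpy as np

def locate(anchors, ranges):
    """Least-squares position estimate from range measurements, obtained by
    linearising |p - a_i|^2 = r_i^2 against the first anchor."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

rng = np.random.default_rng(0)
anchors = [(0, 0), (8, 0), (0, 6), (8, 6)]        # fixed UWB base stations (metres)
true_pos = np.array([3.0, 2.5])                   # tag worn by the patient
ranges = [np.linalg.norm(true_pos - np.array(a)) + rng.normal(0, 0.05)
          for a in anchors]                       # ~5 cm ranging noise
print("estimated position:", np.round(locate(anchors, ranges), 2))
```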
388 Influence of Tool Geometry on Surface Roughness and Tool Wear When Turning AISI 304L Using Taguchi Optimisation Methodology
Authors: Salah Gariani, Taher Dao, Ahmed Lajili
Abstract:
This paper presents an experimental optimisation of surface roughness (Ra) and tool wear in the precision turning of AISI 304L alloy using wiper and conventional cutting tools under wet cutting conditions. The machining trials were conducted based on the Taguchi methodology, employing an L9 orthogonal array design with four process parameters: feed rate, spindle speed, depth of cut, and cutting tool type. The experimental results were used to characterise the main factors affecting Ra and tool wear using the analysis of means (AOM) and analysis of variance (ANOVA). The results show that the wiper tools outperformed the conventional tools in terms of surface quality and tool wear at the optimal cutting conditions. The ANOVA results indicate that the main factors contributing to lower Ra are cutting tool type and feed rate, with percentage contribution ratios (PCRs) of 58.69% and 25.18%, respectively. This confirms that tool type is the most significant factor affecting surface quality when turning AISI 304L. Additionally, a substantial reduction in tool wear was observed when a wiper insert was used, whereas noticeable increases in tool wear occurred when higher cutting speeds were employed for both tool types. These trends confirm the ANOVA outcome that cutting speed has a significant effect on tool wear, with a PCR value of 39.22%, followed by tool type with a PCR of 27.40%. All machining trials generated similar continuous spiral or curl-shaped chips, with a noticeable difference in the radius of the curl-shaped chips at different cutting speeds when turning AISI 304L under wet cutting conditions.
Keywords: AISI 304L alloy, conventional and wiper carbide tools, wet turning, average surface roughness, tool wear.
387 Local Curvelet Based Classification Using Linear Discriminant Analysis for Face Recognition
Authors: Mohammed Rziza, Mohamed El Aroussi, Mohammed El Hassouni, Sanaa Ghouzali, Driss Aboutajdine
Abstract:
In this paper, an efficient local appearance feature extraction method based on the multi-resolution Curvelet transform is proposed in order to further enhance the performance of the well-known Linear Discriminant Analysis (LDA) method when applied to face recognition. Each face is described by a subset of band-filtered images containing block-based Curvelet coefficients. These coefficients characterize the face texture, and a set of simple statistical measures allows us to form compact and meaningful feature vectors. The proposed method is compared with related feature extraction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Independent Component Analysis (ICA). Two other multi-resolution transforms, Wavelet (DWT) and Contourlet, were also compared against the block-based Curvelet-LDA algorithm. Experimental results on the ORL, YALE and FERET face databases show that the proposed method provides a better representation of the class information and obtains much higher recognition accuracies.
Keywords: Curvelet, Linear Discriminant Analysis (LDA), Contourlet, Discrete Wavelet Transform, DWT, Block-based analysis, face recognition (FR).
386 A Monte Carlo Method to Data Stream Analysis
Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham
Abstract:
Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms, which must balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step, and the data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
Keywords: Data Stream, Monte Carlo, Sampling, Density Estimation.
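A minimal sketch of the hybrid idea, with plain reservoir sampling standing in for the authors' EMR sampling (an assumption on my part) and a Monte Carlo kernel estimate of the stream's density computed from the retained sample.

```python
import random
from math import exp, pi, sqrt

random.seed(7)

def reservoir_sample(stream, k=1000):
    """Plain reservoir sampling: keep a uniform random subset of size k from a
    stream that is seen only once (stand-in for the authors' EMR sampling)."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = random.randint(0, i)
            if j < k:
                sample[j] = x
    return sample

def density_at(sample, x, bandwidth=0.2):
    """Monte Carlo (Gaussian-kernel) estimate of the stream's density at x."""
    n = len(sample)
    return sum(exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in sample) / (
        n * bandwidth * sqrt(2.0 * pi))

stream = (random.gauss(0.0, 1.0) for _ in range(100_000))   # synthetic stream
sample = reservoir_sample(stream, k=2000)
print("estimated density at 0:", round(density_at(sample, 0.0), 3))  # true value ~0.399
```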
385 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company
Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze
Abstract:
As a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation, rather than by controlling those sources, Taguchi Robust Design entails designing ideal goods by developing a product that has minimal variance in its characteristics and also meets the desired performance exactly. This paper examines the concept of this manufacturing approach and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The Taguchi L9 orthogonal array, based on the four parameters with three levels of variation each, revealed, with a range of 2.17, that waiting is the major waste that the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must also be reduced to the barest minimum. After proposing -33.84 as the highest optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocates the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concludes by recommending SMED in order to drastically reduce set-up time, which leads to unnecessary waiting.
Keywords: Taguchi Robust Design, signal to noise ratio, Single Minute Exchange of Dies, lean production system, waste.
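A small worked example of the Taguchi response-table arithmetic used above: the smaller-the-better signal-to-noise ratio, SN = -10 log10(mean(y^2)), and a factor's range taken as the spread of its level means. The measurements are hypothetical, not the company's data.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio for observations y."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical waste measurements (e.g., waiting time) grouped by the three
# levels of one factor across the nine runs of an L9 array.
runs_by_level = {
    "level 1": [52.0, 48.0, 55.0],
    "level 2": [40.0, 43.0, 38.0],
    "level 3": [61.0, 58.0, 63.0],
}
sn_by_level = {lvl: sn_smaller_is_better(y) for lvl, y in runs_by_level.items()}
for lvl, sn in sn_by_level.items():
    print(lvl, round(sn, 2), "dB")
# The factor's "range" in a Taguchi response table is max - min of the level means.
print("range:", round(max(sn_by_level.values()) - min(sn_by_level.values()), 2))
```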
384 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. It converges slowly towards the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified this algorithm so that it performs well for problems with large dimensions, and their results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and Yuan's method. Inspired by these works, we modified the step size of the steepest descent method. We then compared the modified method against the Barzilai and Borwein method, the alternate minimization gradient method, and Yuan's method for quadratic functions in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes yield faster convergence than the other methods, especially for cases with large dimensions.
Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.
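For context, the sketch below implements the classical Barzilai-Borwein step size (one of the baselines mentioned above, not the authors' new step sizes) on a quadratic test problem.

```python
import numpy as np

def bb_gradient_descent(A, b, x0, iters=100):
    """Gradient descent for f(x) = 0.5 x^T A x - b^T x with the
    Barzilai-Borwein step size (BB1: s^T s / s^T y)."""
    x = x0.copy()
    g = A @ x - b                      # gradient of the quadratic
    alpha = 1e-3                       # small initial step before BB kicks in
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:              # BB1 step for the next iteration
            alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + 50 * np.eye(50)          # symmetric positive definite quadratic
b = rng.standard_normal(50)
x = bb_gradient_descent(A, b, np.zeros(50))
print("residual norm:", np.linalg.norm(A @ x - b))
```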
383 Traffic Behaviour of VoIP in a Simulated Access Network
Authors: Jishu Das Gupta, Srecko Howard, Angela Howard
Abstract:
Insufficient Quality of Service (QoS) of Voice over Internet Protocol (VoIP) is a growing concern that has led to the need for research and study. In this paper, we investigate the performance of VoIP and the impact of resource limitations on the performance of Access Networks. The impact on VoIP performance in Access Networks is particularly important in regions where Internet resources are limited and the cost of improving these resources is prohibitive. It is clear that perceived VoIP performance, as measured by the mean opinion score [2] in experiments where subjects are asked to rate communication quality, is determined by the end-to-end delay on the communication path, delay variation, packet loss, echo, the coding algorithm in use, and noise. These performance indicators can be measured and their effect on the Access Network estimated. This paper investigates the contribution of congestion in the Access Network to the overall performance of VoIP services in the presence of other substantial uses of the internet, and ways in which Access Networks can be designed to improve VoIP performance. Methods for analyzing the impact of the Access Network on VoIP performance are surveyed and reviewed. The paper also considers approaches for improving the performance of VoIP by carrying out experiments using the Network Simulator version 2 (NS2) software, with a view to gaining a better understanding of the design of Access Networks.
Keywords: Codec, DiffServ, Droptail, RED, VoIP.
382 Application and Assessment of Artificial Neural Networks for Biodiesel Iodine Value Prediction
Authors: Raquel M. de Sousa, Sofiane Labidi, Allan Kardec D. Barros, Alex O. Barradas Filho, Aldalea L. B. Marques
Abstract:
Several parameters are established in order to measure biodiesel quality. One of them is the iodine value, an important parameter that measures the total unsaturation within a mixture of fatty acids. Limiting unsaturated fatty acids is necessary since heating a higher quantity of them ends in either the formation of deposits inside the motor or damage to the lubricant. Since determination of the iodine value by the official procedure tends to be very laborious, with high costs and toxic reagents, this study uses an artificial neural network (ANN) to predict the iodine value as an alternative to these problems. The networks were developed using 13 fatty acid esters as inputs, and back-propagation convergence algorithms were optimized in order to obtain an architecture for predicting the iodine value. This study demonstrates the neural networks' ability to learn the correlation between biodiesel quality properties, in this case the iodine value, and the molecular structures that make it up. The model developed in the study reached a correlation coefficient (R) of 0.99 for both network validation and network simulation, with the Levenberg-Marquardt algorithm.
Keywords: Artificial Neural Networks, Biodiesel, Iodine Value, Prediction.
381 Applying Similarity Theory and Hilbert Huang Transform for Estimating the Differences of Pig's Blood Pressure Signals between Situations of Intestinal Artery Blocking and Unblocking
Authors: Jia-Rong Yeh, Tzu-Yu Lin, Jiann-Shing Shieh, Yun Chen
Abstract:
A mammal's body can be seen as a network of blood vessels with complex tunnels. When the heart pumps blood periodically, blood runs through the blood vessels and rebounds from their walls, so blood pressure signals can be measured with complex but periodic patterns. When an artery is clamped during a surgical operation, the spectrum of the blood pressure signal differs from that of the normal situation. In this investigation, intestinal artery clamping operations were conducted on a pig to simulate intestinal blocking during a surgical operation. Similarity theory is a convenient and easy tool for proving that the blood pressure patterns with the intestinal artery blocked and unblocked are indeed different, and the Hilbert Huang Transform algorithm can be applied to extract the characteristic parameters of the blood pressure pattern. In conclusion, the patterns of blood pressure signals in the two situations, intestinal artery blocking and unblocking, can be distinguished by the characteristic parameters defined in this paper.
Keywords: Blood pressure, spectrum, intestinal artery, similarity theory and Hilbert Huang Transform.
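A minimal sketch of the Hilbert step of the Hilbert Huang Transform on a synthetic periodic signal: the analytic signal gives instantaneous amplitude and frequency, the kind of characteristic parameters discussed above. The full HHT would first decompose the signal into intrinsic mode functions via empirical mode decomposition, which is not shown here, and the signal below is synthetic rather than measured blood pressure.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic periodic signal standing in for a blood pressure trace:
# 1.5 Hz fundamental plus a harmonic, sampled at 125 Hz.
fs = 125.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t)

analytic = hilbert(x)                              # analytic signal
amplitude = np.abs(analytic)                       # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs      # instantaneous frequency (Hz)

print("mean instantaneous amplitude:", round(amplitude.mean(), 3))
print("mean instantaneous frequency:", round(inst_freq.mean(), 2), "Hz")
```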
380 A Character Detection Method for Ancient Yi Books Based on Connected Components and Regressive Character Segmentation
Authors: Xu Han, Shanxiong Chen, Shiyu Zhu, Xiaoyu Lin, Fujia Zhao, Dingwang Wang
Abstract:
Character detection is an important issue in character recognition for ancient Yi books, and the accuracy of detection directly affects the recognition results. Considering the complex layout, the lack of standard typesetting, and the mixed arrangement of images and text, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and then a modified local adaptive threshold binarization algorithm is used to obtain binary images that separate the foreground from the background. Second, the non-text areas are removed by a method based on connected components. Finally, the single characters in the ancient Yi books are segmented by our method. The experimental results show that the method can effectively separate text areas from non-text areas in ancient Yi books and achieves higher accuracy and recall in the character detection experiments, effectively solving the problem of character detection and segmentation in character recognition of ancient books.
Keywords: Computing methodologies, interest point, salient region detections, image segmentation.
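A minimal sketch of the connected-component stage only, assuming an already binarized page; the filtering, adaptive binarization and regressive segmentation steps of the paper are not reproduced, and the area thresholds are made-up values.

```python
import numpy as np
from scipy import ndimage

def candidate_character_boxes(binary, min_area=20, max_area=5000):
    """Label connected components of a binary page (text = 1) and return the
    bounding boxes of components whose bounding-box area looks character-sized;
    very small or very large components are treated as noise or non-text."""
    labels, n = ndimage.label(binary)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        area = (sl[0].stop - sl[0].start) * (sl[1].stop - sl[1].start)
        if min_area <= area <= max_area:
            boxes.append((sl[1].start, sl[0].start, sl[1].stop, sl[0].stop))
    return boxes

# Tiny synthetic "page" with two blobs standing in for characters.
page = np.zeros((60, 60), dtype=np.uint8)
page[5:20, 5:15] = 1
page[30:50, 25:40] = 1
print(candidate_character_boxes(page, min_area=10))
```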
379 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations
Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward
Abstract:
A reliable, real-time, and non-invasive system that can identify patients at risk of hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capability of a real-time analytic computed from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient as an H-RRT or NH-RRT case. The analytic output and the categorization were then compared, and the prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential to provide clinical decision and monitoring support for caregivers to identify at-risk patients within a clinically relevant timeframe, allowing for increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
Keywords: Critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team.
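The analytic itself is not described in detail, but the keywords point to heart rate variability; the sketch below computes standard time-domain HRV metrics from a hypothetical RR-interval series extracted from a single ECG lead, as an example of the kind of single-lead features involved (not the study's proprietary analytic).

```python
import numpy as np

def hrv_features(rr_ms):
    """Common time-domain heart rate variability metrics from RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),                   # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),    # beat-to-beat variability
        "pnn50": np.mean(np.abs(diffs) > 50.0),      # fraction of large beat-to-beat changes
    }

# Hypothetical RR-interval series (ms) extracted from a single ECG lead.
rr = [812, 805, 790, 840, 830, 795, 760, 820, 835, 810]
for name, value in hrv_features(rr).items():
    print(f"{name}: {value:.2f}")
```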
378 Long Short-Term Memory Based Model for Modeling Nicotine Consumption Using an Electronic Cigarette and Internet of Things Devices
Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi
Abstract:
In this paper, we investigate whether an accurate prediction of nicotine concentration can be obtained by using a network of smart objects and an e-cigarette. The approach consists of, first, the recognition of factors influencing smoking cessation, such as physical activity and participant behaviour (using both a smartphone and a smartwatch), and then the prediction of the configuration of the e-cigarette (in terms of nicotine concentration, power, and resistance). The study uses a network of commonly connected objects, a smartwatch, a smartphone, and an e-cigarette, carried by the participants during an uncontrolled experiment. The data obtained from the sensors in the three devices were used to train a Long Short-Term Memory (LSTM) network. Results show that our LSTM-based model predicts the configuration of the e-cigarette in terms of nicotine concentration, power, and resistance with root mean square error percentages of 12.9%, 9.15%, and 11.84%, respectively. This study can help to better control nicotine consumption and offer an intelligent configuration of the e-cigarette to users.
Keywords: IoT, activity recognition, automatic classification, unconstrained environment, deep neural networks.
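A minimal PyTorch sketch of the kind of LSTM regressor described above, mapping a window of multi-sensor features to the three e-cigarette settings; the feature count, window length and architecture are assumptions, not the authors' configuration.

```python
import torch
from torch import nn

class NicotineRegressor(nn.Module):
    """Minimal LSTM regressor: a window of sensor features -> predicted
    e-cigarette settings (nicotine concentration, power, resistance)."""
    def __init__(self, n_features=8, hidden=32, n_outputs=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # regress from the last time step

model = NicotineRegressor()
x = torch.randn(4, 60, 8)                  # 4 windows of 60 time steps each
print(model(x).shape)                      # torch.Size([4, 3])
```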
377 Removal of Rhodamine B from Aqueous Solution Using Natural Clay by Fixed Bed Column Method
Abstract:
The discharge of dyes in industrial effluents is of great concern because their presence and accumulation have a toxic or carcinogenic effect on living species, and the removal of such compounds at low concentrations is a difficult problem. The adsorption process is an effective and attractive proposition for the treatment of dye-contaminated wastewater. Activated carbon adsorption in fixed beds is a very common technology in water treatment, especially in decolouration processes. However, it is expensive, and powdered activated carbon is difficult to separate from the aquatic system when it becomes exhausted or when the effluent reaches the maximum allowable discharge level; regeneration of exhausted activated carbon by chemical and thermal procedures is also expensive and results in loss of the sorbent. The focus of this research was to evaluate the adsorption potential of raw clay for removing rhodamine B from aqueous solutions using a laboratory fixed-bed column. The continuous sorption process was used in this study in order to simulate industrial conditions. The effects of process parameters, such as inlet flow rate, adsorbent bed height, and initial adsorbate concentration, on the shape of the breakthrough curves were investigated. A glass column with an internal diameter of 1.5 cm and a height of 30 cm was used as the fixed-bed column, and the pH of the feed solution was set at 8.5. Experiments were carried out at different bed heights (5-20 cm), influent flow rates (1.6-8 mL/min) and influent rhodamine B concentrations (20-80 mg/L). The results show that the adsorption capacity increases with bed depth and initial concentration and decreases at higher flow rates. Column regeneration was possible for four adsorption-desorption cycles. The study demonstrates the excellent adsorption capacity of the clay column for the removal of rhodamine B from aqueous solution: uptake of rhodamine B in the fixed-bed column was dependent on bed depth, influent rhodamine B concentration, and flow rate.
Keywords: Adsorption, Breakthrough curve, Clay, Fixed bed column, Rhodamine B, Regeneration.
376 Grouping and Indexing Color Features for Efficient Image Retrieval
Authors: M. V. Sudhamani, C. R. Venugopal
Abstract:
Content-based Image Retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image, based on matching features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique, and the cluster (region) mode is used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. In addition, the images in the database are clustered based on region feature similarity using the Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, further improving efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color, and the results of these methods are compared. A JAVA-based query engine supporting query-by-example has been built to retrieve images by color.
Keywords: Content-based, indexing, cluster, region.
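A minimal sketch of the color-feature step using scikit-learn's mean shift: pixels are clustered in 3-D color space and the cluster modes serve as the image's representative colors. The R*-tree indexing and retrieval steps are not shown, and the test image is synthetic.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def representative_colors(image_rgb):
    """Cluster an image's pixels in 3-D color space with mean shift and
    return the cluster modes as the image's representative colors."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    bandwidth = estimate_bandwidth(pixels, quantile=0.2, n_samples=500)
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(pixels)
    return ms.cluster_centers_            # one (R, G, B) mode per region

# Synthetic three-region image standing in for a database image.
rng = np.random.default_rng(1)
img = np.vstack([
    rng.normal(loc, 5, size=(400, 3))
    for loc in ([220, 30, 30], [30, 200, 60], [40, 40, 230])
]).reshape(60, 20, 3)
print(np.round(representative_colors(img)))
```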
375 Encryption Efficiency Analysis and Security Evaluation of RC6 Block Cipher for Digital Images
Authors: Hossam El-din H. Ahmed, Hamdy M. Kalash, Osama S. Farag Allah
Abstract:
This paper investigates the encryption efficiency of the RC6 block cipher applied to digital images, providing a new mathematical measure of encryption efficiency, which we call the encryption quality, as an alternative to visual inspection. The encryption quality of the RC6 block cipher is investigated across several design parameters, such as word size, number of rounds, and secret key length, and the optimal choices for the best values of these design parameters are given. The security of the RC6 block cipher for digital images is also analysed from a strict cryptographic viewpoint: its resistance to brute-force, statistical, and differential attacks is estimated, and experiments are carried out to test its security against all the aforementioned types of attacks. The experiments and results verify that the RC6 block cipher is highly secure for real-time image encryption from a cryptographic viewpoint. Thorough experimental tests with detailed analysis demonstrate the high security of the RC6 block cipher algorithm, so RC6 can be considered a secure symmetric cipher for real-time digital image encryption.
Keywords: Block cipher, Image encryption, Encryption quality, and Security analysis.
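One common histogram-deviation formulation of an encryption quality measure is sketched below (the paper's exact definition may differ): the average absolute difference, over the 256 gray levels, between the occurrence counts in the plain image and in the cipher image.

```python
import numpy as np

def encryption_quality(plain, cipher):
    """Histogram-deviation encryption quality for 8-bit grayscale images:
    average absolute difference of gray-level occurrence counts between the
    cipher image and the plain image."""
    hp = np.bincount(plain.ravel(), minlength=256)
    hc = np.bincount(cipher.ravel(), minlength=256)
    return np.abs(hc - hp).sum() / 256.0

rng = np.random.default_rng(0)
plain = np.full((128, 128), 100, dtype=np.uint8)                 # low-entropy test image
cipher = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)   # ideal cipher output is near-uniform
print("encryption quality:", encryption_quality(plain, cipher))
```

A higher value indicates that the cipher image's histogram deviates more strongly from the plain image's, which is the behaviour a good image cipher should exhibit.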
374 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings
Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies
Abstract:
Average temperatures worldwide are expected to continue to rise. At the same time, major cities in developing countries are becoming increasingly populated and polluted, and governments are tasked with the problem of overheating and air quality in residential buildings. This paper presents the development of a model that is able to estimate occupant exposure to extreme temperatures and high air pollution within domestic buildings. Building physics simulations were performed using the EnergyPlus building physics software. An accurate metamodel is then formed by randomly sampling building input parameters and training on the outputs of the EnergyPlus simulations. Metamodels vastly reduce the computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) are compared to a Radial Basis Function (RBF) algorithm when forming the metamodel. These techniques were implemented using the PyBrain and scikit-learn Python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating the overheating and air pollution metrics modelled by EnergyPlus.
Keywords: Neural Networks, Radial Basis Functions, Metamodelling, Python machine learning libraries.
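A toy version of the metamodelling comparison, with a cheap analytic function standing in for EnergyPlus and scikit-learn/SciPy standing in for the PyBrain and scikit-learn implementations used in the study; the sample sizes and the response function are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def fake_simulator(x):
    """Stand-in for EnergyPlus: a cheap 'overheating' response of two
    normalised building parameters (e.g., window area, infiltration rate)."""
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.05 * rng.standard_normal(len(x))

X_train = rng.uniform(size=(200, 2))        # random sampling of input parameters
y_train = fake_simulator(X_train)
X_test = rng.uniform(size=(500, 2))
y_test = fake_simulator(X_test)

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                  random_state=0).fit(X_train, y_train)
rbf = RBFInterpolator(X_train, y_train, smoothing=1e-3)

for name, pred in [("NN ", nn.predict(X_test)), ("RBF", rbf(X_test))]:
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    print(name, "RMSE:", round(rmse, 3))
```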
373 A Grey-Fuzzy Controller for Optimization Technique in Wireless Networks
Authors: Yao-Tien Wang, Hsiang-Fu Yu, Dung Chen Chiou
Abstract:
In wireless and mobile communications, ongoing progress provides opportunities for introducing new standards and improving existing services, and supporting multimedia traffic over wireless networks requires quality of service (QoS) provision. In this paper, a grey-fuzzy controller for radio resource management (GF-RRM) is presented to maximize the number of served calls and the QoS provision in wireless networks. In a wireless network, the call arrival rate, the call duration and the communication overhead between the base stations and the control center are vague and uncertain. We develop a method to predict the cell load and to solve the RRM problem based on the GF-RRM, and the supporting facility has been built at the application level of the wireless networks. The GF-RRM exhibits better adaptability, fault-tolerant capability and performance than other algorithms. Through simulations, we evaluate the blocking rate, update overhead, and channel acquisition delay time of the proposed method. The results demonstrate that our algorithm has a lower blocking rate, less update overhead, and shorter channel acquisition delay.
Keywords: radio resource management, grey prediction, fuzzy logic control, wireless networks, quality of service.
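The grey half of such a controller is typically a GM(1,1) predictor; the sketch below fits the classical GM(1,1) model to a hypothetical cell-load history and forecasts the next values. The fuzzy control part, and whatever grey model the authors actually use, are not shown.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classical GM(1,1) grey prediction: fit the grey model to a short series
    and forecast the next `steps` values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # developing coefficient, grey input
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    return np.diff(x1_hat)[len(x0) - 1:]                 # forecasts of the original series

calls_per_cell = [120, 132, 128, 141, 150, 158]          # hypothetical cell-load history
print(np.round(gm11_forecast(calls_per_cell, steps=2), 1))
```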
372 Improved Modulo 2^n + 1 Adder Design
Authors: Somayeh Timarchi, Keivan Navi
Abstract:
Efficient modulo 2^n + 1 adders are important for several applications including residue number systems, digital signal processors and cryptography algorithms. In this paper we present a novel modulo 2^n + 1 addition algorithm for a recently presented number system. The proposed approach is introduced to reduce the power dissipated. In a conventional modulo 2^n + 1 adder, all operands have (n+1)-bit length. To avoid using (n+1)-bit circuits, the diminished-1 and carry-save diminished-1 number systems can be used effectively in applications. In the paper, we also derive two new architectures for designing a modulo 2^n + 1 adder based on an n-bit ripple-carry adder. The first architecture is a faster design, whereas the second one uses less hardware. In the proposed method, the special treatment required for zero operands in the diminished-1 number system is removed. The fastest modulo 2^n + 1 adders in the normal binary system require 3-operand adders; this problem is also resolved in this paper. The proposed architectures are compared with some efficient adders based on the ripple-carry adder and a high-speed adder. It is shown that the hardware overhead and power consumption are reduced, and in some cases the power-delay product is also reduced.
Keywords: Modulo 2^n + 1 arithmetic, residue number system, low power, ripple-carry adders.
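A small worked example of the diminished-1 trick that lets an n-bit adder implement modulo 2^n + 1 addition: add the diminished-1 operands and then the complemented end-around carry. The special handling of zero operands discussed above is omitted, and this illustrates the standard diminished-1 scheme rather than the paper's new architectures.

```python
def dim1_mod_add(a_star, b_star, n):
    """Diminished-1 modulo 2^n + 1 addition for nonzero operands.

    A value X in [1, 2^n] is represented as X* = X - 1 on n bits.  The sum is
    an ordinary n-bit addition of A* and B* followed by adding the complemented
    end-around carry, which is what lets an n-bit ripple-carry adder realise
    the (n+1)-bit modulus."""
    total = a_star + b_star
    carry = total >> n                         # carry out of the n-bit adder
    return (total + (1 - carry)) & ((1 << n) - 1)

n = 4                                          # modulus 2^4 + 1 = 17
A, B = 9, 13                                   # ordinary (nonzero) values
a_star, b_star = A - 1, B - 1                  # diminished-1 representations
s_star = dim1_mod_add(a_star, b_star, n)
print("diminished-1 sum:", s_star, "-> value", s_star + 1)   # (9 + 13) mod 17 = 5
```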