Search results for: optimal digital signal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10291

8911 Review on Future Economic Potential Stems from Global Electronic Waste Generation and Sustainable Recycling Practices.

Authors: Shamim Ahsan

Abstract:

Global digital advances, together with consumers’ strong appetite for state-of-the-art digital technologies, are creating overwhelming social and environmental challenges for the global community. In recent years, not only have the economic advances of the electronics industries proceeded at a steadfast rate, but the generation of e-waste has also outpaced the growth of every other waste stream. The global e-waste volume was estimated to reach 65.4 million tons annually by 2017. Formal recycling practices in developed countries create an economic liability, opening paths for illegal trafficking to developing countries, where informal, crude management of large volumes of e-waste is becoming an emergent environmental and health challenge. Conversely, several studies have shown that both formal and informal e-waste recycling can yield economic returns in developed and developing countries alike. Research on China estimated that the recycling potential of its large e-waste volumes could evolve from ∼16 (10−22) billion US$ in 2010 to an anticipated ∼73.4 (44.5−103.4) billion US$ by 2030. In another study, an economic analysis of 14 common categories of waste electrical and electronic equipment (WEEE) put their overall worth at €2.15 billion to European markets, with a potential rise to €3.67 billion as volumes increase. These economic returns and environmental protection approaches are feasible only when sustainable policy options are embraced together with stricter regulatory mechanisms. This study critically reviews current research to show how global e-waste generation and sustainable e-waste recycling practices demonstrate future economic development potential, in terms of both quantity and processing capacity, while also triggering complex environmental challenges.

Keywords: e-waste, generation, economic potential, recycling

Procedia PDF Downloads 307
8910 Influence of the Paint Coating Thickness in Digital Image Correlation Experiments

Authors: Jesús A. Pérez, Sam Coppieters, Dimitri Debruyne

Abstract:

In the past decade, the use of digital image correlation (DIC) techniques has increased significantly in the area of experimental mechanics, especially for materials behavior characterization. This non-contact tool enables full-field displacement and strain measurements over a complete region of interest. The DIC algorithm requires a random contrast pattern on the surface of the specimen in order to perform properly. To create this pattern, the specimen is usually first coated with a white matt paint, after which a black random speckle pattern is applied using any suitable method. If the applied paint coating is too thick, its top surface may not be able to exactly follow the deformation of the specimen, and consequently the strain measurement may be underestimated. In the present article, the influence of paint thickness on this strain underestimation is studied for different strain levels, and the results are compared with typical paint coating thicknesses applied by experienced DIC users. A slight strain underestimation was observed for paint coatings thicker than about 30 µm; this threshold, however, is well above the coating thicknesses typically applied by DIC users.

Keywords: digital image correlation, paint coating thickness, strain

Procedia PDF Downloads 516
8909 A Process of Forming a Single Competitive Factor in the Digital Camera Industry

Authors: Kiyohiro Yamazaki

Abstract:

This paper examines the process by which a single competitive factor formed in the digital camera industry, from the viewpoint of the product platform. To make product development easier and to increase product introduction rates, companies concentrate their development efforts on improving and strengthening certain product attributes, and through this process a product platform is formed continuously. The formation of such a product platform raises the product development efficiency of individual companies but, as a trade-off, unifies the competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, it covers all product specifications released in Japan from 1995 to 2003 and analyzes the composition of image sensors and optical lenses, identifying product platforms shared by multiple products and discussing their application. The analysis shows that product platforms emerged from the development of standard products for the major market segments. Every major company built product platforms around image sensors and optical lenses, and as a result the competitive factors became unified across the entire industry. In other words, platform formation improved the product development efficiency of individual firms while simultaneously unifying the industry’s competitive factors.

Keywords: digital camera industry, product evolution trajectory, product platform, unification of competitive factors

Procedia PDF Downloads 159
8908 Application of Analytical Method for Placement of DG Unit for Loss Reduction in Distribution Systems

Authors: G. V. Siva Krishna Rao, B. Srinivasa Rao

Abstract:

The main aim of this paper is to apply distributed generation (DG) in distribution systems to reduce distribution system losses and improve voltage profiles. A fuzzy logic technique is used to select the proper DG location, and an analytical method is proposed to calculate the size of the DG unit at any power factor. The resulting optimal DG sizes are compared with the optimal sizes obtained using a genetic algorithm. The suggested method is implemented in MATLAB and tested on the IEEE 33-bus system, and the results are presented.
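The loss-reduction objective behind DG siting and sizing can be illustrated with a brute-force search. This is a minimal sketch, not the paper's fuzzy-logic/analytical method: the feeder data are hypothetical, and unity power factor with a flat voltage profile is assumed.

```python
# Illustrative only: brute-force search for the DG bus and size that
# minimize I^2*R losses on a small hypothetical radial feeder.

def feeder_losses(loads_kw, r_ohm, v_kv, dg_bus=None, dg_kw=0.0):
    """Approximate I^2*R losses (kW) on a radial main: branch i carries
    the net power of every bus from i onward."""
    net = list(loads_kw)
    if dg_bus is not None:
        net[dg_bus] -= dg_kw          # DG injection offsets local load
    loss_kw = 0.0
    for i, r in enumerate(r_ohm):
        p_kw = sum(net[i:])           # power flowing through branch i
        i_amp = p_kw / v_kv           # I = P / V at unity power factor
        loss_kw += i_amp ** 2 * r / 1000.0
    return loss_kw

loads = [400.0, 300.0, 200.0, 100.0]  # kW at buses 1..4 (hypothetical)
r = [0.08, 0.10, 0.12, 0.15]          # ohms of each branch (hypothetical)
base = feeder_losses(loads, r, 11.0)
best = min((feeder_losses(loads, r, 11.0, b, s), b, s)
           for b in range(len(loads))
           for s in range(0, 1001, 50))
print(f"losses: {base:.3f} kW -> {best[0]:.3f} kW "
      f"(DG of {best[2]} kW at bus {best[1] + 1})")
```

An analytical sizing formula, as in the paper, replaces the inner search over sizes with a closed-form optimum per bus; the exhaustive sweep above merely makes the objective explicit.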

Keywords: DG Units, sizing of DG units, analytical methods, optimum size

Procedia PDF Downloads 474
8907 Laser - Ultrasonic Method for the Measurement of Residual Stresses in Metals

Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya

Abstract:

A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of the residual stresses. A laser-ultrasonic method is developed to evaluate residual stresses and subsurface defects in metals. The method is based on laser thermo-optical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a duration of 8 ns (full width at half maximum) and an energy of 300 µJ is absorbed in a thin layer of a special generator that is inclined relative to the object under study. The non-uniform heating of the generator produces a broadband, powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse with the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through a prism that serves as an acoustic duct. At the interface between the laser-ultrasonic transducer and the object, most of the longitudinal wave energy is converted into shear, subsurface longitudinal, and Rayleigh waves. These spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal corresponding to the detected acoustic signal is acquired by an analog-to-digital converter and then mathematically processed and visualized on a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of the acoustic waves in the acoustic ducts, are characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the longitudinal ultrasonic wave velocity is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality.
This precision allows one to determine mechanical stress in steel samples with a minimal detection threshold of approximately 22.7 MPa. Results are presented for the measured dependence of the longitudinal ultrasonic wave velocity in the samples on the applied compressive stress in the range of 20-100 MPa.
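The velocity-stress relation exploited here is the acoustoelastic effect: in the linear regime, the longitudinal velocity shifts approximately as v(σ) ≈ v₀(1 + Kσ). A minimal sketch follows; the values of v₀ and K are illustrative for structural steel and are not taken from the paper.

```python
# Sketch of the linear acoustoelastic law underlying the method.
# V0 and K below are assumed, illustrative values, not the paper's.

V0 = 5900.0     # unstressed longitudinal velocity, m/s (assumed)
K = 2.2e-5      # acoustoelastic coefficient, 1/MPa (assumed)

def stress_from_velocity(v, v0=V0, k=K):
    """Invert v = v0 * (1 + k * sigma) to estimate stress in MPa."""
    return (v - v0) / (k * v0)

# With the +/-3 m/s velocity resolution quoted in the abstract, the
# smallest resolvable stress for a coefficient of this order is:
sigma_min = stress_from_velocity(V0 + 3.0)
print(f"minimum detectable stress ~ {sigma_min:.1f} MPa")
```

For constants of this magnitude the threshold comes out near the 22.7 MPa reported in the abstract, which is the kind of consistency the linear law predicts.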

Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses

Procedia PDF Downloads 327
8906 Exploring Augmented Reality in Graphic Design: A Hybrid Pedagogical Model for Design Education

Authors: Nan Hu, Wujun Wang

Abstract:

In the ever-changing digital arena, augmented reality (AR) applications have transitioned from technological enthusiasm into business endeavors, signaling a near future in which AR applications are integrated into daily life. While practitioners in the design industry continue to explore AR’s potential for innovative communication, educators have taken steps to incorporate AR into the curricula for design, explore its creative potential, and realize early initiatives for teaching AR in design-related disciplines. In alignment with recent advancements, this paper presents a pedagogical model for a hybrid studio course in which students collaborate with AR alongside 3D modeling and graphic design. The course extended students’ digital capacity, fostered their design thinking skills, and immersed them in a multidisciplinary design process. This paper outlines the course and evaluates its effectiveness by discussing challenges encountered and outcomes generated in this particular pedagogical context. By sharing insights from the teaching experience, we aim to empower the community of design educators and offer institutions a valuable reference for advancing their curricular approaches. This paper is a testament to the ever-evolving landscape of design education and its response to the digital age.

Keywords: 3D, AR, augmented reality, design thinking, graphic design

Procedia PDF Downloads 72
8905 Fractional Order Differentiator Using Chebyshev Polynomials

Authors: Koushlendra Kumar Singh, Manish Kumar Bajpai, Rajesh Kumar Pandey

Abstract:

A discrete-time fractional order differentiator has been modeled for estimating the fractional order derivatives of a contaminated signal. The proposed approach is based on Chebyshev polynomials. The Riemann-Liouville fractional order derivative definition is used for designing the fractional order Savitzky-Golay (S-G) differentiator. In the first step, the window weights corresponding to the required fractional order are calculated; the signal is then convolved with these window weights to obtain its fractional order derivatives. Several signals are considered for evaluating the accuracy of the proposed method.
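The weights-then-convolution idea can be illustrated with the Grünwald-Letnikov definition, which is the simplest discrete counterpart of the Riemann-Liouville derivative. This is a sketch of the general scheme, not the paper's Chebyshev-based design; the test signal and step size are arbitrary.

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), via the
    standard recursion w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(signal, alpha, h):
    """Fractional derivative of order alpha on a uniform grid of step h,
    as a growing-window convolution of the signal with the weights."""
    w = gl_weights(alpha, len(signal))
    return [sum(w[k] * signal[n - k] for k in range(n + 1)) / h ** alpha
            for n in range(len(signal))]

# Sanity check: the half-order derivative of f(t) = t is 2*sqrt(t/pi).
h = 1e-3
t = [i * h for i in range(1001)]
d = gl_derivative(t, 0.5, h)
print(d[-1], "vs exact", 2 * math.sqrt(t[-1] / math.pi))
```

For alpha = 1 the weights reduce to (1, -1, 0, ...), recovering the ordinary backward difference, which makes the recursion easy to verify by hand.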

Keywords: fractional order derivative, Chebyshev polynomials, signals, S-G differentiator

Procedia PDF Downloads 649
8904 Optimal Allocation of Oil Rents and Public Investment In Low-Income Developing Countries: A Computable General Equilibrium Analysis

Authors: Paule Olivia Akotto

Abstract:

The recent literature suggests spending between 50% and 85% of oil rents. However, there are as yet no clear guidelines for allocating this windfall within the public investment system, and most resource-rich countries fail to improve their intergenerational mobility. We study the design of an optimal spending system in Senegal, a low-income developing country with newly discovered oil fields and low intergenerational mobility. We build a dynamic general equilibrium model in which rural and urban households (in Dakar and other urban centers, henceforth OUC) face different health, education, and employment opportunities based on their location, affecting their intergenerational mobility. The model captures the relationship between oil rents, public investment, and multidimensional inequality of opportunity. The government invests oil rents in three broad sectors: health and education, roads and industries, and agriculture. Through an endogenous productivity externality and human capital accumulation, our model reproduces the predominant position of Dakar and OUC households in terms of access to health, education, and employment, in line with Senegalese data; rural households are worse off in all dimensions. We compute the optimal spending policy under two sets of simulation scenarios. Under the current Senegalese public investment strategy, which weights health and education investments more heavily, we find that the reform maximizing the decline in inequality of opportunity between households frontloads investment during the first eight years of oil exploitation and spends the perpetual value of oil wealth thereafter. We will then identify the marginal winners and losers associated with this policy and its redistributive implications. Under our second set of scenarios, we will test whether the Senegalese economy can reach better equality-of-opportunity outcomes under this frontloading reform by allowing the sectoral shares of investment to vary.
The trade-off will be between cutting human capital investment in favor of agricultural and productive infrastructure, or increasing it. We will characterize the optimal policy by specifying where the greater weight should lie. We expect the optimal policy of the second set to strictly dominate, in terms of equality of opportunity, the optimal policy computed under the current investment strategy. Finally, we will quantify the optimal policy's aggregate and distributional effects on poverty, well-being, and gender earning gaps.

Keywords: developing countries, general equilibrium, inequality of opportunity, oil rents

Procedia PDF Downloads 239
8903 Food Processing Technology and Packaging: A Case Study of Indian Cashew-Nut Industry

Authors: Parashram Jakappa Patil

Abstract:

India is the global leader in the world cashew business, and the cashew-nut industry is one of the important food processing industries in the world. India is the largest producer, processor, exporter, and importer of cashew in the world, supplying cashew to the rest of the world and meeting world demand. India has tremendous potential for cashew production and export to other countries. Every year India earns more than 2,000 crore rupees through the cashew trade. The cashew industry is one of the important small-scale industries in the country and plays a significant role in rural development: it generates more than 400,000 jobs in remote areas, 95% of cashew workers are women, it provides income to poor cashew farmers, the majority of cashew processing units are small or cottage-scale, it helps stem the migration of young farmers in search of employment, it motivates rural entrepreneurship development, and it also contributes to environmental protection. Hence the Indian cashew business is a very important agribusiness with the potential to drive inclusive development. The World Bank and the IMF have recognized the cashew-nut industry as an important tool for poverty eradication at the global level, which underlines the importance of the cashew business and its strong presence in India. In spite of this huge potential, the cashew processing industry faces various problems: weak infrastructure, short supply of raw cashew, limited access to finance, difficulties in collecting raw cashew, unavailability of warehouses, marketing of cashew kernels, lack of technical knowledge, and especially poor processing technology and packaging of finished products.
The industry has great prospects, including scope for more cashew cultivation and production, employment generation, formation of cashew processing units, alcohol production from cashew apples, shell oil production, rural development, poverty elimination, development of socially and economically backward classes, and environmental protection. The industry serves domestic as well as foreign markets, and India has tremendous potential in this regard. Cashew is a poor man’s crop but a rich man’s food; it is a source of income and livelihood for poor farmers, and the cashew-nut industry can play a very important role in the development of hilly regions. The objectives of this paper are to identify the problems of cashew processing and the use of processing technology, the problems of cashew kernel packaging, the evolution of cashew processing technology over the years and its impact on the final product, and the impact of good processing with appropriate packaging on the international trade in cashew-nut. The most important problems of the cashew processing industry are processing and packaging. Bad processing greatly reduces the quality of cashew kernels, especially through breakage; broken kernels fetch a much lower market price than whole kernels and are not eligible for export. On the other hand, without good packaging, cashew kernels absorb moisture, which destroys their taste. International trade in cashew-nut thus depends on two things: processing and packaging. This study has strong relevance because the cashew-nut industry is labour-oriented: processing technology has so far played a limited role, since 95% of the processing work is manual. Processing has therefore depended on the physical performance of workers, making a large workforce inevitable, and many cashew processing units have closed because they could not secure a sufficient workforce.
However, due to advancements in technology, this picture is slowly changing and processing work is improving. It is therefore interesting to explore all these aspects in the context of cashew processing and the packaging of the cashew business.

Keywords: cashew, processing technology, packaging, international trade, change

Procedia PDF Downloads 423
8902 Digital Preservation in Nigeria Universities Libraries: A Comparison between University of Nigeria Nsukka and Ahmadu Bello University Zaria

Authors: Suleiman Musa, Shuaibu Sidi Safiyanu

Abstract:

This study examined digital preservation in Nigerian university libraries, comparing the University of Nigeria, Nsukka (UNN) and Ahmadu Bello University, Zaria (ABU, Zaria). The study utilized primary data obtained from the librarians of the two selected institutions. Findings revealed varying results in terms of the skills acquired by librarians before and after digitization at the two institutions. The study reports that journal publications, textbooks, CD-ROMs, conference papers and proceedings, theses, dissertations, and seminar papers are among the information resources available for digitization. It further documents that copyright issues, power failure, and unavailability of needed materials are among the challenges facing digitization in the institutions' libraries. On the basis of the findings, the study concluded that library digitization enhances efficiency in the organization and retrieval of information services. The study therefore recommended that software be upgraded with backups, that librarians be trained in the digital process, that antivirus software be installed, and that technical collaboration between the library and MIS be strengthened.

Keywords: digitalization, preservation, libraries, comparison

Procedia PDF Downloads 341
8901 Evaluation of Three Digital Graphical Methods of Baseflow Separation Techniques in the Tekeze Water Basin in Ethiopia

Authors: Alebachew Halefom, Navsal Kumar, Arunava Poddar

Abstract:

The purpose of this work is to specify parameter values and the base flow index (BFI) and to rank the methods that should be used for base flow separation. Three different digital graphical approaches are chosen and compared in this study. Daily time series discharge data were collected from the site for a period of 30 years (1986 to 2015) and used to evaluate the algorithms. To separate the base flow from the surface runoff, the daily recorded streamflow (m³/s) data were used to calibrate the procedures and obtain parameter values for the basin. The performance of the methods was assessed using the standard error (SE), the coefficient of determination (R²), the flow duration curve (FDC), and baseflow indices. The findings indicate that, in general, each method can be applied worldwide to separate base flow; however, the Sliding Interval Method (SIM) performs significantly better than the other two techniques in this basin. The average base flow index was calculated to be 0.72 using the Local Minimum Method, 0.76 using the Fixed Interval Method, and 0.78 using the Sliding Interval Method.
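All three graphical methods reduce to simple block operations on the daily discharge series. As an illustration of one of them, here is a minimal sketch of the Fixed Interval Method and the BFI computation; the 30-day record and the 5-day interval are hypothetical, not the study's calibrated values.

```python
def fixed_interval_baseflow(q, n):
    """Fixed Interval Method sketch: split the daily record into n-day
    blocks and assign each block's minimum discharge as its baseflow."""
    base = []
    for start in range(0, len(q), n):
        block = q[start:start + n]
        base.extend([min(block)] * len(block))
    return base

def bfi(q, qb):
    """Baseflow index: long-term ratio of baseflow volume to total flow."""
    return sum(qb) / sum(q)

# Synthetic 30-day record (m3/s): a slow recession plus two storm peaks.
flow = [10.0 - 0.1 * d for d in range(30)]
flow[8] += 25.0    # hypothetical storm events
flow[20] += 40.0
qb = fixed_interval_baseflow(flow, 5)
print(f"BFI = {bfi(flow, qb):.2f}")
```

The Sliding Interval Method differs only in moving the window one day at a time instead of in disjoint blocks, and the Local Minimum Method connects turning-point minima by interpolation.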

Keywords: baseflow index, digital graphical methods, streamflow, Emba Madre Watershed

Procedia PDF Downloads 83
8900 Investigation of Amorphous Silicon A-Si Thin Films Deposited on Silicon Substrate by Raman Spectroscopy

Authors: Amirouche Hammouda, Nacer Boucherou, Aicha Ziouche, Hayet Boudjellal

Abstract:

Silicon has excellent physical and electrical properties for the optoelectronics industry; it is a promising material with many advantages. In Raman characterization of thin films deposited on a crystalline silicon substrate, the Raman signal of the amorphous silicon is often disturbed by that of the crystalline substrate. In this paper, we characterize thin layers of amorphous silicon deposited on crystalline silicon substrates. The results show that the Raman spectrum of the deposited layers can be brought out by optimizing the experimental parameters.

Keywords: Raman scattering, amorphous silicon, crystalline silicon, thin films

Procedia PDF Downloads 76
8899 Design and Implementation of 2D Mesh Network on Chip Using VHDL

Authors: Boudjedra Abderrahim, Toumi Salah, Boutalbi Mostefa, Frihi Mohammed

Abstract:

Nowadays, advances in semiconductor device fabrication allow many transistors to be integrated on a single chip (VLSI). Although the growth in chip density potentially eases the integration of systems-on-chip (SoCs) containing thousands of processing elements (PEs) such as memories, processors, and interface cores, system complexity, high-performance interconnect, and scalable on-chip communication architectures have become major challenges for digital and embedded system designers. Networks-on-chip (NoCs) are a new paradigm that makes it possible to integrate heterogeneous devices while satisfying many communication constraints and performance requirements. In this paper, we target good performance and low implementation area with a behavioral model of a mesh-topology network-on-chip designed in the VHDL hardware description language, together with a performance evaluation and FPGA implementation results.

Keywords: design, implementation, communication system, network on chip, VHDL

Procedia PDF Downloads 380
8898 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis

Authors: Iman Farasat, Howard M. Salis

Abstract:

Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior toward a desired outcome. While such models assume that each circuit component’s function is modular and independent, even small changes in a circuit (e.g., a new promoter, a change in transcription factor expression level, or even a new growth medium) can have significant effects on the circuit’s function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model’s accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs together with data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C).
Model predictions correctly accounted for how these 8 factors control a promoter’s transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter’s output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies the genetic circuit design, particularly important as circuits employ more TFs to perform increasingly complex functions.

Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement

Procedia PDF Downloads 475
8897 Optimal Design of Composite Patch for a Cracked Pipe by Utilizing Genetic Algorithm and Finite Element Method

Authors: Mahdi Fakoor, Seyed Mohammad Navid Ghoreishi

Abstract:

Composite patching is a common way of reinforcing cracked pipes and cylinders. The effect of composite patch reinforcement on the fracture parameters of a cracked pipe depends on a variety of parameters, such as the number of layers and the angle, thickness, and material of each layer. Therefore, stacking sequence optimization of the composite patch becomes crucial for cracked pipe applications. In this study, in order to obtain the optimal stacking sequence for a composite patch with minimum weight and maximum resistance to crack propagation, a coupled Multi-Objective Genetic Algorithm (MOGA) and Finite Element Method (FEM) process is proposed. This optimization is carried out for longitudinal and transverse semi-elliptical cracks, and the optimal stacking sequences and Pareto fronts for each kind of crack are presented. The proposed algorithm is validated against results collected from the existing literature.

Keywords: multi objective optimization, pareto front, composite patch, cracked pipe

Procedia PDF Downloads 312
8896 Resource Constrained Time-Cost Trade-Off Analysis in Construction Project Planning and Control

Authors: Sangwon Han, Chengquan Jin

Abstract:

Time-cost trade-off (TCTO) analysis is one of the most significant parts of construction project management. Despite this significance, current TCTO analysis, based on the Critical Path Method, does not consider resource constraints and accordingly sometimes generates schedule plans that are impractical or infeasible in terms of resource availability. Resource constraints therefore need to be considered when doing TCTO analysis. In this research, a genetic algorithm (GA) based optimization model is created in order to find the optimal schedule. This model is used to compare four distinct scenarios: 1) the initial CPM schedule; 2) TCTO without considering resource constraints; 3) resource allocation after TCTO; and 4) TCTO with resource constraints; in terms of duration, cost, and resource utilization. The comparison results identify that TCTO with resource constraints generates the optimal schedule with respect to duration, cost, and resources. This verifies the need to consider resource constraints when doing TCTO analysis. The proposed model is expected to produce more feasible and optimal schedules.
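The GA formulation can be sketched compactly: each chromosome selects an execution mode (normal or crashed) per activity, the schedule is evaluated by a forward pass over the precedence network, and fitness trades direct cost against time-dependent overhead. Everything below (the 4-activity network, modes, and overhead rate) is hypothetical, not the paper's model.

```python
import random
random.seed(1)

# Hypothetical network A->C, B->C, C->D; two modes per activity:
# (duration_days, direct_cost). Overhead accrues per day of project time.
MODES = {
    "A": [(4, 100), (2, 180)],
    "B": [(6, 150), (3, 260)],
    "C": [(5, 120), (4, 160)],
    "D": [(3, 80), (2, 130)],
}
PRED = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}
ORDER = ["A", "B", "C", "D"]           # topological order
OVERHEAD = 40                          # cost per day of project duration

def schedule(genes):
    dur = {a: MODES[a][g][0] for a, g in zip(ORDER, genes)}
    finish = {}
    for a in ORDER:                    # forward pass (CPM)
        finish[a] = max((finish[p] for p in PRED[a]), default=0) + dur[a]
    direct = sum(MODES[a][g][1] for a, g in zip(ORDER, genes))
    return finish["D"], direct

def fitness(genes):
    duration, direct = schedule(genes)
    return direct + OVERHEAD * duration

def ga(pop_size=20, gens=40):
    pop = [[random.randint(0, 1) for _ in ORDER] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randint(1, len(ORDER) - 1) # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:               # bit-flip mutation
                i = random.randrange(len(ORDER))
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = ga()
print(best, "duration/direct cost:", schedule(best), "total:", fitness(best))
```

Adding resource constraints, as the paper argues, amounts to extending `fitness` with a penalty (or repair step) whenever the modes selected would exceed daily resource availability.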

Keywords: time-cost trade-off, genetic algorithms, critical path, resource availability

Procedia PDF Downloads 189
8895 Continuous-Time and Discrete-Time Singular Value Decomposition of an Impulse Response Function

Authors: Rogelio Luck, Yucheng Liu

Abstract:

This paper proposes the continuous-time singular value decomposition (SVD) of the impulse response function, a special kind of Green’s function e⁻⁽ᵗ⁻ᵀ⁾, in order to find a set of singular functions and singular values such that the convolutions of this function with the singular functions on a specified domain are the solutions to the inhomogeneous differential equations for those singular functions. A numerical example is given to verify the proposed method. Besides the continuous-time SVD, a discrete-time SVD is also presented for the impulse response function, which is modeled using a Toeplitz matrix in the discrete system. The proposed method has broad applications in signal processing, dynamic system analysis, acoustic analysis, thermal analysis, and macroeconomic modeling.
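The discrete-time case can be sketched directly: sample the impulse response, assemble the lower-triangular Toeplitz convolution matrix, and apply a standard matrix SVD. This is a minimal numpy illustration, not the authors' implementation; the grid size and step are arbitrary, and h(t) = e⁻ᵗ stands in for the shifted kernel.

```python
import numpy as np

n, dt = 64, 0.1
t = np.arange(n) * dt
h = np.exp(-t)                       # sampled impulse response

# Lower-triangular Toeplitz matrix: y = H @ u approximates the causal
# convolution of h with an input u sampled on the same grid.
H = np.zeros((n, n))
for j in range(n):
    H[j:, j] = h[: n - j]
H *= dt

U, s, Vt = np.linalg.svd(H)
print("dominant singular values:", s[:3])
```

The columns of U and rows of Vt are the discrete analogues of the singular functions, and `(U * s) @ Vt` reconstructs H to machine precision.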

Keywords: singular value decomposition, impulse response function, Green’s function, Toeplitz matrix, Hankel matrix

Procedia PDF Downloads 156
8894 Statistical Physics Model of Seismic Activation Preceding a Major Earthquake

Authors: Daniel S. Brox

Abstract:

Starting from earthquake fault dynamic equations, a correspondence between earthquake occurrence statistics in a seismic region before a major earthquake and eigenvalue statistics of a differential operator whose bound state eigenfunctions characterize the distribution of stress in the seismic region is derived. Modeling these eigenvalue statistics with a 2D Coulomb gas statistical physics model, previously reported deviation of seismic activation earthquake occurrence statistics from Gutenberg-Richter statistics in time intervals preceding the major earthquake is derived. It also explains how statistical physics modeling predicts a finite-dimensional nonlinear dynamic system that describes real-time velocity model evolution in the region undergoing seismic activation and how this prediction can be tested experimentally.

Keywords: seismic activation, statistical physics, geodynamics, signal processing

Procedia PDF Downloads 23
8893 A Novel Software Model for Enhancement of System Performance and Security through an Optimal Placement of PMU and FACTS

Authors: R. Kiran, B. R. Lakshmikantha, R. V. Parimala

Abstract:

Secure operation of power systems requires monitoring of the system operating conditions. Phasor measurement units (PMUs) are devices that use synchronized signals from GPS satellites to provide the voltage and current phasors at a given substation. The optimal locations for the PMUs must be determined in order to avoid redundant use of PMUs. The objective of this paper is to make the system observable using a minimum number of PMUs, and to implement stability software on a 220 kV grid for on-line estimation of the power system transfer capability, based on voltage and thermal limitations, and for security monitoring. The software uses State Estimator (SE) and synchrophasor PMU data sets to determine the power system operational margin under normal and contingency conditions. It improves the security of the transmission system by continuously monitoring the operational margin, expressed in MW or in bus voltage angles, and alarms the operator if the margin violates a pre-defined threshold.
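The observability rule behind PMU placement is simple: a PMU at a bus measures that bus's voltage phasor and, via the measured line currents, the phasors of all adjacent buses. Minimum placement is usually posed as an integer program; the sketch below uses a greedy set-cover heuristic on a hypothetical 7-bus network (not the paper's 220 kV grid) just to illustrate the rule.

```python
EDGES = [(1, 2), (2, 3), (2, 6), (3, 4), (4, 5), (4, 7), (5, 6)]
BUSES = set(range(1, 8))

# A PMU at bus b directly observes b and every neighbor of b.
adj = {b: {b} for b in BUSES}
for u, v in EDGES:
    adj[u].add(v)
    adj[v].add(u)

def greedy_pmu_placement():
    unobserved, pmus = set(BUSES), []
    while unobserved:
        # pick the bus whose PMU would newly observe the most buses
        best = max(sorted(BUSES), key=lambda b: len(adj[b] & unobserved))
        pmus.append(best)
        unobserved -= adj[best]
    return pmus

pmus = greedy_pmu_placement()
print("PMU buses:", pmus)
```

Greedy set cover is only an approximation; an exact integer-programming formulation guarantees the true minimum, which matters on large grids.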

Keywords: state estimator (SE), flexible ac transmission systems (FACTS), optimal location, phasor measurement units (PMU)

Procedia PDF Downloads 412
8892 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, band-pass filters covering a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are then used to estimate the variance of the filtered signal and extract features that characterize the imagined motion. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches that decompose the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes using the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to reduce the computational cost of the processing step and make these systems more efficient without compromising classification accuracy. The proposal represents the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores, organizes them into a single vector, and uses that vector to train a global SVM classifier.
The public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (the resulting FFT matrix has a 68% smaller dimension than the original signal), it retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting that the FFT is efficient when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall classification rate compared to the commonly used filtering, from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10% and the reduction in computational cost indicate the potential of FFT-based EEG filtering in MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
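The filtering step described above can be sketched as follows: take the FFT of each EEG epoch and group the coefficients into sub-band blocks, which would then feed the per-band CSP/LDA chain. The sub-band layout (overlapping 4 Hz bands) and the epoch dimensions below are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def fft_subbands(epoch, fs, n_bands=33, fmax=40.0, width=4.0):
    """Decompose one EEG epoch (channels x samples) into FFT coefficient
    blocks, one per sub-band, as a cheap stand-in for an IIR filter bank.
    The band layout (overlapping 4 Hz bands spanning 0-fmax) is assumed."""
    coeffs = np.fft.rfft(epoch, axis=1)                # channels x freq bins
    freqs = np.fft.rfftfreq(epoch.shape[1], d=1.0 / fs)
    starts = np.linspace(0.0, fmax - width, n_bands)   # band lower edges
    bands = []
    for f0 in starts:
        sel = (freqs >= f0) & (freqs < f0 + width)
        bands.append(coeffs[:, sel])                   # complex coeffs per band
    return bands

rng = np.random.default_rng(0)
epoch = rng.standard_normal((22, 500))                 # 22 channels, 2 s at 250 Hz
bands = fft_subbands(epoch, fs=250.0)
print(len(bands), bands[0].shape)
```

Each block would then be passed (after an inverse transform or directly on coefficient variances) to its own CSP filter and LDA classifier, as the abstract describes.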

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 129
8891 Multiscale Connected Component Labelling and Applications to Scientific Microscopy Image Processing

Authors: Yayun Hsu, Henry Horng-Shing Lu

Abstract:

In this paper, a new method is proposed that extends connected component labeling from binary images to multi-scale modeling of images. By using adaptive thresholds over multi-scale attributes, this approach minimizes the possibility of missing important components with weak intensities. In addition, the computational cost of this approach remains similar to that of typical component labeling. The methodology is then applied to grain boundary detection and Drosophila Brainbow neuron segmentation, demonstrating the feasibility of the proposed approach for the analysis of challenging microscopy images in scientific discovery.
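The idea of labeling at several thresholds so that weak components are not missed can be sketched as follows; the flood-fill labeler and the fixed threshold list are illustrative stand-ins for the paper's adaptive multi-scale attributes:

```python
import numpy as np
from collections import deque

def label_binary(mask):
    """4-connected component labelling of a boolean image via BFS flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        count += 1
        labels[i, j] = count
        q = deque([(i, j)])
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count

def multiscale_label(image, thresholds):
    """Label at several thresholds so weak-intensity components survive."""
    return {t: label_binary(image >= t) for t in thresholds}

img = np.zeros((8, 8))
img[1:3, 1:3] = 0.9   # bright component
img[5:7, 5:7] = 0.3   # weak component, lost at a high threshold
results = multiscale_label(img, thresholds=[0.5, 0.2])
print({t: n for t, (lab, n) in results.items()})
```

A single threshold of 0.5 finds only the bright component; adding the lower scale recovers the weak one, which is the failure mode the multi-scale approach targets.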

Keywords: microscopic image processing, scientific data mining, multi-scale modeling, data mining

Procedia PDF Downloads 440
8890 Data Access, AI Intensity, and Scale Advantages

Authors: Chuping Lo

Abstract:

This paper presents a simple model demonstrating that ceteris paribus countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Therefore, large countries that inherently have greater data resources tend to have higher incomes than smaller countries, such that the former may be more hesitant than the latter to liberalize cross-border data flows to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade as they are better able to utilize global data.

Keywords: digital intensity, digital divide, international trade, economies of scale

Procedia PDF Downloads 69
8889 Preserving Urban Cultural Heritage with Deep Learning: Color Planning for Japanese Merchant Towns

Authors: Dongqi Li, Yunjia Huang, Tomo Inoue, Kohei Inoue

Abstract:

With urbanization, urban cultural heritage is facing the impact and destruction of modernization. Many historical areas are losing their historical information and regional cultural characteristics, so it is necessary to carry out systematic color planning for historical areas under conservation. Japan was an early adopter of urban color planning and has a systematic approach to it. Hence, this paper selects five merchant towns from Japan's category of important traditional building preservation areas as the subject of this study, to explore the color structure and emotion of this type of historic area. First, an image semantic segmentation method identifies the buildings, roads, and landscape environments, and their color data are extracted for color composition and emotion analysis to summarize their common features. Second, keywords are extracted from the obtained Internet evaluations by natural language processing. The correlation analysis of the color structure and keywords provides a valuable reference for conservation decisions in these historic towns. The paper also combines the color structure and Internet evaluation results with generative adversarial networks to generate predicted images of color structure improvements and color improvement schemes. The methods and conclusions of this paper can provide new ideas for the digital management of environmental colors in historic districts and a valuable reference for the inheritance of local traditional culture.

Keywords: historic districts, color planning, semantic segmentation, natural language processing

Procedia PDF Downloads 89
8888 Optimization of Robot Motion Planning Using Biogeography Based Optimization (BBO)

Authors: Jaber Nikpouri, Arsalan Amralizadeh

Abstract:

In robotic manipulators, the trajectory should be optimal so that the torque of the robot can be minimized in order to save power. This paper presents an optimal path planning scheme for a robotic manipulator. Recently, techniques based on metaheuristics from natural computing, mainly evolutionary algorithms (EAs), have been successfully applied to a large number of robotic applications. In this paper, an improved BBO algorithm is used to minimize the objective function in the presence of different obstacles. The simulation results show that the proposed optimal path planning method has satisfactory performance.
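A minimal sketch of the standard BBO migration-and-mutation loop is given below, applied to a simple quadratic objective as a stand-in for the manipulator's torque/path cost. The rates and parameters are illustrative and do not reflect the paper's improved variant:

```python
import numpy as np

def bbo_minimize(cost, dim, pop=30, iters=200, lo=-1.0, hi=1.0, seed=0):
    """Minimal biogeography-based optimization sketch: habitats are ranked
    by cost; good habitats emigrate solution features to poor ones, with a
    small random mutation rate and elitism on the best habitat."""
    rng = np.random.default_rng(seed)
    H = rng.uniform(lo, hi, (pop, dim))
    for _ in range(iters):
        f = np.array([cost(h) for h in H])
        H = H[np.argsort(f)]                  # best habitat first
        mu = np.linspace(1.0, 0.0, pop)       # emigration rate (best emits most)
        lam = 1.0 - mu                        # immigration rate
        new = H.copy()
        for i in range(pop):
            for d in range(dim):
                if rng.random() < lam[i]:
                    # roulette-select an emigrating habitat weighted by mu
                    j = rng.choice(pop, p=mu / mu.sum())
                    new[i, d] = H[j, d]
                if rng.random() < 0.02:       # mutation
                    new[i, d] = rng.uniform(lo, hi)
        new[0] = H[0]                         # elitism
        H = new
    f = np.array([cost(h) for h in H])
    return H[np.argmin(f)], float(f.min())

best, val = bbo_minimize(lambda x: np.sum(x**2), dim=3)
print(best, val)
```

In the path planning setting, each habitat would encode via-points of the trajectory, and the cost would add an obstacle-penetration penalty to path length or torque.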

Keywords: biogeography-based optimization, path planning, obstacle detection, robotic manipulator

Procedia PDF Downloads 307
8887 Age Estimation Using Atlas Method with Orthopantomogram and Digital Tracing on Lateral Cephalogram

Authors: Astika Swastirani

Abstract:

Chronological age can be estimated from the stage of growth and development of the teeth on an orthopantomogram and from mandibular remodeling on a lateral cephalogram. Mandibular morphological change, associated with size and remodeling during growth, is a strong indicator for age estimation, and these changes can be observed on a lateral cephalogram. Objective: To test the difference between chronological age and age estimated using the orthopantomogram (dental age) and the lateral cephalogram (skeletal age). Methods: The sample consisted of 100 medical records, 100 digital orthopantomograms, and 100 digital lateral cephalograms belonging to 50 males and 50 females from the Airlangga University hospital of dentistry. Orthopantomograms were matched against the London atlas, and lateral cephalograms were assessed by digital tracing. The difference between dental age and skeletal age was analyzed with a paired t-test. Result: In the paired t-test between chronological age and dental age, the p-value was 0.002 (p < 0.05) in males and 0.605 (p > 0.05) in females. In the paired t-test between chronological age and skeletal age, the p-value was 0.000 (p < 0.05) for the Condylion-Gonion, Gonion-Gnathion, and Condylion-Gnathion lengths in males; in females, the p-values were 0.000 for Condylion-Gonion length, 0.040 for Condylion-Gnathion length, and 0.493 for Gonion-Gnathion length. Conclusion: The orthopantomogram with the London atlas, and the lateral cephalogram with the Gonion-Gnathion variable, can be used for age estimation in females. The orthopantomogram with the London atlas, and the lateral cephalogram with the Condylion-Gonion, Gonion-Gnathion, and Condylion-Gnathion variables, cannot be used for age estimation in males.
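The paired t-test used above can be sketched as follows. The age values are hypothetical, and the two-sided p-value uses a normal approximation rather than the exact t distribution (adequate for sample sizes around 50, as in the study):

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched samples, with a two-sided p-value
    from the normal approximation 2*(1 - Phi(|t|))."""
    n = len(x)
    d = [a - b for a, b in zip(x, y)]
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance of differences
    t = mean / math.sqrt(var / n)
    p = 1 - math.erf(abs(t) / math.sqrt(2))           # two-sided, normal approx.
    return t, p

# hypothetical example: chronological vs. estimated dental ages (years)
chrono = [12.1, 13.4, 14.0, 15.2, 16.8, 12.9, 14.7, 15.9, 13.1, 16.2]
dental = [12.5, 13.1, 14.6, 15.0, 17.2, 13.4, 14.5, 16.3, 13.0, 16.6]
t, p = paired_t(chrono, dental)
print(round(t, 3), round(p, 3))
```

A p-value above 0.05, as in the female dental-age comparison, means the estimated ages are statistically indistinguishable from the chronological ages, which is what makes a method usable for age estimation.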

Keywords: age estimation, chronological age, dental age, skeletal age

Procedia PDF Downloads 170
8886 A Benchmark System for Testing Medium Voltage Direct Current (MVDC-CB) Robustness Utilizing Real Time Digital Simulation and Hardware-In-Loop Theory

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

The integration of green energy resources is a major focus, and the role of Medium Voltage Direct Current (MVDC) systems is expanding rapidly. However, the protection of MVDC systems against DC faults is a challenge with consequences for reliable and safe grid operation. This challenge reveals the need for MVDC circuit breakers (MVDC CBs), which are still in the infancy of their development. As a result, there is a lack of standards for MVDC CBs, including thresholds for acceptable power losses and operating speed. To establish a baseline for comparison purposes, a benchmark system for testing future MVDC CBs is vital. The literature generally gives only the timing sequence of each switch and emphasizes the topology, without an in-depth study of the DCCB control algorithm, as circuit breaker control systems are not yet systematic. A digital testing benchmark is designed for proof-of-concept simulation studies using software models; it can validate studies based on real-time digital simulators and Transient Network Analyzer (TNA) models. The proposed experimental setup acquires data from accurate sensors installed on the tested MVDC CB and, through general-purpose input/outputs (GPIO) from the microcontroller and PC, supports prototype studies in laboratory-based models using Hardware-in-the-Loop (HIL) equipment connected to real-time digital simulators. The improved control algorithm of the circuit breaker can reduce the peak fault current and avoid arc reignition, helping the coordination of DCCBs in relay protection. Moreover, several research gaps are identified regarding case studies and evaluation approaches.
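A toy version of the kind of software model such a benchmark would exercise is sketched below: an RL fault loop in which the breaker, after a trip delay, inserts a counter-voltage that forces the fault current to zero. All parameter values are illustrative, not benchmark specifications:

```python
import numpy as np

def dc_fault_current(v_dc=10e3, R=0.5, L=5e-3, trip_t=2e-3, v_cb=15e3,
                     dt=1e-6, t_end=10e-3):
    """Toy RL model of a DC fault: the current rises with L/R dynamics
    until the breaker inserts a counter-voltage v_cb > v_dc at trip_t,
    driving the current down to zero (Euler integration)."""
    t = np.arange(0.0, t_end, dt)
    i = np.zeros_like(t)
    for k in range(1, len(t)):
        v = v_dc - (v_cb if t[k] >= trip_t else 0.0)
        di = (v - R * i[k - 1]) / L
        i[k] = max(0.0, i[k - 1] + dt * di)  # current cannot reverse through the CB
    return t, i

t, i = dc_fault_current()
print(round(float(i.max()), 1), float(i[-1]))
```

Varying the trip delay in such a model shows directly how a faster control algorithm reduces the peak fault current, which is one of the quantities a DCCB benchmark would score.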

Keywords: DC circuit breaker, hardware-in-the-loop, real time digital simulation, testing benchmark

Procedia PDF Downloads 81
8885 Training for Digital Manufacturing: A Multilevel Teaching Model

Authors: Luís Rocha, Adam Gąska, Enrico Savio, Michael Marxer, Christoph Battaglia

Abstract:

The changes observed in recent years in the field of manufacturing and production engineering, popularly known as the "Fourth Industrial Revolution", build on achievements in different areas of computer science, introducing new solutions at almost every stage of the production process: mass customization, cloud computing, knowledge-based engineering, virtual reality, rapid prototyping, and virtual models of measuring systems, to mention just a few. To effectively speed up the production process and make it more flexible, it is necessary to tighten the bonds connecting individual stages of the production process and to raise the awareness and knowledge of employees in each sector about the nature and specificity of work in the other stages. Discovering and developing a suitable education method, adapted to the specificities of each stage of the production process, is therefore a crucial issue for properly exploiting the potential of the fourth industrial revolution. For this reason, the project "Train4Dim" (T4D) intends to develop comprehensive training material for digital manufacturing, including content for design, manufacturing, and quality control, with a focus on coordinate metrology and portable measuring systems. In this paper, the authors present an approach to using an active learning methodology for digital manufacturing. The main objective of T4D is to develop a multi-degree educational approach (from apprenticeship up to master's degree studies) that can be adapted to different teaching levels. The process of creating the underlying methodology is also described. The paper shares the steps taken to achieve the aims of the project (a training model for digital manufacturing): 1) surveying the stakeholders, 2) defining the learning aims, 3) producing all contents and curriculum, 4) training the tutors, and 5) running pilot courses for testing and improvement.

Keywords: learning, Industry 4.0, active learning, digital manufacturing

Procedia PDF Downloads 100
8884 Navigating Disruption: Key Principles and Innovations in Modern Management for Organizational Success

Authors: Ahmad Haidar

Abstract:

This research paper investigates the concept of modern management, concentrating on the development of managerial practices and the adoption of innovative strategies in response to the fast-changing business landscape driven by Artificial Intelligence (AI). The study begins by examining the historical context of management theories, tracing the progression from classical to contemporary models, and identifying key drivers of change. Through a comprehensive review of existing literature and case studies, this paper provides valuable insights into the principles and practices of modern management, offering a roadmap for organizations aiming to navigate the complexities of the contemporary business world. The paper examines the growing role of digital technology in modern management, focusing on incorporating AI, machine learning, and data analytics to streamline operations and facilitate informed decision-making. Moreover, the research highlights the emergence of new principles, such as adaptability, flexibility, public participation, trust, transparency, and a digital mindset, as crucial components of modern management. The role of business leaders is also investigated by studying contemporary leadership styles, such as transformational, situational, and servant leadership, emphasizing the significance of emotional intelligence, empathy, and collaboration in fostering a healthy organizational culture. Furthermore, the research delves into the crucial role of environmental sustainability, corporate social responsibility (CSR), and corporate digital responsibility (CDR), as organizations strive to balance economic growth with ethical considerations and long-term viability. The primary research question for this study is: "What are the key principles, practices, and innovations that define modern management, and how can organizations effectively implement these strategies to thrive in the rapidly changing business landscape?"
The research contributes to a comprehensive understanding of modern management by examining its historical context, the impact of digital technologies, the importance of contemporary leadership styles, and the role of CSR and CDR in today's business landscape.

Keywords: modern management, digital technology, leadership styles, adaptability, innovation, corporate social responsibility, organizational success, corporate digital responsibility

Procedia PDF Downloads 69
8883 Albumin-Induced Turn-on Fluorescence in Molecular Engineered Fluorescent Probe for Biomedical Application

Authors: Raja Chinnappan, Huda Alanazi, Shanmugam Easwaramoorthi, Tanveer Mir, Balamurugan Kanagasabai, Ahmed Yaqinuddin, Sandhanasamy Devanesan, Mohamad S. AlSalhi

Abstract:

Serum albumin (SA) is an abundant water-soluble protein in plasma. It helps maintain the health of living organisms, supporting proper liver function, kidney function, and plasma osmolality in the body. Low levels of serum albumin are an indication of liver failure and chronic hepatitis, so a low-cost, accurate, and rapid method for its detection is important. In this study, we designed a fluorescent probe, triphenylamine rhodanine-3-acetic acid (mRA), which triggers a fluorescence signal upon binding to serum albumin. mRA is a bifunctional molecule with twisted intramolecular charge transfer (TICT)-induced emission characteristics. An aqueous solution of mRA shows an insignificant fluorescence signal; however, when mRA binds to SA, it undergoes TICT and turns on the fluorescence emission. An SA dose-dependent fluorescence study was performed, and the limit of detection was found to be below the nanogram-per-milliliter level. The specific binding of SA was tested in a cross-reactivity study using structurally or functionally similar proteins.
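A dose-dependent calibration such as the one described above is commonly summarized by a linear fit of the low-concentration region and a 3-sigma limit of detection. The sketch below uses invented numbers purely to illustrate the calculation, not data from the study:

```python
import numpy as np

# Hypothetical turn-on calibration: fluorescence signal vs. SA concentration.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # ng/mL (illustrative)
signal = np.array([2.0, 6.1, 10.2, 18.0, 33.9, 66.2])  # a.u.  (illustrative)
blank_sd = 0.02                                         # sd of repeated blanks

# linear calibration and the conventional LOD = 3 * sd_blank / slope
slope, intercept = np.polyfit(conc, signal, 1)
lod = 3 * blank_sd / slope
print(round(float(slope), 2), round(float(lod), 4))
```

With these invented numbers the estimated LOD falls below 0.01 ng/mL, illustrating how a steep turn-on response and a quiet blank together yield sub-nanogram-per-milliliter detection.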

Keywords: serum albumin, fluorescent sensing probe, liver diseases, twisted intramolecular charge transfer

Procedia PDF Downloads 22
8882 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential for human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, the ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is identified. However, the detection accuracy of current ECG-based methods is insufficient due to the high variability of an individual's heartbeats at different instants of time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by moving a neighborhood window along the ECG signal. At each instant, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to obtain the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of QRS complex detection.
Supervised recognition methods are then designed, using minimum-distance-to-mean and Bayesian classifiers, to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (PTB) database showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
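The local-roughness feature described above can be sketched as a 1D local binary pattern: compare each sample's neighbors with the center value, weight the resulting bits by powers of two, and histogram the codes. The window size and binning below are assumptions, not the paper's exact values:

```python
import numpy as np

def local_roughness_hist(signal, radius=4):
    """1D local-binary-pattern sketch of the 'local roughness' descriptor:
    at each sample, the 2*radius neighbours are thresholded against the
    centre, binary-weighted, and the resulting codes are histogrammed."""
    n = len(signal)
    codes = []
    for t in range(radius, n - radius):
        neigh = np.r_[signal[t - radius:t], signal[t + 1:t + radius + 1]]
        bits = (neigh >= signal[t]).astype(int)
        codes.append(int((bits * 2 ** np.arange(2 * radius)).sum()))
    hist, _ = np.histogram(codes, bins=32, range=(0, 2 ** (2 * radius)))
    return hist / hist.sum()                 # normalised per-subject descriptor

rng = np.random.default_rng(1)
beat = np.sin(np.linspace(0, 2 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
h = local_roughness_hist(beat)
print(h.shape, round(float(h.sum()), 6))
```

Because the descriptor is built pointwise along the whole signal, it needs no fiducial landmarks, which is why the feature does not depend on QRS detection accuracy.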

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 405