Search results for: Adaptive Discharge Time Control.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9950


920 Identifying the Strength of Cyclones and Earthquakes Requiring Military Disaster Response

Authors: Chad A. Long

Abstract:

The United States military now commonly responds to complex humanitarian emergencies and natural disasters around the world. From catastrophic earthquakes in Haiti to typhoons devastating the Philippines, U.S. military assistance is requested when an event exceeds the local government's ability to assist the population. This study assesses the characteristics of catastrophes that surpass a nation's individual ability to respond to and recover from the event. The paper begins with a historical summary of military aid and then analyzes over 40 years of United States military humanitarian response. Over 300 military operations were reviewed and coded based on the nature of the disaster. This in-depth study reviewed the U.S. military's deployment events for cyclones and earthquakes to determine the strength of natural disaster that requires external assistance. The climatological data for cyclone landfall and magnitude data for earthquake epicenters were identified, grouped into regions, and analyzed for time-based trends. The results showed that foreign countries will likely request the U.S. military for cyclones with wind speeds greater than or equal to 125 miles per hour and earthquakes of magnitude 7.4 or higher. The results of this study will assist the geographic combatant commands in determining future military response requirements.
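For readers who want to operationalize the reported thresholds, a minimal sketch follows; the threshold values come from the abstract, while the function and field names are illustrative only and not part of the study.

```python
# Illustrative screening rule based on the thresholds reported in the abstract
# (cyclone winds >= 125 mph, earthquake magnitude >= 7.4). Function and field
# names are hypothetical, not part of the study.

def likely_requires_military_response(event_type: str, intensity: float) -> bool:
    """Return True if the event meets the historical thresholds for U.S. military aid requests."""
    thresholds = {"cyclone": 125.0,     # sustained wind speed, miles per hour
                  "earthquake": 7.4}    # moment magnitude
    return intensity >= thresholds[event_type]

print(likely_requires_military_response("cyclone", 130))      # True
print(likely_requires_military_response("earthquake", 6.9))   # False
```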

Keywords: Cyclones, earthquakes, natural disasters, military.

919 Integrating Artificial Neural Network and Taguchi Method on Constructing the Real Estate Appraisal Model

Authors: Mu-Yen Chen, Min-Hsuan Fan, Chia-Chen Chen, Siang-Yu Jhong

Abstract:

In recent years, real estate prediction and valuation have been topics of discussion in many developed countries. Improper hype created by investors leads to fluctuating real estate prices, affecting many consumers' ability to purchase their own homes. Therefore, scholars from various countries have conducted research on real estate valuation and prediction. Using the back-propagation neural network that has become popular in recent years together with the orthogonal array of the Taguchi method, this study aimed to find the optimal parameter combination among the levels of the orthogonal array, so that the artificial neural network obtained the most accurate results. The experimental results also demonstrated that the method presented in this study performed better than traditional machine learning. Finally, the results showed that the proposed model had the best predictive effect and could significantly reduce the time cost of simulation runs. The best predictive results could be found more efficiently with fewer experiments, so users could predict a real estate transaction price that is not far from current actual prices.
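A minimal sketch of the core idea, a hyperparameter search over a Taguchi L9(3^3) orthogonal array driving a back-propagation network, is shown below; the data, factor choices, and scikit-learn model are assumptions for illustration, not the authors' exact setup.

```python
# Sketch: searching MLP hyper-parameters over a Taguchi L9 orthogonal array
# instead of a full 3x3x3 grid. Data, factor levels and names are illustrative;
# the paper's exact network and factors are not specified here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 5))                       # stand-in property features
y = X @ np.array([3.0, -1.0, 2.0, 0.5, 1.5]) + 0.1 * rng.standard_normal(200)

hidden = [8, 16, 32]          # factor A: hidden-layer size
lr     = [1e-3, 1e-2, 1e-1]   # factor B: learning rate
alpha  = [1e-5, 1e-4, 1e-3]   # factor C: L2 penalty

# L9(3^3) orthogonal array: each row picks one level index per factor.
L9 = [(0,0,0),(0,1,1),(0,2,2),(1,0,1),(1,1,2),(1,2,0),(2,0,2),(2,1,0),(2,2,1)]

best = None
for a, b, c in L9:
    model = MLPRegressor(hidden_layer_sizes=(hidden[a],), learning_rate_init=lr[b],
                         alpha=alpha[c], max_iter=2000, random_state=0)
    score = cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error").mean()
    if best is None or score > best[0]:
        best = (score, hidden[a], lr[b], alpha[c])

print("best MAE %.4f with hidden=%d, lr=%g, alpha=%g" % (-best[0], *best[1:]))
```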

Keywords: Artificial Neural Network, Taguchi Method, Real Estate Valuation Model.

918 The Impacts of Local Decision Making on Customisation Process Speed across Distributed Boundaries: A Case Study

Authors: A. M. Qahtani, G. B. Wills, A. M. Gravell

Abstract:

Communicating and managing customers' requirements in software development projects plays a vital role in the software development process. While this is difficult to do locally, it is even more difficult to communicate these requirements across distributed boundaries and to convey them to multiple distributed customers. This paper discusses the communication of multiple distributed customers' requirements in the context of customised software products. The main purpose is to understand the challenges of communicating and managing customisation requirements across distributed boundaries. We propose a model for Communicating Customisation Requirements of Multi-Clients in a Distributed Domain (CCRD). Thereafter, we evaluate that model by presenting the findings of a case study conducted with a company running customisation projects for 18 distributed customers. Then, we compare the outputs of the real case process with the outputs of the CCRD model using simulation methods. Our conjecture is that the CCRD model can reduce the challenge of communicating requirements across distributed organisational boundaries, as well as the delays in decision making and in the overall customisation process time.

Keywords: Customisation Software Products, Global Software Engineering, Local Decision Making, Requirement Engineering, Simulation Model.

917 Investigating Crime Hotspot Places and their Implication to Urban Environmental Design: A Geographic Visualization and Data Mining Approach

Authors: Donna R. Tabangin, Jacqueline C. Flores, Nelson F. Emperador

Abstract:

Information is power. Geographic information science is an emerging field that is advancing the development of knowledge to further the understanding of the relationship of "place" with other disciplines such as crime. The researchers used crime data for the years 2004 to 2007 from the Baguio City Police Office to determine the incidence and actual locations of crime hotspots. A combined qualitative and quantitative research methodology was employed through extensive fieldwork and observation, geographic visualization with Geographic Information Systems (GIS) and Global Positioning Systems (GPS), and data mining. The paper discusses emerging geographic visualization and data mining tools and methodologies that can be used to generate baseline data for environmental initiatives such as urban renewal and rejuvenation. The study demonstrated that crime hotspots can be computed and were observed to occur in a few select places in the Central Business District (CBD) of Baguio City. It was observed that some characteristics of the hotspot places, such as physical design and milieu, may play an important role in creating opportunities for crime. A list of these environmental attributes was generated. This derived information may be used to guide the design or redesign of the City's urban environment in order to reduce crime and at the same time improve it physically.
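As a rough illustration of how hotspot cells can be computed from incident coordinates, the following sketch uses a simple 2D histogram; the coordinates, grid size, and cutoff are invented and stand in for the GIS/GPS workflow actually used in the study.

```python
# Sketch: grid-based hotspot detection from incident coordinates using a 2D
# histogram. Coordinates, grid size and the "hotspot" cutoff are illustrative;
# the study itself used GIS/GPS field data for Baguio City.
import numpy as np

rng = np.random.default_rng(1)
# Stand-in incident locations (e.g., projected x/y in metres within a CBD)
xy = rng.normal(loc=[500.0, 500.0], scale=120.0, size=(400, 2))

cell = 100.0                                    # grid cell size in metres
bins = [np.arange(0, 1001, cell)] * 2
counts, xedges, yedges = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins)

cutoff = np.percentile(counts[counts > 0], 90)  # top-decile cells flagged as hotspots
hot = np.argwhere(counts >= cutoff)
for i, j in hot:
    print(f"hotspot cell x=[{xedges[i]:.0f},{xedges[i+1]:.0f}) "
          f"y=[{yedges[j]:.0f},{yedges[j+1]:.0f}) incidents={int(counts[i, j])}")
```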

Keywords: Crime mapping, data mining, environmental design, geographic visualization, GIS.

916 Large Eddy Simulation of Compartment Fire with Gas Combustible

Authors: Mliki Bouchmel, Abbassi Mohamed Ammar, Kamel Geudri, Chrigui Mouldi, Omri Ahmed

Abstract:

The objective of this work is to use the Fire Dynamics Simulator (FDS) to investigate the behavior of a kerosene small-scale fire. FDS is a Computational Fluid Dynamics (CFD) tool developed specifically for fire applications. Throughout its development, FDS has been used to solve practical problems in fire protection engineering and, at the same time, to study fundamental fire dynamics and combustion. Predictions are based on Large Eddy Simulation (LES) with a Smagorinsky turbulence model: LES directly computes the large-scale eddies, while the sub-grid scale dissipative processes are modeled. This technique is the default turbulence model and was used in this study. The numerical predictions are validated by direct comparison of combustion output variables with experimental measurements. The effect of mesh size on the temperature evolution is investigated and an optimum grid size is suggested. The effect of opening width is also investigated. Temperature distributions and species flow are presented for different operating conditions. The effect of the composition of the fuel on atmospheric pollution is also a focus of this work. Good predictions are obtained when the size of the computational cells within the fire compartment is less than 1/10th of the characteristic fire diameter.
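The grid criterion mentioned at the end can be checked with the characteristic fire diameter commonly used in FDS practice; the heat release rate and ambient values in this sketch are assumed, not taken from the paper.

```python
# Sketch: the grid-resolution check mentioned in the abstract, based on the
# characteristic fire diameter D* commonly used with FDS. The heat release
# rate and ambient values below are illustrative, not the paper's case.
def characteristic_fire_diameter(Q_kW, rho=1.204, cp=1.005, T_inf=293.0, g=9.81):
    """D* = (Q / (rho * cp * T_inf * sqrt(g)))**(2/5), Q in kW, result in metres."""
    return (Q_kW / (rho * cp * T_inf * (g ** 0.5))) ** 0.4

Q = 500.0                         # assumed heat release rate [kW]
D_star = characteristic_fire_diameter(Q)
dx_max = D_star / 10.0            # cell size < D*/10 inside the fire compartment
print(f"D* = {D_star:.3f} m, recommended max cell size = {dx_max:.3f} m")
```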

Keywords: Large eddy simulation, radiation, turbulence, combustion, pollution.

915 Increase of Organization in Complex Systems

Authors: Georgi Yordanov Georgiev, Michael Daly, Erin Gombos, Amrit Vinod, Gajinder Hoonjan

Abstract:

Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems, yet such a measure is increasingly needed in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in physics, a new measure for the quantity of organization and the rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied here to a model system: the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and constraint and curvature minimization, with an attractor at the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design, and predict the future behavior of complex systems so that they achieve the highest rates of self-organization and thereby improve their quality. It can be applied to other complex systems in physics, chemistry, biology, ecology, economics, cities, network theory, and other fields where complex systems are present.

Keywords: Organization, self-organization, complex system, complexification, quantitative measure, principle of least action, principle of stationary action, attractor, progressive development, acceleration, stochastic.

914 Temporal Signal Processing by Inference Bayesian Approach for Detection of Abrupt Variation of Statistical Characteristics of Noisy Signals

Authors: Farhad Asadi, Hossein Sadati

Abstract:

In fields such as neuroscience, and especially in the cognitive modeling of mental processes, handling uncertainty in the temporal structure of a signal is vital. In this paper, Bayesian online inference for estimating the location of change points in a signal is constructed. The method separates the observed signal into independent series and studies the change and variation of the data regime locally, together with the related statistical characteristics. We give conditions for simulations of the method when the data characteristics of the signals vary, and provide empirical evidence of its performance. It is verified that the correlation between the series around the change-point location and signal characteristics, such as the signal-to-noise ratio and the mean value of the signal, strongly affects how accurately the change-point location is found. One of the main contributions of this study is the representation of how these statistical characteristics of the signal influence the detection of abrupt variation. Two different simulation structures are used: in the first, one abrupt change with variable position is considered in a temporal section of the signal; in the second, multiple variations are considered. Finally, the influence of the statistical characteristics on the estimated change-point location is explained in detail in the simulation results with different artificial signals.
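A much-simplified sketch of change-point estimation is given below: the posterior over a single change-point location for a Gaussian signal with known noise variance, using plug-in segment means (a profile-likelihood approximation with a flat prior). The paper's online, multi-change formulation is not reproduced, and all signal parameters are illustrative.

```python
# Simplified sketch: posterior over a single change-point location for a noisy
# signal with a shift in mean and known noise variance. The paper's online,
# multi-change formulation is not reproduced; values below are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, true_tau, sigma = 200, 120, 1.0
x = np.concatenate([rng.normal(0.0, sigma, true_tau),
                    rng.normal(1.5, sigma, n - true_tau)])

log_post = np.full(n, -np.inf)
for tau in range(5, n - 5):                    # candidate change-point locations
    m1, m2 = x[:tau].mean(), x[tau:].mean()    # plug-in segment means
    resid = np.concatenate([x[:tau] - m1, x[tau:] - m2])
    log_post[tau] = -0.5 * np.sum(resid ** 2) / sigma ** 2   # flat prior on tau

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("MAP change point:", int(np.argmax(post)), " true:", true_tau)
```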

Keywords: Time series, fluctuation in statistical characteristics, optimal learning.

913 System for Monitoring Marine Turtles Using Unstructured Supplementary Service Data

Authors: Luís Pina

Abstract:

The conservation of marine biodiversity keeps ecosystems in balance and ensures the sustainable use of resources. In this context, technological resources have been used for monitoring marine species to allow biologists to obtain data in real time. Different mobile applications have been developed for data collection for monitoring purposes, but these systems are designed to be used only on third-generation (3G) phones or smartphones with Internet access, and in rural parts of developing countries, Internet services and smartphones are scarce. Thus, the objective of this work is to develop a system to monitor marine turtles using Unstructured Supplementary Service Data (USSD), which users can access through basic mobile phones. The system aims to improve the data collection mechanism and enhance the effectiveness of current systems for monitoring sea turtles using any type of mobile device without Internet access. The system will be able to report information related to the biological activities of marine turtles. It will also be used as a platform to assist marine conservation entities in receiving reports of illegal sales of sea turtles. The system can also be utilized as an educational tool for communities, providing knowledge and allowing the inclusion of communities in the process of monitoring marine turtles. Therefore, this work may contribute information to decision-making and the implementation of contingency plans for marine conservation programs.

Keywords: GSM, marine biology, marine turtles, USSD.

912 Incorporation Mechanism of Stabilizing Simulated Lead-Laden Sludge in Aluminum-Rich Ceramics

Authors: Xingwen Lu, Kaimin Shih

Abstract:

This study investigated a strategy of blending lead-laden sludge with Al-rich precursors to reduce the release of metals from the stabilized products. Using PbO as the simulated lead-laden sludge sintered with γ-Al2O3 at Pb:Al molar ratios of 1:2 and 1:12, PbAl2O4 and PbAl12O19, respectively, were formed as final products during the sintering process. By firing the PbO + γ-Al2O3 mixtures with different Pb/Al molar ratios at 600 to 1000 °C, the lead transformation was determined through X-ray diffraction (XRD) data. In the Pb/Al molar ratio 1/2 system, the formation of PbAl2O4 is initiated at 700 °C, but effective formation was observed above 750 °C. An intermediate phase, Pb9Al8O21, was detected in the temperature range of 800-900 °C. However, different incorporation behavior was observed when sintering PbO with Al-rich precursors at a Pb/Al molar ratio of 1/12 during the formation of PbAl12O19. In the sintering process, the effects of both temperature and time on the formation of the PbAl2O4 and PbAl12O19 phases were estimated. Finally, a prolonged leaching test modified from the U.S. Environmental Protection Agency's toxicity characteristic leaching procedure (TCLP) was used to evaluate the durability of the PbO, Pb9Al8O21, PbAl2O4 and PbAl12O19 phases. Comparison of the leaching results of the four phases demonstrated the higher intrinsic resistance of PbAl12O19 against acid attack.

Keywords: Sludge, lead, stabilization, leaching behavior.

911 The Evaluation of Antioxidant and Antimicrobial Activities of Essential Oil and Aqueous, Methanol, Ethanol, Ethyl Acetate and Acetone Extract of Hypericum scabrum

Authors: A. Heshmati, M. Y. Alikhani, M. T. Godarzi, M. R. Sadeghimanesh

Abstract:

Herbal essential oils and extracts are a good source of natural antioxidant and antimicrobial compounds, and Hypericum is one of the potential sources of these compounds. In this study, the antioxidant and antimicrobial activity of the essential oil and of the aqueous, methanol, ethanol, ethyl acetate and acetone extracts of Hypericum scabrum was assessed. Flowers of Hypericum scabrum were collected from the mountains surrounding Hamadan province and, after drying in the shade, the essential oil of the plant was extracted with a Clevenger apparatus, and the water, methanol, ethanol, ethyl acetate and acetone extracts were obtained by the maceration method. Essential oil compounds were identified using GC-MS. The Folin-Ciocalteu and aluminum chloride (AlCl3) colorimetric methods were used to measure the amounts of phenolic acids and flavonoids, respectively. Antioxidant activity was evaluated using DPPH and FRAP assays. The minimum inhibitory concentration (MIC) and the minimum bactericidal/fungicidal concentration (MBC/MFC) of the essential oil and extracts were evaluated against Staphylococcus aureus, Bacillus cereus, Pseudomonas aeruginosa, Salmonella typhimurium, Aspergillus flavus and Candida albicans. The essential oil yield was 0.35%; the lowest and highest extract yields were obtained for the ethyl acetate and water extracts, respectively. The main component of the essential oil was α-pinene (46.35%). The methanol extract had the highest phenolic acid (95.65 ± 4.72 µg gallic acid equivalent/g dry plant) and flavonoid (25.39 ± 2.73 µg quercetin equivalent/g dry plant) contents. The percentage of DPPH radical inhibition showed a positive correlation with the concentration of essential oil or extract. The methanol and ethanol extracts had the highest DPPH radical inhibitory activity. The essential oil and extracts of Hypericum showed antimicrobial activity against the microorganisms studied in this research. The MIC and MBC values for the essential oil were in the range of 25-25.6 and 25-50 μg/mL, respectively; for the extracts, these values were 1.5625-100 and 3.125-100 μg/mL, respectively. The methanol extract had the highest antimicrobial activity. The essential oil and extracts of Hypericum scabrum, especially the methanol extract, have good antimicrobial and antioxidant activity and can be used to control oxidation and inhibit the growth of pathogenic and spoilage microorganisms. In addition, they can be used as a substitute for synthetic antioxidant and antimicrobial compounds.
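The DPPH result can be illustrated with the standard percent-inhibition calculation used in such assays; the absorbance values in this sketch are invented, not the measured data.

```python
# Sketch of the standard DPPH radical-scavenging calculation used in such
# assays: percent inhibition from control and sample absorbances. The
# absorbance values below are made up for illustration.
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent DPPH radical inhibition = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

for conc, a in [(50, 0.71), (100, 0.52), (200, 0.33)]:   # extract conc. (ug/mL), absorbance
    print(f"{conc:>4} ug/mL -> {dpph_inhibition(0.85, a):.1f} % inhibition")
```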

Keywords: Antimicrobial, antioxidant, extract, hypericum.

910 A Comparative Analysis of Heuristics Applied to Collecting Used Lubricant Oils Generated in the City of Pereira, Colombia

Authors: Diana Fajardo, Sebastián Ortiz, Oscar Herrera, Angélica Santis

Abstract:

Currently, a problem is arising in Colombia related to collecting the used lubricant oils generated by the growing vehicle fleet. This situation does not allow a proper disposal of this type of waste, which in turn results in a negative impact on the environment. Therefore, through a comparative analysis of various heuristics, the best solution to the VRP (Vehicle Routing Problem) was selected by comparing costs and times for the collection of used lubricant oils in the city of Pereira, Colombia, since there are no management companies engaged in the direct administration of the collection of this pollutant. To achieve this aim, six proposals based on two-phase solution methods were discussed. First, the previously identified generator points of the residue were assigned to groups. Proposals one and four are based on the closeness of points, proposals two and five use the scanning method, and proposals three and six consider the capacity restriction of the collection vehicle. Subsequently, the routes were developed, in the first three proposals by the Clarke and Wright savings algorithm and in the remaining proposals by the Traveling Salesman optimization mathematical model. After applying these techniques, a comparative analysis of the results was performed, and it was determined which of the proposals presented the best values in terms of distance, cost and travel time.
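A compact sketch of the Clarke and Wright savings heuristic used in the first three proposals appears below; the distance matrix, demands, and vehicle capacity are illustrative, and route merges are restricted to route ends (no route reversal) for brevity.

```python
# Sketch of the Clarke-Wright savings heuristic with a capacity constraint.
# The distance matrix and demands are illustrative; node 0 is the depot.
import numpy as np

d = np.array([[0, 12, 18, 22, 25],      # symmetric distance matrix, node 0 = depot
              [12, 0, 10, 20, 30],
              [18, 10, 0, 15, 24],
              [22, 20, 15, 0, 9],
              [25, 30, 24, 9, 0]], dtype=float)
demand = [0, 4, 3, 5, 2]                # litres of used oil to collect at each point
capacity = 8                            # vehicle capacity

n = len(d)
routes = {i: [i] for i in range(1, n)}           # start with one route per customer
route_of = {i: i for i in range(1, n)}
load = {i: demand[i] for i in range(1, n)}

savings = sorted(((d[0, i] + d[0, j] - d[i, j], i, j)
                  for i in range(1, n) for j in range(i + 1, n)), reverse=True)

for s, i, j in savings:
    ri, rj = route_of[i], route_of[j]
    if ri == rj or load[ri] + load[rj] > capacity:
        continue
    # merge only if i and j sit at the joinable ends of their routes
    if routes[ri][-1] == i and routes[rj][0] == j:
        merged = routes[ri] + routes[rj]
    elif routes[rj][-1] == j and routes[ri][0] == i:
        merged = routes[rj] + routes[ri]
    else:
        continue
    routes[ri] = merged
    load[ri] += load[rj]
    del routes[rj], load[rj]
    for node in merged:
        route_of[node] = ri

for r in routes.values():
    cost = d[0, r[0]] + sum(d[a, b] for a, b in zip(r, r[1:])) + d[r[-1], 0]
    print("route: 0 ->", " -> ".join(map(str, r)), "-> 0   cost:", cost)
```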

Keywords: Heuristics, optimization model, savings algorithm, used vehicular oil, VRP.

909 An Efficient Backward Semi-Lagrangian Scheme for Nonlinear Advection-Diffusion Equation

Authors: Soyoon Bak, Sunyoung Bu, Philsu Kim

Abstract:

In this paper, a backward semi-Lagrangian scheme combined with the second-order backward difference formula is designed to calculate numerical solutions of nonlinear advection-diffusion equations. The primary aims of this paper are to remove any iteration process and to obtain an efficient algorithm with second-order convergence in time. To achieve these aims, we use the second-order central finite difference to approximate the diffusion term and B-spline approximations of degree 2 and 3 for the spatial discretization. For the temporal discretization, the second-order backward difference formula is applied. To calculate the numerical solution at the starting point of the characteristic curves, we use the error correction methodology recently developed by the authors. The proposed algorithm turns out to be completely iteration-free, which resolves the main weakness of the conventional backward semi-Lagrangian method. The adaptability of the proposed method is also indicated by numerical simulations for Burgers' equations. Throughout these numerical simulations, it is shown that the numerical results are in good agreement with the analytic solution and that the present scheme offers better accuracy than other existing numerical schemes.

Keywords: Semi-Lagrangian method, Iteration free method, Nonlinear advection-diffusion equation.

908 Artificial Intelligence-Based Chest X-Ray Test of COVID-19 Patients

Authors: Dhurgham Al-Karawi, Nisreen Polus, Shakir Al-Zaidi, Sabah Jassim

Abstract:

The management of COVID-19 patients based on chest imaging is emerging as an essential tool for evaluating the spread of the pandemic, which has gripped the global community. It has already been used to monitor the situation of COVID-19 patients who have issues with respiratory status. Due to the fast dispersal of the pandemic to all continents and communities, the use of chest imaging for medical triage of patients showing moderate to severe clinical COVID-19 features has increased. This article demonstrates the development of machine learning techniques for testing COVID-19 patients using Chest X-Ray (CXR) images in nearly real time, to distinguish COVID-19 infection with a significantly high level of accuracy. The testing performance has covered a combination of different datasets of CXR images of positive COVID-19 patients, patients with viral and bacterial infections, and people with a clear chest. The proposed AI scheme successfully distinguishes CXR scans of COVID-19 infected patients from CXR scans of viral and bacterial pneumonia as well as normal cases, with an average accuracy of 94.43%, sensitivity of 95%, and specificity of 93.86%. Predicted decisions are supported by visual evidence to help clinicians speed up the initial assessment of new suspected cases, especially in resource-constrained environments.
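A toy sketch in the spirit of the described pipeline (local binary pattern texture features feeding a classifier) is shown below; the images are synthetic stand-ins, and the feature and model choices are assumptions rather than the authors' exact scheme.

```python
# Sketch: texture features via a local binary pattern histogram feeding a
# simple classifier, in the spirit of the described pipeline (LBP/Gabor +
# machine learning). Images here are synthetic stand-ins, not CXR data.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(img, P=8, R=1.0):
    img = np.uint8(255 * (img - img.min()) / (img.max() - img.min() + 1e-12))
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)  # P+2 uniform bins
    return hist

rng = np.random.default_rng(3)
# Synthetic "images": two texture classes with different smoothness
smooth = [rng.normal(size=(64, 64)).cumsum(axis=1) for _ in range(30)]
rough  = [rng.normal(size=(64, 64)) for _ in range(30)]
X = np.array([lbp_histogram(im) for im in smooth + rough])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf").fit(X[::2], y[::2])          # even samples for training
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```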

Keywords: COVID-19, chest x-ray scan, artificial intelligence, texture analysis, local binary pattern transform, Gabor filter.

907 Study of Heat Transfer in the Poly Ethylene Fluidized Bed Reactor Numerically and Experimentally

Authors: Mahdi Hamzehei

Abstract:

In this research, heat transfer in a polyethylene fluidized bed reactor without reaction was studied experimentally and computationally at different superficial gas velocities. A multifluid Eulerian computational model incorporating the kinetic theory for solid particles was developed and used to simulate the heat-conducting gas-solid flows in a fluidized bed configuration. Momentum exchange coefficients were evaluated using the Syamlal-O'Brien drag functions. Temperature distributions of the different phases in the reactor were also computed. Good agreement was found between the model predictions and the experimentally obtained data for the bed expansion ratio as well as the qualitative gas-solid flow patterns. The simulation and experimental results showed that the gas temperature decreases as it moves upward in the reactor, while the solid particle temperature increases. The pressure drop and temperature distribution predicted by the simulations were in good agreement with the experimental measurements at superficial gas velocities higher than the minimum fluidization velocity. Also, the predicted time-averaged local voidage profiles were in reasonable agreement with the experimental results. The study showed that the computational model was capable of predicting the heat transfer and hydrodynamic behavior of gas-solid fluidized bed flows with reasonable accuracy.

Keywords: Gas-solid flows, fluidized bed, hydrodynamics, heat transfer, turbulence model, CFD.

906 Optical Limiting Characteristics of Core-Shell Nanoparticles

Authors: G. Vinitha, A. Ramalingam

Abstract:

TiO2 nanoparticles were synthesized by a hydrothermal method at 180 °C from a TiOSO4 aqueous solution with a concentration of 1 mol/L. The obtained products were coated with silica by means of a seeded polymerization technique for a coating time of 1440 minutes to obtain a well-defined TiO2@SiO2 core-shell structure. The uncoated and coated nanoparticles were characterized using X-ray diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FT-IR) to study their physico-chemical properties. Evidence from the XRD and FTIR results shows that SiO2 is homogeneously coated on the surface of the titania particles. The FTIR spectra show that an interaction exists between TiO2 and SiO2, resulting in the formation of Ti-O-Si chemical bonds at the interface of the TiO2 particles and the SiO2 coating layer. The nonlinear optical limiting properties of TiO2 and TiO2@SiO2 nanoparticles dispersed in ethylene glycol were studied at 532 nm using 5 ns Nd:YAG laser pulses. Three-photon absorption is responsible for the optical limiting characteristics of these nanoparticles, and the optical nonlinearity is enhanced in the core-shell structures compared with their single counterparts. This effective three-photon type absorption at this wavelength has potential application in fabricating optical limiting devices.

Keywords: Hydrothermal method, optical limiting devices, seeded polymerization technique, three-photon type absorption.

905 Idealization of Licca-chan and Barbie: Comparison of Two Dolls across the Pacific

Authors: Miho Tsukamoto

Abstract:

Since its initial creation in 1959, the Barbie doll has become a symbol of US society. Likewise, Licca-chan, a Japanese doll created in 1967, became a symbol of Japanese society. Prior to the introduction of Licca-chan, Barbie was already marketed in Japan, but its sales were dismal. Licca-chan (full name: Kayama Licca) is a plastic doll, made in a variety of sizes ranging from 21.0 cm to 29.0 cm, which many Japanese girls dream of having. For over 35 years, the manufacturer, Takara Co., Ltd., has sold over 48 million dolls and has produced doll houses, accessories, clothes, and Licca-chan video games for the Nintendo DS. Many first-generation Licca-chan consumers are still enamored with Licca-chan and go to Licca-chan House, in an amusement park, with their daughters. These people are called Licca-chan maniacs, as they enjoy touring the Licca-chan factory in Tohoku or purchasing various Licca-chan accessories. After the successful launch of Licca-chan into the Japanese market, a doll blending US and Japanese characteristics, JeNny, was later sold in the same Japanese market by Takara Co., Ltd. in 1982. This paper compares and analyzes these cultural iconic dolls, Barbie and Licca-chan. In fact, these dolls embody concepts of girls' dreams. Using Jean Baudrillard's concept of mythology, these dolls can be seen as idealized images of figures in products made for consumers, but at the same time consumers can view the products from different perspectives, which can cause controversy.

Keywords: Barbie, Dolls, JeNny, Idealization, Licca-chan.

904 Solving Transient Conduction and Radiation Using Finite Volume Method

Authors: Ashok K. Satapathy, Prerana Nashine

Abstract:

Radiative heat transfer in a participating medium was analyzed using the finite volume method. The radiative transfer equations are formulated for an absorbing, emitting and anisotropically scattering medium. The solution strategy is discussed and the conditions for computational stability are presented. The equations have been solved for a transient radiative medium and for transient radiation coupled with transient conduction. Results have been obtained for the irradiation and the corresponding heat fluxes in both cases, and the solutions can be used to determine the incident energy and the surface heat flux. Transient solutions were obtained for a slab transferring heat by conduction and by thermal radiation. The effect of heat conduction during the transient phase is to partially equalize the internal temperature distribution, and the solution procedure provides accurate temperature distributions in these regions. A finite volume procedure with variable space and time increments is used to solve the transient radiation equation. The medium in the enclosure absorbs, emits, and anisotropically scatters radiative energy. The incident radiation and the radiative heat fluxes are presented in graphical form. The phase function anisotropy plays a significant role in the radiative heat transfer when the boundary condition is non-symmetric.

Keywords: Participating media, finite volume method, radiation coupled with conduction, heat transfer.

903 A Novel VLSI Architecture of Hybrid Image Compression Model based on Reversible Blockade Transform

Authors: C. Hemasundara Rao, M. Madhavi Latha

Abstract:

Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. Furthermore, the discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, implemented over regions of interest (ROIs), is based on selecting the coefficients that belong to different transforms, depending on the region. This method allows: (1) codification of multiple kernels at various degrees of interest, (2) arbitrarily shaped spectra, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder was required. The method was applied to different types of images. Results show a better performance for the selected regions than when standard image coding methods were employed for the whole set of images. We believe that this method is an excellent tool for future image compression research, mainly for images where region-of-interest coding is valuable, such as medical imaging modalities and several multimedia applications. Finally, a VLSI implementation of the proposed method is shown. It is also shown that the Hartley and cosine transform kernels give better performance than any other model.

Keywords: VLSI, Discrete Cosine Transform, JPEG, Hartley transform, Radon transform.

902 Approach for Demonstrating Reliability Targets for Rail Transport during Low Mileage Accumulation in the Field: Methodology and Case Study

Authors: Nipun Manirajan, Heeralal Gargama, Sushil Guhe, Manoj Prabhakaran

Abstract:

In the railway industry, train sets are designed based on contractual requirements (mission profile), where reliability targets are measured in terms of mean distance between failures (MDBF). However, during the beginning of revenue service, trains often do not achieve the designed mission profile distance (mileage) within the expected timeframe due to infrastructure constraints, scarcity of commuters, or other operational challenges, and thus do not respect the original design inputs. Since the trains do not run sufficiently and do not achieve the designed mileage within the specified time, the car builder risks not achieving the contractual MDBF target. This paper proposes a constant-failure-rate based model to deal with situations where the mileage accumulation does not match the design mission profile. The model provides an appropriate MDBF target to be demonstrated based on the actual accumulated mileage. A case study of rolling stock running in the field is undertaken to analyze the failure data and the MDBF target demonstration during low mileage accumulation. The results of the case study show that, with the proposed method, reliability targets can be achieved under low mileage accumulation.
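One plausible reading of a constant-failure-rate normalization is sketched below: allowed failures scale linearly with accumulated mileage, and a chi-square bound gives the demonstrated MDBF at a chosen confidence. The numbers and the exact formulation are assumptions, not necessarily the authors' model.

```python
# Sketch of a constant-failure-rate view of MDBF demonstration at low mileage.
# This is one plausible formulation, not necessarily the authors' exact model:
# allowed failures scale linearly with accumulated fleet mileage, and a
# chi-square bound gives the demonstrated MDBF at a chosen confidence.
from scipy.stats import chi2

mdbf_target = 40000.0        # contractual MDBF [km], illustrative
fleet_km    = 250000.0       # actual accumulated fleet mileage [km]
failures    = 5              # chargeable failures observed so far
confidence  = 0.60

# Failures allowed so far if the fleet were exactly meeting the target rate
allowed = fleet_km / mdbf_target
# One-sided lower confidence bound on demonstrated MDBF (constant failure rate)
mdbf_demonstrated = 2.0 * fleet_km / chi2.ppf(confidence, 2 * failures + 2)

print(f"allowed failures at this mileage: {allowed:.1f}, observed: {failures}")
print(f"demonstrated MDBF (lower bound, {confidence:.0%} conf.): {mdbf_demonstrated:.0f} km")
```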

Keywords: Mean distance between failures, mileage based reliability, reliability target normalization, rolling stock reliability.

901 A Methodology for Automatic Diversification of Document Categories

Authors: Dasom Kim, Chen Liu, Myungsu Lim, Soo-Hyeon Jeon, Byeoung Kug Jeon, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, numerous documents, including large volumes of unstructured data and text, have been created because of the rapid increase in the use of social media and the Internet. Usually, these documents are categorized for the convenience of users, but the accuracy of manual categorization is not guaranteed, and such categorization requires a large amount of time and incurs huge costs. Many studies on automatic categorization have been conducted to help mitigate the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorize complex documents with multiple topics because they work on the assumption that individual documents can be categorized into single categories only. Therefore, to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, the learning process employed in these studies involves training on a multi-categorized document set, so these methods cannot be applied to the multi-categorization of most documents unless multi-categorized training sets built with traditional multi-categorization algorithms are provided. To overcome this limitation, in this study we review our novel methodology for extending the categorization of a single-categorized document to multiple categories, and then introduce a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
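A minimal sketch of the extension from single-label to multi-label categorization, by keeping every category whose predicted probability clears a threshold, is given below; the corpus, categories, and threshold are invented for illustration and do not reflect the methodology's details.

```python
# Sketch: turning a single-label classifier into a multi-category labeller by
# keeping every category whose predicted probability clears a threshold. The
# corpus, categories and threshold are illustrative, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["stock prices fell on weak earnings",
        "the team won the championship final",
        "new vaccine trial shows strong results",
        "central bank raises interest rates",
        "star striker injured before the match",
        "hospital study links diet and recovery"]
labels = ["economy", "sports", "health", "economy", "sports", "health"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

new_doc = ["club finances hit by falling ticket sales"]   # plausibly economy + sports
proba = clf.predict_proba(vec.transform(new_doc))[0]
threshold = 0.3
assigned = [c for c, p in zip(clf.classes_, proba) if p >= threshold]
print(dict(zip(clf.classes_, proba.round(2))), "->", assigned)
```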

Keywords: Big Data Analysis, Document Classification, Text Mining, Topic Analysis.

900 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett

Abstract:

The cochlear implant (CI), whose implantation has become a routine procedure over the last decades, is an electronic device that provides a sense of sound for patients who are severely and profoundly deaf. The optimal success of this implantation depends on the electrode technology and deep insertion techniques. However, the manual insertion procedure may cause mechanical trauma, which can lead to severe destruction of the delicate intracochlear structures. Accordingly, future improvement of cochlear electrode insertion requires a reduction of the excessive force applied during implantation, which causes tissue damage and trauma. This study examined the tool-tissue interaction of a large-scale prototype digit embedded with distributive tactile sensors, based on a cochlear electrode, and a large-scale cochlea phantom simulating the human cochlea, which could inform the requirements for a small-scale digit. The digit, with distributive tactile sensors embedded in a silicon substrate, was inserted into the cochlea phantom to measure the digit-phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during cochlear electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction such as contact status, tip penetration, obstacles, relative shape and location, contact orientation and multiple contacts. The tests demonstrated that even devices of such a relatively simple, low-cost design have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback and thus controlling the insertion through sensing and control of the tip of the implant. With this approach, the surgeon could minimize the tissue damage and the potential damage to the delicate structures within the cochlea caused by the current manual electrode insertion in cochlear implantation. This approach can also be applied to other minimally invasive surgery applications as well as diagnosis and path navigation procedures.

Keywords: Cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction.

899 The Motivating and Limiting Factors of Learners’ Engagement in an Online Discussion Forum

Authors: K. Durairaj, I. N. Umar

Abstract:

Lately, asynchronous discussion forums have been integrated into higher educational institutions as they may enhance the learning process, learners' understanding, achievement and knowledge construction. The asynchronous discussion forum is used to complement traditional, face-to-face learning sessions in hybrid learning courses. However, studies have shown that students' engagement in online forums is still unconvincing. Thus, the aim of this study is to investigate the motivating factors and obstacles that affect learners' engagement in asynchronous discussion forums. This study was carried out in one of the public higher educational institutions in Malaysia with 18 postgraduate students as the sample. The authors developed a 40-item questionnaire based on a literature review. The results indicate several factors that have encouraged or limited students' engagement in the asynchronous discussion forum: (a) the practices or behaviors of peers or instructors, (b) the need for the discussions, (c) the learners' personalities, (d) constraints in continuing the discussion forum, (e) lack of ideas, (f) the level of thought, (g) the level of knowledge construction, (h) technical problems, (i) time constraints and (j) misunderstandings. This study suggests some recommendations to increase students' engagement in online forums. Finally, based on the findings, some implications are proposed for further research.

Keywords: Asynchronous Discussion Forum, Engagement, Factors, Motivating, Limiting.

898 Double Reduction of Ada-ECATNet Representation using Rewriting Logic

Authors: Noura Boudiaf, Allaoua Chaoui

Abstract:

One major difficulty that faces developers of concurrent and distributed software is the analysis of concurrency-related faults such as deadlocks. Petri nets are used extensively in the verification of correctness of concurrent programs. ECATNets [2] are a category of algebraic Petri nets based on a sound combination of algebraic abstract data types and high-level Petri nets. ECATNets have 'sound' and 'complete' semantics because of their integration into rewriting logic [12] and its programming language Maude [13]. Rewriting logic is considered one of the most powerful logics in terms of description, verification and programming of concurrent systems. In [4] we proposed a method for translating Ada-95 tasking programs into the ECATNets formalism (Ada-ECATNet). In this paper, we show that the ECATNets formalism provides a more compact translation for Ada programs compared to other approaches based on simple Petri nets or Colored Petri nets (CPNs). Such a translation not only reduces the size of the program, but also reduces the number of program states. We also show how this compact Ada-ECATNet may be reduced further by applying reduction rules to it. This double reduction of the Ada-ECATNet permits a considerable reduction of the memory space and run time of the corresponding Maude program.

Keywords: Ada tasking, ECATNets, Algebraic Petri Nets, Compact Representation, Analysis, Rewriting Logic, Maude.

897 Mechanical Properties and Chloride Diffusion of Ceramic Waste Aggregate Mortar Containing Ground Granulated Blast–Furnace Slag

Authors: H. Higashiyama, M. Sappakittipakorn, M. Mizukoshi, O. Takahashi

Abstract:

Ceramic Waste Aggregates (CWAs) were made from electric porcelain insulator wastes supplied by an electric power company, which were crushed and ground to fine aggregate sizes. In this study, to develop the CWA mortar as an eco-efficient material, ground granulated blast-furnace slag (GGBS) was incorporated as a Supplementary Cementitious Material (SCM). The water-to-binder ratio (W/B) of the CWA mortars was varied at 0.4, 0.5, and 0.6. The cement in the CWA mortar was replaced by GGBS at 20 and 40% by volume (about 18 and 37% by weight). Mechanical properties, namely compressive and splitting tensile strengths and elastic modulus, were evaluated at the ages of 7, 28, and 91 days. Moreover, a chloride ingress test was carried out on the CWA mortars in a 5.0% NaCl solution for 48 weeks, and the chloride diffusion was assessed using an electron probe microanalysis (EPMA). To relate the apparent chloride diffusion coefficient to the pore size, a pore size distribution test was also performed using mercury intrusion porosimetry at the same time as the EPMA. The compressive strength of the CWA mortars with GGBS was higher than that without GGBS at the ages of 28 and 91 days. The resistance to chloride ingress of the CWA mortar improved in proportion to the GGBS replacement level.

Keywords: Ceramic waste aggregate, Chloride diffusion, GGBS, Pore size distribution.

896 Antioxidant Biosensor Using Microbe

Authors: Dyah Iswantini, Trivadila, Novik Nurhidayat, Waras Nurcholis

Abstract:

Antioxidant compounds are needed in the food, beverage, and pharmaceutical industries. For this purpose, an appropriate method is required to measure the antioxidant properties of various types of samples. The spectrophotometric method usually used has some weaknesses, including high cost, long sample preparation time, and lower sensitivity. Among the alternative methods developed to overcome these weaknesses is an antioxidant biosensor based on the superoxide dismutase (SOD) enzyme. Therefore, this study was carried out to measure the activity of SOD originating from Deinococcus radiodurans and to determine its kinetic properties. A carbon paste electrode modified with ferrocene and immobilized SOD exhibited anodic and cathodic current peaks at potentials of +400 and +300 mV, respectively, for both pure SOD and D. radiodurans SOD. This indicated that the current generated came from the catalytic dismutation of superoxide by SOD. The optimum conditions for SOD activity were pH 9 and a temperature of 27.5 °C for D. radiodurans SOD, and pH 11 and a temperature of 20 °C for pure SOD. The kinetics of the superoxide dismutation reaction catalyzed by SOD followed the Lineweaver-Burk model, with the apparent KM value of D. radiodurans SOD smaller than that of pure SOD. The results showed that D. radiodurans SOD had higher enzyme-substrate affinity and specificity than pure SOD. It is concluded that D. radiodurans SOD has great potential as the biological recognition component of an antioxidant biosensor.
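The Lineweaver-Burk analysis reduces to a linear fit of 1/v against 1/[S]; the sketch below shows the calculation with invented Michaelis-Menten data, not the measured biosensor currents.

```python
# Sketch of a Lineweaver-Burk fit: linear regression of 1/v against 1/[S]
# gives slope Km/Vmax and intercept 1/Vmax. Substrate and rate values are
# illustrative, not the measured biosensor data.
import numpy as np

S = np.array([0.05, 0.1, 0.2, 0.4, 0.8])      # substrate concentration [mM]
v = np.array([0.9, 1.5, 2.3, 3.0, 3.6])       # reaction (or current) rate [a.u.]

slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax = 1.0 / intercept
Km = slope * Vmax
print(f"Vmax = {Vmax:.2f} a.u., Km(app) = {Km:.3f} mM")
```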

Keywords: Antioxidant biosensor, Deinococcus radiodurans, enzyme kinetic, superoxide dismutase (SOD).

895 Continuous Feature Adaptation for Non-Native Speech Recognition

Authors: Y. Deng, X. Li, C. Kwan, B. Raj, R. Stern

Abstract:

The current speech interfaces in many military applications may be adequate for native speakers. However, the recognition rate drops considerably for non-native speakers (people with foreign accents), mainly because non-native speakers exhibit large temporal and intra-phoneme variations when they pronounce the same words. This problem is further complicated by the presence of strong environmental noise such as tank noise, helicopter noise, etc. In this paper, we propose a novel continuous acoustic feature adaptation algorithm for on-line accent and environmental adaptation. Implemented by incremental singular value decomposition (SVD), the algorithm captures local acoustic variation and runs in real time. This feature-based adaptation method is then integrated with the conventional model-based maximum likelihood linear regression (MLLR) algorithm. Extensive experiments have been performed on the NATO non-native speech corpus with a baseline acoustic model trained on native American English. The proposed feature-based adaptation algorithm improved the average recognition accuracy by 15%, while the MLLR model-based adaptation achieved an 11% improvement. The corresponding word error rate (WER) reductions were 25.8% and 2.73%, compared to the system without adaptation. The combined adaptation achieved an overall recognition accuracy improvement of 29.5% and a WER reduction of 31.8%, compared to the system without adaptation.

Keywords: Speaker adaptation, environment adaptation, robust speech recognition, SVD, non-native speech recognition.

894 Prediction of Air-Water Two-Phase Frictional Pressure Drop Using Artificial Neural Network

Authors: H. B. Mehta, Vipul M. Patel, Jyotirmay Banerjee

Abstract:

The present paper discusses the prediction of gas-liquid two-phase frictional pressure drop in a 2.12 mm horizontal circular minichannel using an Artificial Neural Network (ANN). The experimental results are obtained with air as the gas phase and water as the liquid phase. The superficial gas velocity is kept in the range of 0.0236 m/s to 0.4722 m/s, while values of 0.0944 m/s, 0.1416 m/s and 0.1889 m/s are considered for the superficial liquid velocity. The experimental results are predicted using different Artificial Neural Network (ANN) models. The networks used for prediction are radial basis, generalised regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent, and Elman back propagation. The transfer functions used for the networks are linear (PURELIN), logistic sigmoid (LOGSIG), tangent sigmoid (TANSIG) and Gaussian RBF. Combinations of networks and transfer functions give different possible neural network models. These models are compared in terms of Mean Absolute Relative Deviation (MARD) and Mean Relative Deviation (MRD) to identify the best predictive ANN model.
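The two comparison metrics can be written out directly; the sketch below uses their common definitions and made-up pressure-drop values, since the experimental data are not reproduced here.

```python
# Sketch of the two comparison metrics named in the abstract, using their
# common definitions; the predicted/experimental values below are made up.
import numpy as np

dp_exp  = np.array([1.20, 2.35, 3.10, 4.05, 5.40])   # measured pressure drop [kPa]
dp_pred = np.array([1.15, 2.50, 3.00, 4.30, 5.10])   # ANN-predicted pressure drop [kPa]

rel_dev = (dp_pred - dp_exp) / dp_exp
mard = np.mean(np.abs(rel_dev)) * 100.0   # Mean Absolute Relative Deviation [%]
mrd  = np.mean(rel_dev) * 100.0           # Mean Relative Deviation [%] (signed bias)
print(f"MARD = {mard:.2f} %, MRD = {mrd:.2f} %")
```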

Keywords: Minichannel, Two-Phase Flow, Frictional Pressure Drop, ANN, MARD, MRD.

893 Thrust Enhancement on a Two Dimensional Elliptic Airfoil in a Forward Flight

Authors: S. M. Dash, K. B. Lua, T. T. Lim

Abstract:

This paper presents the results of numerical and experimental studies on a two-dimensional (2D) flapping elliptic airfoil in forward flight at a Reynolds number of 5000. The study is motivated by an earlier investigation which shows that the deterioration in thrust performance of a sinusoidally heaving and pitching 2D (NACA0012) airfoil at high flapping frequency can be recovered by changing the effective angle of attack profile to a square wave, sawtooth, or cosine wave shape. To better understand why such modifications lead to superior thrust performance, we take a closer look at the transient aerodynamic force behavior of an airfoil when the effective angle of attack profile changes gradually from a generic smooth trapezoidal profile to a sinusoidal shape by modifying the base length of the trapezoid. The choice of a smooth trapezoidal profile avoids the infinite acceleration encountered in the square wave profile. Our results show that the enhancement in time-averaged thrust performance at high flapping frequency can be attributed to the delay and reduction of the drag-producing valley region in the transient thrust force coefficient when the effective angle of attack profile changes from sinusoidal to trapezoidal.
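One convenient smooth family that morphs the effective angle-of-attack profile from a sinusoid towards a flat-topped, trapezoid-like wave is sketched below; this tanh-of-sine parameterization is an assumption for illustration, not the authors' exact trapezoid definition.

```python
# Sketch: one convenient smooth family that morphs the effective angle-of-
# attack profile from a sinusoid towards a flat-topped (trapezoid-like) wave,
# alpha(t) = alpha_max * tanh(k*sin(2*pi*f*t)) / tanh(k). This is an
# illustrative parameterization, not the authors' exact trapezoid definition.
import numpy as np

alpha_max = np.deg2rad(30.0)   # peak effective angle of attack [rad]
f = 1.0                        # flapping frequency [Hz]
t = np.linspace(0.0, 1.0 / f, 201)

for k in (0.1, 2.0, 8.0):      # k -> 0: sinusoid, k large: nearly square/trapezoidal
    alpha = alpha_max * np.tanh(k * np.sin(2 * np.pi * f * t)) / np.tanh(k)
    frac_near_peak = np.mean(np.abs(alpha) > 0.95 * alpha_max)
    print(f"k = {k:>4}: fraction of cycle within 5% of peak = {frac_near_peak:.2f}")
```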

Keywords: Two-dimensional Flapping Airfoil, Thrust Performance, Effective Angle of Attack, CFD and Experiments.

892 Application of Universal Distribution Factors for Real-Time Complex Power Flow Calculation

Authors: Abdullah M. Alodhaiani, Yasir A. Alturki, Mohamed A. Elkady

Abstract:

Complex power flow distribution factors, which relate line complex power flows to the injected bus complex powers, have been widely used in various power system planning and analysis studies. In particular, AC distribution factors have been used extensively in recent power and energy pricing studies in the free electricity market field. As demonstrated in the existing literature, many electricity market costing studies rely on the use of distribution factors. These known distribution factors, whether injection shift factors (ISFs) or power transfer distribution factors (PTDFs), are linear approximations of the first-order sensitivities of the active power flows with respect to various variables. This paper presents a novel model for evaluating universal distribution factors (UDFs), which are appropriate for an extensive range of power system analysis and free electricity market studies. These distribution factors are used to calculate line complex power flows independently of the bus power injections; they are compact matrix-form expressions with total flexibility in determining the position on the line at which the line flows are measured. The proposed approach was tested on the IEEE 9-bus system. Numerical results demonstrate that the proposed approach is very accurate compared with the exact method.
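For reference, the classical DC power transfer distribution factors that the abstract cites as the usual linear approximation can be computed as follows; the 3-bus data are invented, and the proposed universal (complex) factors themselves are not reproduced.

```python
# Sketch of the classical DC power transfer distribution factors (PTDFs) that
# the abstract cites as the usual linear approximation; the proposed universal
# (complex) factors are not reproduced here. 3-bus data are illustrative.
import numpy as np

lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]   # (from, to, reactance p.u.)
n_bus, slack = 3, 0

B = np.zeros((n_bus, n_bus))
for f, t, x in lines:
    b = 1.0 / x
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b

keep = [b for b in range(n_bus) if b != slack]
X = np.zeros((n_bus, n_bus))
X[np.ix_(keep, keep)] = np.linalg.inv(B[np.ix_(keep, keep)])  # slack row/col stay zero

# PTDF[l, k]: change in flow on line l per 1 p.u. injection at bus k (withdrawn at slack)
PTDF = np.array([(X[f] - X[t]) / x for f, t, x in lines])
np.set_printoptions(precision=3, suppress=True)
print(PTDF)
```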

Keywords: Distribution Factors, Power System, Sensitivity Factors, Electricity Market.

891 The Calculation of Electromagnetic Fields (EMF) in Substations of Shopping Centers

Authors: Adnan Muharemovic, Hidajet Salkic, Mario Klaric, Irfan Turkovic, Aida Muharemovic

Abstract:

In nature, electromagnetic fields always appear, such as the atmospheric static electric field, the earth's static magnetic field, and the wide-range frequency electromagnetic fields caused by lightning. However, besides natural electromagnetic fields (EMF), today human beings are mostly exposed to artificial electromagnetic fields due to technological progress and the widespread use of electrical devices. To evaluate the nuisance of EMF, it is necessary to know the field intensity at every frequency that appears and compare it with allowed values. Low-frequency EMFs around transmission and distribution lines are time-varying quasi-static electromagnetic fields, which have a conservative component of the low-frequency electric field caused by charges and an eddy component of the low-frequency magnetic field caused by currents. Displacement currents and field delays are negligible, so the energy flow in a quasi-static EMF involves diffusion, analogous to heat transfer, and the electric and magnetic fields can be analyzed separately. This paper analyzes the numerical calculations, performed in the ELF-400 software, of EMF in a distribution substation in a shopping center. By analyzing the results it is possible to identify locations exposed to the fields and to give useful suggestions to eliminate the electromagnetic effect or reduce it to an acceptable level within the non-ionizing radiation norms and the norms of protection from EMF.

Keywords: Electromagnetic Field, Density of Electromagnetic Flow, Place of Professional Exposure, Place of Increased Sensitivity.
