Search results for: error analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28363

27343 The Internationalization of Capital Market Influencing Debt Sustainability's Impact on the Growth of the Nigerian Economy

Authors: Godwin Chigozie Okpara, Eugine Iheanacho

Abstract:

The paper set out to assess the sustainability of debt in the Nigerian economy. Specifically, it sought to determine the level of debt sustainability and its impact on the growth of the economy; whether internationalization of the capital market has positively influenced debt sustainability’s impact on economic growth; and to ascertain the direction of causality between external debt sustainability and the growth of GDP. In the light of these objectives, ratio analysis was employed for the determination of debt sustainability. Our findings revealed that the periods 1986 – 1994 and 1999 – 2004 were periods of severe unsustainable borrowing. The unit root test showed that the variables of the growth model were integrated of order one, I(1), and the cointegration test provided evidence of long-run stability. Considering the dawn of internationalization of the capital market, the researchers employed the structural break approach using the Chow breakpoint test on the vector error correction model (VECM). The result of the VECM showed that debt sustainability, measured by the debt-to-GDP ratio, exerts a negative and significant impact on the growth of the economy, while the debt burden, measured by the debt-export ratio and the debt service-export ratio, has a negative though insignificant effect on the growth of GDP. The Chow test result indicated that internationalization of the capital market has no significant effect on the debt overhang’s impact on the growth of the economy. The Granger causality test indicates a feedback effect from economic growth to the debt sustainability growth indicators. On the basis of these findings, the researchers made some recommendations which, if followed, should go a long way toward ameliorating the debt burden and engendering economic growth.
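
A minimal sketch of the ratio-analysis and unit-root steps that typically precede such a VECM estimation is given below, using statsmodels; the series and the 40% threshold are synthetic placeholders, not the paper's data.

```python
# Minimal sketch (not the authors' code) of the ratio analysis and ADF unit root
# step that precede a VECM: the debt-to-GDP ratio flags unsustainable years, and
# I(1) behaviour is checked at levels versus first differences. All data are
# synthetic placeholders, not the Nigerian series used in the paper.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
years = pd.RangeIndex(1981, 2015, name="year")
gdp = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.03, 0.02, len(years)))), index=years)
debt = pd.Series(40 * np.exp(np.cumsum(rng.normal(0.05, 0.08, len(years)))), index=years)

debt_to_gdp = debt / gdp
THRESHOLD = 0.40  # illustrative sustainability threshold, not taken from the paper
print("Years above threshold:", list(debt_to_gdp.index[debt_to_gdp > THRESHOLD]))

# I(1) check: non-stationary in levels, stationary after first differencing.
for label, series in {"levels": debt_to_gdp, "first difference": debt_to_gdp.diff().dropna()}.items():
    stat, pvalue, *_ = adfuller(series)
    print(f"ADF on {label}: statistic={stat:.2f}, p-value={pvalue:.3f}")
```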

Keywords: debt sustainability, internationalization, capital market, cointegration, Chow test

Procedia PDF Downloads 413
27342 New Result for Optical OFDM in Code Division Multiple Access Systems Using Direct Detection

Authors: Cherifi Abdelhamid

Abstract:

In optical communication systems, OFDM has received increased attention as a means to overcome various limitations of optical transmission systems such as modal dispersion, relative intensity noise, chromatic dispersion, polarization mode dispersion and self-phase modulation. Multipath dispersion limits the maximum transmission data rates. In this paper, we investigate an OFDM system in which multipath-induced intersymbol interference (ISI) is reduced, and we increase the number of users by combining the OFDM system with an OCDMA system using direct detection, incorporating OOC (orthogonal optical codes) to minimize the bit error rate.
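
The toy baseband chain below illustrates the OFDM mechanism the abstract relies on; it is not the authors' system, and all parameters are arbitrary assumptions.

```python
# Toy baseband OFDM chain (illustrative only): QPSK symbols are mapped onto
# subcarriers with an IFFT and a cyclic prefix is prepended, which is what limits
# multipath-induced ISI. Subcarrier count and prefix length are assumptions.
import numpy as np

N_SUBCARRIERS, CP_LEN = 64, 16
rng = np.random.default_rng(0)

bits = rng.integers(0, 2, 2 * N_SUBCARRIERS)
qpsk = (1 - 2 * bits[0::2] + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # Gray-mapped QPSK

time_domain = np.fft.ifft(qpsk, N_SUBCARRIERS)
tx = np.concatenate([time_domain[-CP_LEN:], time_domain])  # cyclic prefix

# Ideal channel: strip the prefix and FFT back; recovered symbols match the input.
rx = np.fft.fft(tx[CP_LEN:], N_SUBCARRIERS)
print(np.allclose(rx, qpsk))  # True
```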

Keywords: OFDM, OCDMA, OOC (orthogonal optical code), intersymbol interference (ISI), prime codes (PC)

Procedia PDF Downloads 633
27341 Application of Wavelet Based Approximation for the Solution of Partial Integro-Differential Equation Arising from Viscoelasticity

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

This work presents a numerical method based on Legendre wavelet approximation for the treatment of a partial integro-differential equation (PIDE). Operational matrices of Legendre wavelets reduce the solution of the PIDE to a system of algebraic equations. Some useful results concerning the computational order of convergence and the error estimates associated with the suggested scheme are presented. Illustrative examples are provided to show the effectiveness and accuracy of the proposed numerical method.

Keywords: legendre wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 420
27340 Impact of Hybrid Optical Amplifiers on 16 Channel Wavelength Division Multiplexed System

Authors: Inderpreet Kaur, Ravinder Pal Singh, Kamal Kant Sharma

Abstract:

This paper addresses the different configurations of optical amplifiers used in a 16-channel Wavelength Division Multiplexed system. The 16-channel systems have been simulated to evaluate various parameters, such as Bit Error Rate and Quality Factor at threshold values, over a wavelength range from 1471 nm to 1611 nm. Various combinations of EDFA and FRA configurations have been analyzed, and the EDFA-FRA configuration has been found satisfactory in terms of performance indices and stable region. The paper also compares the various parameters quantified for each configuration individually. It has been found that the EDFA-FRA configuration yields a high Q factor, a low BER, and high resolution.
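
The Q factor and BER quoted for such links are commonly related by BER = 0.5·erfc(Q/√2) under a Gaussian noise assumption; the sketch below illustrates that standard relation with made-up Q values, not results from the paper.

```python
# Sketch of the standard Q-factor/BER relation often used when reporting optical
# link quality: BER = 0.5 * erfc(Q / sqrt(2)). Values below are illustrative only.
from math import erfc, sqrt, log10

def ber_from_q(q_linear: float) -> float:
    """Bit error rate for a given linear Q factor (Gaussian noise assumption)."""
    return 0.5 * erfc(q_linear / sqrt(2))

for q in (6.0, 7.0, 8.0):  # Q ~ 6 corresponds to BER ~ 1e-9
    print(f"Q = {q:.1f} ({20 * log10(q):.1f} dB): BER ~ {ber_from_q(q):.2e}")
```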

Keywords: EDFA, FRA, WDM, Q factor, BER

Procedia PDF Downloads 337
27339 Impact of Climate Change on Sea Level Rise along the Coastline of Mumbai City, India

Authors: Chakraborty Sudipta, A. R. Kambekar, Sarma Arnab

Abstract:

Sea-level rise is one of the most important impacts of anthropogenically induced climate change, resulting from global warming and the melting of ice at the Arctic and Antarctic. This paper reviews the investigations carried out by various researchers, both on the Indian coast and elsewhere, during the last decade. The paper aims to ascertain the consistency of the different suggested methods for predicting near-accurate future sea level rise along the coast of Mumbai. Case studies on the east coast, the southern tip, and the west and south-west coasts of India have been reviewed. The Coastal Vulnerability Index of several important international locations has been compared and found to match Intergovernmental Panel on Climate Change forecasts. The application of Geographic Information System mapping and the use of remote sensing technology have been observed, with both Multi Spectral Scanner and Thematic Mapper data from Landsat classified through the Iterative Self-Organizing Data Analysis Technique to arrive at high, moderate and low Coastal Vulnerability Index values for various important coastal cities. In addition to data-driven, hindcast-based forecasts of Significant Wave Height, accounting for the additional impact of sea level rise has been suggested. The efficacy and limitations of numerical methods vis-à-vis Artificial Neural Networks have been assessed, and the importance of Root Mean Square Error in judging numerical results is noted. Among the computational methods compared, forecast results obtained from MIKE 21 have been considered more reliable than those from the Delft 3D model.

Keywords: climate change, Coastal Vulnerability Index, global warming, sea level rise

Procedia PDF Downloads 117
27338 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediments (VS) samples with known source contributions using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique to quantify the source of sediments in the catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
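
The RMSE/MAE check against virtual mixtures described above amounts to comparing modelled and known source proportions; the snippet below is an illustrative version of that calculation with made-up contribution values, not the study's data.

```python
# Illustrative check (not the study's data): RMSE and MAE between GLUE-modelled
# source contributions and the known composition of virtual sediment mixtures.
import numpy as np

# Hypothetical contributions (fractions of the western, central, eastern sub-basins).
known     = np.array([[0.20, 0.10, 0.70],
                      [0.35, 0.15, 0.50]])
predicted = np.array([[0.18, 0.13, 0.69],
                      [0.40, 0.10, 0.50]])

errors = predicted - known
rmse = np.sqrt(np.mean(errors ** 2))
mae = np.mean(np.abs(errors))
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}")
```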

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 59
27337 Numerical Simulation of Flow and Heat Transfer Characteristics with Various Working Conditions inside a Reactor of Wet Scrubber

Authors: Jonghyuk Yoon, Hyoungwoon Song, Youngbae Kim, Eunju Kim

Abstract:

Recently, with the rapid growth of the semiconductor industry, much interest has been focused on after-treatment systems that remove the polluted gas produced by semiconductor manufacturing processes, and the wet scrubber is one of the most widely used systems. As for the removal mechanism, the polluted gas is first removed by chemical reaction in a reactor part. After that, the polluted gas stream is brought into contact with the scrubbing liquid by spraying. Effective design of the reactor part inside the wet scrubber is highly important, since the removal performance of the polluted gas in the reactor plays an important role in overall performance and stability. In the present study, a CFD (Computational Fluid Dynamics) analysis was performed to determine the thermal and flow characteristics inside a unit reactor of the wet scrubber. In order to verify the numerical result, the temperature distribution of the numerical result at various monitoring points was compared to the experimental result. The average error rates between them were 12-15%, and the numerical result of the temperature distribution was in good agreement with the experimental data. Using the validated numerical method, the effect of the reactor geometry on the heat transfer rate was also taken into consideration. The uniformity of the temperature distribution was improved by about 15%. Overall, the results of the present study could provide useful information for identifying the fluid behavior and thermal performance of various scrubber systems. This project is supported by the ‘R&D Center for the reduction of Non-CO₂ Greenhouse gases (RE201706054)’ funded by the Korea Ministry of Environment (MOE) as the Global Top Environment R&D Program.
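
The validation step quoted above boils down to an average percentage error between simulated and measured temperatures at the monitoring points; a minimal sketch with made-up temperatures follows.

```python
# Simple sketch of the model-validation step: average percentage error between
# simulated and measured temperatures at monitoring points (all values are made up).
import numpy as np

measured  = np.array([312.0, 355.0, 401.0, 388.0])  # K, hypothetical experiment
simulated = np.array([350.0, 402.0, 455.0, 440.0])  # K, hypothetical CFD result

error_rate = np.abs(simulated - measured) / measured * 100.0
print(f"Per-point error (%): {np.round(error_rate, 1)}")
print(f"Average error rate: {error_rate.mean():.1f}%")
```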

Keywords: semiconductor, polluted gas, CFD (Computational Fluid Dynamics), wet scrubber, reactor

Procedia PDF Downloads 123
27336 The Use of AI to Measure Gross National Happiness

Authors: Riona Dighe

Abstract:

This research attempts to identify an alternative approach to the measurement of Gross National Happiness (GNH). It uses artificial intelligence (AI), incorporating natural language processing (NLP) and sentiment analysis, to measure GNH. We use ‘off-the-shelf’ NLP models responsible for the sentiment analysis of a sentence as a building block for this research. We constructed an algorithm using NLP models to derive a sentiment analysis score for sentences, and tested it against a sample of 20 respondents. The scores generated resembled human responses. By utilising the MLP classifier, decision tree, linear model, and K-nearest neighbors, we were able to obtain test accuracies of 89.97%, 54.63%, 52.13%, and 47.9%, respectively. This gave us the confidence to use the NLP models on sentences from websites to measure the GNH of a country.
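
A hedged scikit-learn sketch of the four-classifier comparison named above is given below; the features and labels are random placeholders standing in for the sentence-level sentiment data.

```python
# Hedged sketch of the classifier comparison described above, using scikit-learn.
# The sentence-level sentiment features and labels here are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))           # stand-in for sentence embedding features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for positive/negative sentiment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Linear model": LogisticRegression(max_iter=1000),
    "K-nearest neighbors": KNeighborsClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```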

Keywords: artificial intelligence, NLP, sentiment analysis, gross national happiness

Procedia PDF Downloads 86
27335 What the Future Holds for Social Media Data Analysis

Authors: P. Wlodarczak, J. Soar, M. Ally

Abstract:

The dramatic rise in the use of Social Media (SM) platforms such as Facebook and Twitter provides access to an unprecedented amount of user data. Users may post reviews of products and services they bought, write about their interests, share ideas or give their opinions and views on political issues. There is growing interest among organisations in the analysis of SM data for detecting new trends, obtaining user opinions on their products and services, or finding out about their online reputations. A recent research trend in SM analysis is making predictions based on sentiment analysis of SM. Often, indicators of historic SM data are represented as time series and correlated with a variety of real-world phenomena such as the outcome of elections, the development of financial indicators, box office revenue and disease outbreaks. This paper examines the current state of research in the area of SM mining and predictive analysis and gives an overview of the analysis methods using opinion mining and machine learning techniques.

Keywords: social media, text mining, knowledge discovery, predictive analysis, machine learning

Procedia PDF Downloads 409
27334 SNP g.1007A>G within the Porcine DNAL4 Gene Affects Sperm Motility Traits

Authors: I. Wiedemann, A. R. Sharifi, A. Mählmeyer, C. Knorr

Abstract:

A requirement for sperm motility is a morphologically intact flagellum with a central axoneme. The flagellar beating is caused by the varying activation and inactivation of dynein molecules, which are located in the axoneme. DNAL4 (dynein, axonemal, light chain 4) is regarded as a possible functional candidate gene encoding a small subunit of the dyneins. In the present study, 5814 bp of the porcine DNAL4 (GenBank Acc. No. AM284696.1, 6097 bp, 4 exons) were comparatively sequenced using three boars with high motility (>68%) and three with low motility (<60%). Primers were self-designed except for those covering exons 1, 2 and 3. Prior to sequencing, the PCR products were purified. Sequencing was performed with an ABI PRISM 3100 Genetic Analyzer using the BigDyeTM Terminator v3.1 Cycle Sequencing Reaction Kit. Finally, 23 SNPs were described and genotyped for 82 AI boars representing the breeds Piétrain, German Large White and German Landrace. The genotypes were used to assess possible associations with standard spermatological parameters (ejaculate volume, density, and sperm motility undiluted (Motud), 24 h (Mot1) and 48 h (Mot2) after semen collection) that were regularly recorded at the AI station. The analysis included a total of 8,833 spermatological data sets, which ranged from 2 to 295 sets per boar over five years. Only SNP g.1007A>G had a significant effect. Finally, the gene substitution effect was calculated using the following statistical model: $Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + b_1 S_{ijk} + b_2 A_{ijk} + b_3 T_{ijk} + b_4 V_{ijk} + b_5 (\alpha \cdot A)_{ijk} + b_6 (\beta \cdot A)_{ijk} + b_7 (A \cdot T)_{ijk} + U_{ijk} + e_{ijk}$, where Y is the semen characteristic, µ is the general mean, α is the main effect of breed, β is the main effect of season, S is the effect of SNP g.1007A>G, A is the effect of age at semen collection, V is the effect of diluter, αβ, α·A, β·A and A·T are interactions between the fixed effects, b1–b7 are regression coefficients between Y and the respective covariates, U is the random effect of repeated observations on the animal, and e is the random error. The results from the single marker regression analysis revealed highly significant effects (p < 0.0001) of SNP g.1007A>G on Mot1 and Mot2, resulting in marked reductions of 11.4% and 15.4%, respectively. Furthermore, a loss of 4.6% in Motud was detected (p < 0.0178). Considering the SNP g.1007A>G as a main factor (dominant-recessive model), significant differences exist between genotypes AA and AG as well as AA and GG for Mot1 and Mot2. For Motud, there was a significant difference between AA and GG.

Keywords: association, DNAL4, porcine, sperm traits

Procedia PDF Downloads 433
27333 Mathematical Modeling of the Working Principle of Gravity Gradient Instrument

Authors: Danni Cong, Meiping Wu, Hua Mu, Xiaofeng He, Junxiang Lian, Juliang Cao, Shaokun Cai, Hao Qin

Abstract:

The gravity field is of great significance in geoscience, the national economy and national security, and gravitational gradient measurement has been extensively studied due to its higher accuracy than gravity measurement. The gravity gradient sensor, one of the core devices of the gravity gradient instrument, plays a key role in measurement accuracy. Therefore, this paper starts by analyzing the working principle of the gravity gradient sensor by Newton’s law, and then considers the relative motion between inertial and non-inertial systems to build a relatively adequate mathematical model, laying a foundation for measurement error calibration and measurement accuracy improvement.
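
For readers unfamiliar with the quantity being sensed, the standard definition is sketched below; this is a textbook relation assumed for illustration, not an equation quoted from the paper.

```latex
% Standard definition (an assumption, not quoted from the paper): the gravity
% gradient tensor is the second derivative of the gravitational potential V, and a
% pair of accelerometers separated by a baseline d senses its components through
% the difference of the measured specific forces a_i.
\Gamma_{ij} = \frac{\partial^2 V}{\partial x_i \, \partial x_j},
\qquad
\Gamma_{ij} \approx \frac{a_i\!\left(\mathbf{r} + \tfrac{1}{2}\mathbf{d}\right) - a_i\!\left(\mathbf{r} - \tfrac{1}{2}\mathbf{d}\right)}{d_j}
```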

Keywords: gravity gradient, gravity gradient sensor, accelerometer, single-axis rotation modulation

Procedia PDF Downloads 313
27332 A Social Cognitive Investigation in the Context of Vocational Training Performance of People with Disabilities

Authors: Majid A. AlSayari

Abstract:

The study reported here investigated social cognitive theory (SCT) in the context of Vocational Rehab (VR) for people with disabilities. The prime purpose was to increase knowledge of VR phenomena and make recommendations for improving VR services. The sample consisted of 242 persons with Spinal Cord Injuries (SCI) who completed questionnaires. A further 32 participants were trainers. Analysis of the questionnaire data was carried out using factor analysis, multiple regression analysis, and thematic analysis. The analysis suggested that, in motivational terms, and consistent with research carried out in other academic contexts, self-efficacy was the best predictor of VR performance. The author concludes that VR self-efficacy predicted VR training performance.

Keywords: people with physical disabilities, social cognitive theory, self-efficacy, vocational training

Procedia PDF Downloads 289
27331 Characterization of Onboard Reliable Error Correction Code for SDRAM Controller

Authors: N. Pitcheswara Rao

Abstract:

In the process of conveying information, there is a chance of the signal being corrupted, which leads to erroneous bits in the message. The message may contain single, double or multiple bit errors. In high-reliability applications, memory can sustain multiple soft errors due to single or multiple event upsets caused by environmental factors. The traditional Hamming code with SEC-DED capability cannot address these types of errors. It is possible to use a powerful non-binary BCH code such as the Reed-Solomon code to address multiple errors. However, it can take at least a couple dozen cycles of latency to complete the first correction, and it runs at a relatively slow speed. In order to overcome this drawback, i.e., to increase speed and reduce latency, we use the Reed-Muller code.
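
To make the SEC-DED baseline the abstract contrasts against concrete, here is a minimal sketch of an extended Hamming(7,4)-plus-parity code: it corrects any single-bit error and detects double-bit errors. This is a generic illustration, not the authors' controller design.

```python
# Minimal sketch of the "traditional" extended Hamming SEC-DED idea the abstract
# contrasts with Reed-Solomon/Reed-Muller codes: Hamming(7,4) plus an overall
# parity bit corrects single-bit errors and detects double-bit errors.
def encode(d):  # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    code = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return code + [sum(code) % 2]            # overall parity bit for DED

def decode(c):
    s = 0
    for i in range(7):                       # syndrome = XOR of positions of set bits
        if c[i]:
            s ^= i + 1
    overall = sum(c) % 2
    if s == 0 and overall == 0:
        return "no error", c
    if overall == 1:                         # odd overall parity => single error, correctable
        c = c.copy()
        c[s - 1 if s else 7] ^= 1            # s == 0 means the parity bit itself flipped
        return "corrected single error", c
    return "double error detected", c        # even parity but nonzero syndrome

word = encode([1, 0, 1, 1])
bad = word.copy(); bad[2] ^= 1               # flip one bit
print(decode(bad)[0])                        # -> corrected single error
```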

Keywords: SEC-DED, BCH code, Reed-Solomon code, Reed-Muller code

Procedia PDF Downloads 409
27330 Accelerating Side Channel Analysis with Distributed and Parallelized Processing

Authors: Kyunghee Oh, Dooho Choi

Abstract:

Although there is no theoretical weakness in a cryptographic algorithm, Side Channel Analysis can extract secret data from the physical implementation of a cryptosystem. The analysis is based on extra information such as timing information, power consumption, electromagnetic leaks or even sound, which can be exploited to break the system. Differential Power Analysis is one of the most popular analyses; it computes the statistical correlations between the secret keys and the power consumption. It usually requires processing huge amounts of data and takes a long time; it may take several weeks for some devices with countermeasures. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems. Our methods include distributed computing and parallelized processing.
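
The correlation step of such an attack is naturally parallel across key guesses, which is one way the analysis time can be shortened; the sketch below illustrates this with a process pool, synthetic traces, and a Hamming-weight leakage model (all assumptions, not the authors' setup).

```python
# Hedged sketch of the idea described above: the correlation step of a (C)DPA
# attack is embarrassingly parallel across key-byte guesses, so it can be spread
# over a process pool. Traces and the Hamming-weight leakage model are synthetic.
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(1)
N_TRACES, N_SAMPLES = 2000, 500
plaintexts = rng.integers(0, 256, N_TRACES)
traces = rng.normal(size=(N_TRACES, N_SAMPLES))

def hamming_weight(x):
    return bin(int(x)).count("1")

def score_key_guess(guess):
    """Max absolute correlation between the leakage model and every trace sample."""
    model = np.array([hamming_weight(p ^ guess) for p in plaintexts], dtype=float)
    model -= model.mean()
    centered = traces - traces.mean(axis=0)
    corr = (model @ centered) / (np.linalg.norm(model) * np.linalg.norm(centered, axis=0))
    return guess, float(np.max(np.abs(corr)))

if __name__ == "__main__":
    with Pool() as pool:                     # parallelize over the 256 key guesses
        scores = pool.map(score_key_guess, range(256))
    best = max(scores, key=lambda kv: kv[1])
    print("most likely key byte:", best)
```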

Keywords: DPA, distributed computing, parallelized processing, side channel analysis

Procedia PDF Downloads 403
27329 Image Distortion Correction Method of 2-MHz Side Scan Sonar for Underwater Structure Inspection

Authors: Youngseok Kim, Chul Park, Jonghwa Yi, Sangsik Choi

Abstract:

The 2-MHz Side Scan SONAR (SSS) attached to a boat for the inspection of underwater structures is affected by the shaking of the boat, making it difficult to determine the exact scale of damage to a structure. In this study, a motion sensor was attached to the inside of the 2-MHz SSS to obtain roll, pitch, and yaw direction data, and an image stabilization tool was developed to correct the sonar image. Experiments confirmed that reliable data can be obtained, with an average error rate of 1.99% between the measured value and the actual distance. This makes it possible to obtain accurate sonar data for inspecting damage in underwater structures.
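
One possible form of such a correction is to use the measured roll/pitch/yaw to re-project each sonar return into a boat-independent frame; the sketch below shows that idea with SciPy, under assumed angles and geometry rather than the paper's implementation.

```python
# Hedged sketch of one possible stabilization step: roll/pitch/yaw from the motion
# sensor form a rotation matrix used to re-project each sonar return into a
# level reference frame. Angles and geometry here are illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation

def correct_returns(points_sensor_frame, roll_deg, pitch_deg, yaw_deg):
    """Rotate sonar return coordinates from the shaking sensor frame back to a
    level reference frame using the inverse of the measured attitude."""
    attitude = Rotation.from_euler("xyz", [roll_deg, pitch_deg, yaw_deg], degrees=True)
    return attitude.inv().apply(points_sensor_frame)

# One ping with three returns (x across-track, y along-track, z depth), in metres.
returns = np.array([[5.0, 0.0, 12.0],
                    [7.5, 0.0, 12.1],
                    [10.0, 0.0, 12.3]])
print(correct_returns(returns, roll_deg=3.2, pitch_deg=-1.5, yaw_deg=0.8))
```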

Keywords: image stabilization, motion sensor, safety inspection, sonar image, underwater structure

Procedia PDF Downloads 269
27328 Determination and Distribution of Formation Thickness Using Seismic and Well Data in Baga/Lake Sub-basin, Chad Basin Nigeria

Authors: Gabriel Efomeh Omolaiye, Olatunji Seminu, Jimoh Ajadi, Yusuf Ayoola Jimoh

Abstract:

The Nigerian part of the Chad Basin has to date been one of the least critically studied basins, with few published scholarly works compared to other basins such as the Niger Delta, Dahomey, etc. This work was undertaken through the integration of 3D seismic interpretations and the analysis of data from eight wells fairly distributed over block A of the Baga/Lake sub-basin in the Borno basin, with the aim of determining the thicknesses of the Chad, Kerri-Kerri, Fika, and Gongila Formations in the sub-basin. The Da-1 well (type-well) used in this study was subdivided into stratigraphic units based on the regional stratigraphic subdivision of the Chad basin and was later correlated with other wells using the similarity of observed log responses. The combined density and sonic logs were used to generate synthetic seismograms for seismic-to-well ties. Five horizons were mapped, representing the tops of the formations on the 3D seismic data covering the block; an average velocity function with a maximum error/residual of 0.48% was adopted for the time-to-depth conversion of all the generated maps. There is a general thickening of sediments from the west to the east, and the estimated thicknesses of the various formations in the Baga/Lake sub-basin are Chad Formation (400-750 m), Kerri-Kerri Formation (300-1200 m), Fika Formation (300-2200 m) and Gongila Formation (100-1300 m). The thickness of the Bima Formation could not be established because the deepest well (Da-1) terminates within the formation. This is a modification of previous and widely referenced studies, dating back over four decades, that based the estimation of formation thickness within the study area on outcrops observed at different locations and on few well data.
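
The kind of average-velocity time-to-depth conversion mentioned above can be illustrated as depth = v_avg × TWT / 2, with formation thickness taken as the difference between successive converted tops; the numbers below are illustrative placeholders, not the mapped horizons.

```python
# Hedged sketch of an average-velocity time-to-depth conversion of the kind used
# for the horizon maps: depth = v_avg * TWT / 2, and formation thickness is the
# difference between successive converted tops. All numbers are illustrative.
def depth_from_twt(twt_ms, v_avg_ms):
    """Depth in metres from two-way time (ms) and average velocity (m/s)."""
    return v_avg_ms * (twt_ms / 1000.0) / 2.0

# Hypothetical horizon picks (two-way time in ms) and average velocities (m/s),
# ordered from the shallowest formation top downwards.
horizon_tops = {"Chad": (450, 1800), "Kerri-Kerri": (900, 2100),
                "Fika": (1600, 2500), "Gongila": (2300, 3000)}

depths = {name: depth_from_twt(t, v) for name, (t, v) in horizon_tops.items()}
names = list(depths)
for upper, lower in zip(names, names[1:]):
    print(f"{upper} thickness ~ {depths[lower] - depths[upper]:.0f} m")
```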

Keywords: Baga/Lake sub-basin, Chad basin, formation thickness, seismic, velocity

Procedia PDF Downloads 158
27327 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, personal income, etc., in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate for missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (>95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of time lag can have a significant influence on model performance and require further investigation. The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no-lag ANN models showed the best stable performance overall, with accuracies >95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputing methods for establishing accurate predictive models.
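
The sketch below illustrates the lag-then-fit-then-score workflow described above with scikit-learn; the monthly series are synthetic placeholders, not the Case-Shiller/Federal Reserve data used in the study.

```python
# Hedged sketch of the workflow described above: lag macroeconomic features, fit
# linear regression and random forest, and score with MAE and RMSE. The monthly
# series below are synthetic placeholders, not the Case-Shiller/Fed data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
n = 166  # months, roughly March 2005 - December 2018
df = pd.DataFrame({
    "gdp": np.cumsum(rng.normal(0.2, 0.1, n)),
    "population": np.linspace(100, 110, n),
    "personal_income": np.cumsum(rng.normal(0.1, 0.05, n)),
})
df["home_price_index"] = 150 + 2 * df["gdp"] + rng.normal(0, 1, n)

LAG = 12  # months; the study compares 0, 6 and 12
X = df[["gdp", "population", "personal_income"]].shift(LAG)
data = pd.concat([X, df["home_price_index"]], axis=1).dropna()

split = int(len(data) * 0.8)  # chronological split, no shuffling
X_tr, X_te = data.iloc[:split, :-1], data.iloc[split:, :-1]
y_tr, y_te = data.iloc[:split, -1], data.iloc[split:, -1]

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{type(model).__name__}: MAE={mean_absolute_error(y_te, pred):.2f}, "
          f"RMSE={np.sqrt(mean_squared_error(y_te, pred)):.2f}")
```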

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 87
27326 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance and low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models will be examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to differentiating thermal zones or to modifying or adapting the predefined profiles, and design results are affected, positively or negatively, without any warning about it. Data such as occupancy schedules, internal loads and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized and conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error that is committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This could be a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC system and thus the occupants cannot interact with it. More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of percentage difference in the projected total electric energy need and natural gas demand. Then the different consumption entries are analyzed, and for the more interesting cases the calibration indexes are also compared. Moreover, the same simulations are run for the optimal refurbishment solution. The variation in the predicted energy savings and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 128
27325 The Application of Data Mining Technology in Building Energy Consumption Data Analysis

Authors: Liang Zhao, Jili Zhang, Chongquan Zhong

Abstract:

Energy consumption data, in particular those involving public buildings, are impacted by many factors: the building structure, climate/environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods for data analysis are insufficient. This paper delves into data mining technology to determine its application in the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and research points for future studies are given.

Keywords: data mining, data analysis, prediction, optimization, building operational performance

Procedia PDF Downloads 834
27324 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity for cladding products is a key performance parameter, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem, as it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, to the FEA solver in a series of iterations. In light of our recent work with Tata Steel U.K. using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can make predictions of a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 60
27323 An Abductive Approach to Policy Analysis: Policy Analysis as Informed Guessing

Authors: Adrian W. Chew

Abstract:

This paper argues that education policy analysis tends to be steered towards empiricist-oriented approaches, which place emphasis on objective and measurable data. However, it contends that empiricist-oriented approaches are generally based on inductive and/or deductive reasoning, which are unable to generate new ideas/knowledge. This paper will outline the logical structure of induction, deduction, and abduction, and argues that only abduction provides possibilities for the creation of new ideas/knowledge. This paper proposes the neologism ‘informed guessing’ as a reformulation of abduction, and also as an approach to education policy analysis. On one side, the signifier ‘informed’ encapsulates the idea that abductive policy analysis needs to be informed by descriptive conceptualization theory to be able to make relations and connections between, and within, observed phenomena and unobservable general structures. On the other side, the signifier ‘guessing’ captures the cyclical and unsystematic process of abduction. This paper will end with a brief example of utilising ‘informed guessing’ for a policy analysis of school choice lotteries in the United States.

Keywords: abductive reasoning, empiricism, informed guessing, policy analysis

Procedia PDF Downloads 332
27322 Validation of a Reloading Vehicle Design by Finite Element Analysis

Authors: Tuğrul Aksoy, Hüseyin Karabıyık

Abstract:

Reloading vehicles are vehicles that are generally equipped with a crane and used to lift a load from a point and place it onto the vehicle, or vice versa. In this study, a structural analysis of a reloading vehicle was performed via the finite element method under the loads it is predicted to experience under operating conditions. Among the finite element analysis results, the stress and displacement distributions of the vehicle and the contact pressure distributions of the guide rings within the stabilization legs were examined. The vehicle design was improved by strengthening certain parts according to the analysis results. The analyses performed for the final design were verified by experiments involving strain gauge measurements.

Keywords: structural analysis, reloading vehicle, crane, strain gauge

Procedia PDF Downloads 47
27321 Exergy Analysis and Evaluation of the Different Flowsheeting Configurations for CO₂ Capture Plant Using 2-Amino-2-Methyl-1-Propanol

Authors: Ebuwa Osagie, Vasilije Manovic

Abstract:

Exergy analysis identifies the location, sources, and magnitude of thermodynamic inefficiencies in a thermal system. Thus, both qualitative and quantitative assessments can be made with exergy, unlike energy, which supports a quantitative assessment only. The main purpose of exergy analysis is to identify where exergy is destroyed. Thus, reducing the exergy destruction and losses associated with the capture plant systems can improve the work potential. Furthermore, thermodynamic analysis of different configurations of the process helps to identify opportunities for reducing the steam requirements of each configuration. This paper presents a steady-state simulation and exergy analysis of a 2-amino-2-methyl-1-propanol (AMP)-based post-combustion capture (PCC) plant. The exergy analysis performed for the AMP-based plant and the different configurations revealed that the rich split with intercooling configuration gave the highest exergy efficiency of 73.6%, while those of the intercooling configuration and the reference AMP-based plant were 57.3% and 55.8%, respectively.
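
For reference, the generic balance typically behind such efficiency figures is sketched below; these are standard textbook definitions assumed for illustration, not equations quoted from the paper.

```latex
% Generic definitions (an assumption, not quoted from the paper): exergy destroyed
% in a component and the exergy efficiency used to compare configurations.
\dot{E}x_{\mathrm{destroyed}} = \sum \dot{E}x_{\mathrm{in}} - \sum \dot{E}x_{\mathrm{out}},
\qquad
\eta_{\mathrm{ex}} = \frac{\sum \dot{E}x_{\mathrm{out,\,useful}}}{\sum \dot{E}x_{\mathrm{in}}}
```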

Keywords: 2-amino-2-methyl-1-propanol, modelling and simulation, post-combustion capture plant, exergy analysis, flowsheeting configurations

Procedia PDF Downloads 148
27320 Implementation of Achterbahn-128 for Images Encryption and Decryption

Authors: Aissa Belmeguenai, Khaled Mansouri

Abstract:

In this work, an efficient implementation of Achterbahn-128 for image encryption and decryption was introduced. The implementation for this simulated project is written in MATLAB 7.5. First, two different original images were used to validate the proposed design. Then our developed program was used to transform the original image data into an image digits file. Finally, we used our implemented program to encrypt and decrypt the image data. Several tests were performed to prove the design performance, including visual tests and security analysis; we discuss the security analysis of the proposed image encryption scheme, including some important aspects such as key sensitivity analysis, key space analysis, and statistical attacks.

Keywords: Achterbahn-128, stream cipher, image encryption, security analysis

Procedia PDF Downloads 515
27319 Analysis of CO₂ Capture Products from Carbon Capture and Utilization Plant

Authors: Bongjae Lee, Beom Goo Hwang, Hye Mi Park

Abstract:

CO₂ capture products manufactured through a Carbon Capture and Utilization (CCU) plant that collects CO₂ directly from power plants require accurate measurements of the amount of CO₂ captured. For this purpose, two weight-loss tests were carried out, and one sample was analyzed using a carbon dioxide quantification device. First, the ignition loss analysis was performed by measuring the weight of the sample at 550°C after the first conversion and then confirming the loss when ignited at 950°C. Second, in the thermogravimetric analysis, the sample was divided into two sections, 40 to 500°C and 500 to 800°C, to confirm the reduction. The results of the ignition loss analysis and the thermogravimetric analysis were confirmed to be very similar. However, the temperature of the ignition loss analysis method was 950°C, which was 150°C higher than the 800°C of the thermogravimetric method, so the measured weight loss was 3 to 4% higher by the ignition loss method. In addition, the tendency of the CO₂ content to increase as the reaction time becomes longer was similarly confirmed. Third, the results of the wet titration method using the carbon dioxide quantification device were found to be significantly lower than those of the weight-loss methods. Therefore, based on the results obtained through the above three analysis methods, we will establish a method to analyze the accurate amount of CO₂. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (No. 20152010201850).

Keywords: carbon capture and utilization, CCU, CO2, CO2 capture products, analysis method

Procedia PDF Downloads 201
27318 A Comparative Analysis of Zotero and Mendeley Reference Management Software

Authors: Sujit K. Basak

Abstract:

This paper presents a comparison of the reference management software packages Zotero and Mendeley; the results were drawn by comparing the two packages. The novelty of this paper is the comparative analysis of the software, which shows that Mendeley can import more information from Google Scholar for researchers. This finding can help researchers decide which reference management software to use.

Keywords: analysis, comparative analysis, zotero, researchers, Mendeley

Procedia PDF Downloads 596
27317 Characterization of Onboard Reliable Error Correction Code for SDRAM Controller

Authors: Pitcheswara Rao Nelapati

Abstract:

In the process of conveying information, there is a chance of the signal being corrupted, which leads to erroneous bits in the message. The message may contain single, double or multiple bit errors. In high-reliability applications, memory can sustain multiple soft errors due to single or multiple event upsets caused by environmental factors. The traditional Hamming code with SEC-DED capability cannot address these types of errors. It is possible to use a powerful non-binary BCH code such as the Reed-Solomon code to address multiple errors. However, it can take at least a couple dozen cycles of latency to complete the first correction, and it runs at a relatively slow speed. In order to overcome this drawback, i.e., to increase speed and reduce latency, we use the Reed-Muller code.

Keywords: SEC-DED, BCH code, Reed-Solomon code, Reed-Muller code

Procedia PDF Downloads 409
27316 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can suffer from observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists’ grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. First, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists as high-, medium-, low-differentiation or normal tissue, which correspond to tumor with clear, unclear or no gland structure and to non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models’ predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized onto a scale between 0 and 1. The Cancer Genome Atlas COAD (TCGA-COAD) project was enrolled in this study. For the gland segmentation model, the receiver operating characteristic (ROC) reached 0.981 and accuracy reached 0.932 on the validation set. For the differentiation prediction model, the ROC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the groups of low-, medium-, high-differentiation and normal tissue on the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the “maxstat” method, which is almost the same as the WHO system’s cut-off point of 50%. Both the WHO system’s cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both p-values of the log-rank test were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were shown to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated through the above models. The differentiation grade was shown to stratify CRC patients well in survival analysis, and its optimized cut-off point was almost the same as that of the WHO tumor grading system. The tool for automatically calculating the differentiation grade may show potential in the field of therapy decision-making and personalized treatment.
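
A hedged sketch of the survival-analysis evaluation described above is given below using the lifelines package: the continuous grade is dichotomized at a cut-off, the two groups are compared with Kaplan-Meier curves and a log-rank test, and the concordance index is reported. The grades, times and events are synthetic placeholders, not the TCGA-COAD data.

```python
# Hedged sketch of the survival evaluation described above, using lifelines.
# Grades, follow-up times and events below are synthetic placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 400
grade = rng.uniform(0, 1, n)                 # normalized differentiation grade
time = rng.exponential(60 * (1.5 - grade))   # months; higher grade -> shorter survival
event = rng.uniform(size=n) < 0.6            # True = death observed, False = censored

CUTOFF = 0.51                                # e.g. a maxstat-style optimized cut-off
high = grade > CUTOFF

kmf_low, kmf_high = KaplanMeierFitter(), KaplanMeierFitter()
kmf_low.fit(time[~high], event[~high], label="low grade")
kmf_high.fit(time[high], event[high], label="high grade")

result = logrank_test(time[high], time[~high],
                      event_observed_A=event[high], event_observed_B=event[~high])
print(f"log-rank p-value: {result.p_value:.4f}")

# Higher grade should predict shorter survival, so the negated grade is the score.
print(f"concordance index: {concordance_index(time, -grade, event):.3f}")
```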

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 125
27315 Adaptive Transmission Scheme Based on Channel State in Dual-Hop System

Authors: Seung-Jun Yu, Yong-Jun Kim, Jung-In Baik, Hyoung-Kyu Song

Abstract:

In this paper, a dual-hop relay based on channel state is studied. In the conventional relay scheme, the relay uses the same modulation method without reference to the channel state. In the proposed scheme, the relay uses an adaptive modulation method with reference to the channel state. If the channel state is poor, the relay eliminates the latter 2 bits and uses Quadrature Phase Shift Keying (QPSK) modulation. If the channel state is good, the relay modulates the received symbols as 16-QAM symbols using 4 bits. The performance of the proposed scheme in terms of Symbol Error Rate (SER) and throughput is analyzed.
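
The switching idea can be illustrated as a simple SNR-threshold decision at the relay; the sketch below uses an assumed threshold and generic constellation mappings, not values or mappings taken from the paper.

```python
# Hedged sketch of the channel-adaptive switching idea: when the relay's estimated
# SNR is below a threshold it forwards only the 2 most significant bits with QPSK,
# otherwise it forwards all 4 bits with 16-QAM. The threshold is an illustrative
# assumption, not a value from the paper.
import numpy as np

SNR_THRESHOLD_DB = 15.0

def relay_modulate(bits4, snr_db):
    """Map a 4-bit group to a complex symbol according to the channel state."""
    if snr_db < SNR_THRESHOLD_DB:
        b = bits4[:2]                               # drop the latter 2 bits -> QPSK
        return ((1 - 2 * b[0]) + 1j * (1 - 2 * b[1])) / np.sqrt(2), "QPSK"
    i = (1 - 2 * bits4[0]) * (3 - 2 * bits4[2])     # Gray-mapped 16-QAM levels
    q = (1 - 2 * bits4[1]) * (3 - 2 * bits4[3])
    return (i + 1j * q) / np.sqrt(10), "16-QAM"     # unit average symbol power

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 4)
for snr in (10.0, 20.0):
    symbol, scheme = relay_modulate(bits, snr)
    print(f"SNR {snr:.0f} dB -> {scheme}: {symbol:.3f}")
```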

Keywords: adaptive transmission, channel state, dual-hop, hierarchical modulation, relay

Procedia PDF Downloads 356
27314 Conflicts Identification Approach among Stakeholders in Goal-Oriented Requirements Analysis

Authors: Muhammad Suhaib

Abstract:

Requirements analysis is the most important part of software engineering, for both system application development and project requirements. Conflicts often arise during the requirements gathering and analysis phase. This research aims to identify conflicts during the requirements gathering phase of the software development life cycle. Research, development, and technology have converted the world into a global village. During the requirements elicitation/gathering phase, it is very difficult to understand the main objectives of stakeholders. After completion of the requirements elicitation task, the final results are used for the Software Requirements Specification (SRS), which is the most important outcome of the requirements analysis phase and the foundation between the developers and the stakeholders or customers. The proposed methodology will help to identify such conflicts in a very easy manner during the initial phase of the project.

Keywords: goal oriented requirements analysis, conflicts identification model, requirements analysis, requirements engineering

Procedia PDF Downloads 120