Search results for: scaling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 120

30 A Finite Precision Block Floating Point Treatment to Direct Form, Cascaded and Parallel FIR Digital Filters

Authors: Abhijit Mitra

Abstract:

This paper proposes an efficient finite precision block floating point (BFP) treatment of the fixed coefficient finite impulse response (FIR) digital filter. The treatment includes effective implementation of all three forms of the conventional FIR filter, namely, direct form, cascaded and parallel, and a roundoff error analysis of them in the BFP format. An effective block formatting algorithm together with an adaptive scaling factor is proposed to make the realizations simpler from the hardware viewpoint. To this end, a generic relation between the tap weight vector length and the input block length is deduced. The implementation scheme also emphasises a simple block exponent update technique to prevent overflow even during the block-to-block transition phase. The roundoff noise is investigated along analogous lines, taking these implementation issues into consideration. The simulation results show that the BFP roundoff errors depend on the signal level almost in the same way as floating point roundoff noise, resulting in an approximately constant signal to noise ratio over a relatively large dynamic range.
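
As a rough illustration of the block formatting idea sketched above (one shared exponent per input block, mantissas quantized to a fixed word length), the following Python sketch filters a signal block by block. The block length, mantissa width and one-bit headroom are hypothetical choices, not the paper's algorithm.

```python
import numpy as np

def bfp_fir(x, h, block_len=32, mant_bits=12):
    """Direct-form FIR filtering with a block floating point (BFP) input.

    Each input block shares one exponent; mantissas are rounded to
    `mant_bits` bits. A sketch of the idea only -- the paper's block
    formatting and exponent-update rules are more refined.
    """
    y = np.zeros(len(x))
    state = np.zeros(len(h) - 1)            # filter memory across blocks
    for start in range(0, len(x), block_len):
        blk = x[start:start + block_len]
        peak = np.max(np.abs(blk))
        # Shared block exponent: smallest e with |x| < 2**e, plus 1 bit headroom.
        e = int(np.ceil(np.log2(peak))) + 1 if peak > 0 else 0
        scale = 2.0 ** (mant_bits - 1 - e)
        mant = np.round(blk * scale)        # quantized mantissas, one per sample
        # Filter the quantized block, carrying raw state over block boundaries.
        full = np.concatenate([state * scale, mant])
        out = np.convolve(full, h, mode="full")[len(state):len(state) + len(blk)]
        y[start:start + len(blk)] = out / scale   # restore with the block exponent
        state = np.concatenate([state, blk])[-(len(h) - 1):]
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=1000) * np.linspace(0.01, 1.0, 1000)   # wide dynamic range
h = np.ones(8) / 8.0                                        # simple FIR kernel
err = bfp_fir(x, h) - np.convolve(x, h)[:1000]
print("BFP roundoff RMS:", np.sqrt(np.mean(err**2)))
```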

Keywords: Finite impulse response digital filters, Cascade structure, Parallel structure, Block floating point arithmetic, Roundoff error.

29 Qualitative Data Analysis for Health Care Services

Authors: Taner Ersoz, Filiz Ersoz

Abstract:

This study was designed to enable the application of multivariate techniques to the interpretation of categorical data for measuring health care service satisfaction in Turkey. The data were collected from a total of 17726 respondents. The establishment of the sample group and the collection of the data were carried out by a joint team from the Ministry of Health and the Turkish Statistical Institute (TurkStat) of Turkey. Multiple correspondence analysis (MCA) was applied to the data of the 2882 respondents who answered the questionnaire in full. The MCA indicated that, in the evaluation of health services, females, public employees, and younger and more highly educated individuals were more concerned and more likely to complain than males, private sector employees, and older and less educated individuals. Overall, 53% of the respondents were pleased with the improvements in health care services over the past three years. This study demonstrates the public consciousness of health services and health care satisfaction in Turkey. Awareness of health service quality increases with education level, and older individuals and males appear to have lower expectations of health services.
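
For readers unfamiliar with the technique, MCA can be carried out as a correspondence analysis of the one-hot indicator matrix. A minimal sketch follows; the survey variables below are hypothetical stand-ins, not the actual questionnaire items.

```python
import numpy as np
import pandas as pd

def mca_coordinates(df, n_dims=2):
    """Multiple correspondence analysis via SVD of the indicator matrix."""
    Z = pd.get_dummies(df).to_numpy(dtype=float)   # one-hot indicator matrix
    P = Z / Z.sum()                                # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)            # row / column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U[:, :n_dims] * sv[:n_dims]) / np.sqrt(r)[:, None]
    col_coords = (Vt[:n_dims].T * sv[:n_dims]) / np.sqrt(c)[:, None]
    return row_coords, col_coords, sv[:n_dims] ** 2   # principal inertias

# Hypothetical respondents: sex, employment, satisfaction with services.
df = pd.DataFrame({
    "sex": ["F", "M", "F", "M", "F", "M"],
    "employer": ["public", "private", "public", "public", "private", "private"],
    "satisfied": ["no", "yes", "no", "yes", "yes", "yes"],
})
rows, cols, inertias = mca_coordinates(df)
print("category coordinates (dim 1, dim 2):\n", np.round(cols, 3))
print("principal inertias:", np.round(inertias, 3))
```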

Keywords: Multiple correspondence analysis, optimal scaling, multivariate categorical data, health care services, health satisfaction survey, statistical visualizing, Turkey.

28 Material and Parameter Analysis of the PolyJet Process for Mold Making Using Design of Experiments

Authors: A. Kampker, K. Kreisköther, C. Reinders

Abstract:

Since additive manufacturing technologies constantly advance, the use of this technology in mold making seems reasonable. Many manufacturers of additive manufacturing machines, however, do not offer any suggestions on how to parameterize the machine to achieve optimal results for mold making. The purpose of this research is to determine the interdependencies of different materials and parameters within the PolyJet process by using design of experiments (DoE), to additively manufacture molds, e.g. for thermoforming and injection molding applications. Therefore, the general requirements of thermoforming molds, such as heat resistance, surface quality and hardness, have been identified. Then, different materials and parameters of the PolyJet process, such as the orientation of the printed part, the layer thickness, the printing mode (matte or glossy), the distance between printed parts and the scaling of parts, have been examined. The multifactorial analysis covers the following properties of the printed samples: tensile strength, tensile modulus, bending strength, elongation at break, surface quality, heat deflection temperature and surface hardness. The key objective of this research is that, by joining the results from the DoE with the requirements of mold making, optimal and tailored molds can be additively manufactured with the PolyJet process. These additively manufactured molds can then be used in prototyping processes, in process testing and in small to medium batch production.
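
A minimal sketch of the kind of two-level full-factorial design and main-effect estimation described here; the factor names, coding and response values below are hypothetical, not the study's measurements.

```python
import itertools
import numpy as np

factors = {                       # hypothetical PolyJet settings, coded -1/+1
    "orientation": [-1, +1],      # e.g. XY vs Z build orientation
    "layer":       [-1, +1],      # thin vs thick layer
    "finish":      [-1, +1],      # matte vs glossy printing mode
}
design = np.array(list(itertools.product(*factors.values())), dtype=float)

rng = np.random.default_rng(1)
# Hypothetical measured response (e.g. surface hardness) for each run.
hardness = (70 + 4 * design[:, 0] - 6 * design[:, 1] + 2 * design[:, 2]
            + rng.normal(0, 0.5, len(design)))

# Main effect of each factor = mean(high level) - mean(low level).
for name, col in zip(factors, design.T):
    effect = hardness[col > 0].mean() - hardness[col < 0].mean()
    print(f"main effect of {name}: {effect:+.2f}")
```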

Keywords: Additive manufacturing, design of experiments, mold making, PolyJet.

27 Effect of Reynolds Number on Wall-normal Turbulence Intensity in a Smooth and Rough Open Channel Using both Outer and Inner Scaling

Authors: Md Abdullah Al Faruque, Ram Balachandar

Abstract:

Sudden changes of bed condition are frequent in open channel flow. A change of bed condition affects the turbulence characteristics in both the streamwise and wall-normal directions. Understanding turbulence intensity in open channel flow is of vital importance to the modeling of sediment transport and resuspension, bed formation, entrainment, and the exchange of energy and momentum. A comprehensive study was carried out to understand the extent of the effect of Reynolds number and bed roughness on different turbulence characteristics in an open channel flow. Four different bed conditions (impervious smooth bed, impervious continuous rough bed, pervious rough sand bed, and impervious distributed roughness) and two different Reynolds numbers were adopted for this purpose. The effect of bed roughness on the turbulence characteristics is seen to be prevalent over most of the flow depth. The effect of Reynolds number is also evident for flow over the different beds, but its extent varies with bed condition. Although the same sand grain is used to create the different rough bed conditions, the difference in turbulence characteristics indicates that the specific geometry of the roughness influences the turbulence. Roughness increases the contribution of extreme turbulent events, which produce very large instantaneous Reynolds shear stress and can potentially influence sediment transport, resuspend pollutants from the bed and alter the nutrient composition, eventually affecting the sustainability of benthic organisms.
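
The two normalizations named in the title can be sketched as follows; the profile values, friction velocity, maximum velocity and flow depth below are hypothetical, for illustration only.

```python
import numpy as np

# Inner (wall) and outer scaling of a wall-normal turbulence intensity profile.
y = np.linspace(0.001, 0.05, 50)        # wall-normal position [m] (hypothetical)
v_rms = 0.02 * np.exp(-30 * y) + 0.005  # wall-normal turbulence intensity [m/s]
u_tau, U_max, depth = 0.015, 0.40, 0.05 # friction velocity, max velocity, depth
nu = 1e-6                               # kinematic viscosity of water [m^2/s]

inner = {"y+": y * u_tau / nu, "v+": v_rms / u_tau}       # inner scaling
outer = {"y/h": y / depth,     "v/Umax": v_rms / U_max}   # outer scaling
print("inner-scaled intensity near wall:", np.round(inner["v+"][:3], 3))
print("outer-scaled intensity near wall:", np.round(outer["v/Umax"][:3], 4))
```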

Keywords: Open channel flow, Reynolds number, roughness, turbulence.

26 Probabilistic Wavelet Neural Network Based Vibration Analysis of Induction Motor Drive

Authors: K. Jayakumar, S. Thangavel

Abstract:

This paper proposes effective fault detection for industrial drives using a Biorthogonal Posterior Vibration Signal-Data Probabilistic Wavelet Neural Network (BPPVS-WNN) system. The system focuses on reducing the current flow and on identifying faults with shorter execution time, using harmonic values obtained through the fifth derivative. Initially, the construction of the biorthogonal vibration signal-data based wavelet transform in the BPPVS-WNN system localizes the signal in time and frequency. The biorthogonal wavelet approximates the broken bearing signal using dual scaling factors and identifies the transient disturbance due to a fault on the induction motor through the approximation and detail coefficients. The posterior probabilistic neural network detects the final level of faults using the detail coefficients up to the fifth decomposition level, and obtains results at a faster rate for a constant frequency signal on the industrial drive. Experiments in Simulink distinguish healthy from unhealthy motors by measuring parametric factors such as time-based fault detection rate, current flow rate, and execution time.
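
The biorthogonal decomposition step can be sketched with PyWavelets; the wavelet choice ('bior3.5'), the synthetic vibration signal and the band-energy features below are illustrative assumptions, and the posterior probabilistic network stage is not reproduced.

```python
import numpy as np
import pywt  # PyWavelets

fs = 10_000                                    # hypothetical sampling rate [Hz]
t = np.arange(0, 1, 1 / fs)
vib = np.sin(2*np.pi*50*t) + 0.3*np.sin(2*np.pi*1200*t)   # synthetic vibration
vib[4000:4040] += 2.0                          # injected transient "fault"

# Five-level biorthogonal decomposition; band energies as simple features.
coeffs = pywt.wavedec(vib, "bior3.5", level=5)  # [cA5, cD5, cD4, ..., cD1]
labels = ["cA5"] + [f"cD{k}" for k in range(5, 0, -1)]
for name, c in zip(labels, coeffs):
    print(f"{name}: energy = {np.sum(c**2):10.2f}")
```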

Keywords: Biorthogonal Wavelet Transform, Posterior Probabilistic Neural Network, Induction Motor.

25 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, service providers typically have to explicitly manage traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing traffic distribution across the network and avoiding congestion hot spots. Raj P H and SVK Raja designed a Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network, and the congestion hot spots are identified on the basis of the CPDs. Traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning Progress) Node Popularity (PNP) approach, to identify the congestion hot spots from the network topology alone. In the PNP approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network. We first illustrate our approach with a simple network and then present a formal analysis of the PNP approach. Our analysis shows that for any given network, the PNP approach identifies exactly the same hot spots as the Bayesian approach, with minimal effort. We further extend the result to the more general case of arbitrary network topologies, including loopy networks. A theoretical insight of our result is that the optimal routing is always shortest path routing with respect to some consideration of hot spots in the network.
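
The abstract does not spell out the popularity metric itself, so the sketch below uses betweenness centrality as a plausible topology-only proxy for node popularity; the toy backbone is hypothetical.

```python
import networkx as nx

# Topology-only hot-spot screening: nodes carrying the largest fraction of
# shortest paths are candidate congestion hot spots (a proxy, not the PNP metric).
G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"),
                  ("E", "F"), ("C", "F"), ("F", "G")])   # toy MPLS backbone

popularity = nx.betweenness_centrality(G)   # fraction of shortest paths via node
hot_spots = sorted(popularity, key=popularity.get, reverse=True)[:2]
print({n: round(popularity[n], 3) for n in popularity})
print("likely congestion hot spots:", hot_spots)
```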

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

24 A Refined Nonlocal Strain Gradient Theory for Assessing Scaling-Dependent Vibration Behavior of Microbeams

Authors: Xiaobai Li, Li Li, Yujin Hu, Weiming Deng, Zhe Ding

Abstract:

A size-dependent Euler-Bernoulli beam model, which accounts for the nonlocal stress field, the strain gradient field and a higher order inertia force field, is derived based on the nonlocal strain gradient theory considering the velocity gradient effect. The governing equations and boundary conditions are derived in both dimensional and dimensionless form by employing Hamilton's principle. The analytical solutions based on different continuum theories are compared. The effect of the higher order inertia terms is extremely significant in the high frequency range. It is found that there exists an asymptotic frequency for the proposed beam model, whereas the solutions of the nonlocal strain gradient theory diverge. The effect of the strain gradient field in the thickness direction is significant in the low frequency domain, and it cannot be neglected when the material strain length scale parameter is comparable to the beam thickness. The influence of each of the three size effect parameters on the natural frequencies is investigated. The natural frequencies increase with increasing material strain gradient length scale parameter, or with decreasing velocity gradient length scale parameter and nonlocal parameter.
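
For orientation, a hedged special case: under the plain nonlocal strain gradient theory (without the velocity gradient and higher order inertia extensions introduced in this paper), the natural frequencies of a simply supported beam take the closed form below, where l is the strain gradient length scale and e0a the nonlocal parameter. The monotonic trends quoted above (frequencies rising with l, falling with e0a) can be read off directly.

```latex
% Simply supported nonlocal strain gradient Euler--Bernoulli beam
% (velocity gradient and higher-order inertia terms of this paper omitted):
\[
  \omega_n^2 \;=\; \frac{EI}{\rho A}\, k_n^4\,
  \frac{1 + l^2 k_n^2}{1 + (e_0 a)^2 k_n^2},
  \qquad k_n = \frac{n\pi}{L}.
\]
```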

Keywords: Euler-Bernoulli Beams, free vibration, higher order inertia, nonlocal strain gradient theory, velocity gradient.

23 Robust Face Recognition using AAM and Gabor Features

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seoungseon Jeon, Jaemin Kim, Seongwon Cho

Abstract:

In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors, which are known to be robust with respect to small variations of shape, scaling, rotation, distortion, illumination and pose in images, are popularly employed as feature vectors for many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points at which Gabor feature vectors are extracted. However, the localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values, and wrong localization of facial feature points degrades the face recognition rate. AAM is known to be successful at localizing facial feature points. In this paper, we devise a facial feature point localization method which first roughly estimates the facial feature points using AAM and then refines them using Gabor jet similarity-based localization initialized with the rough AAM estimates, and we propose a face recognition algorithm that uses the devised localization method together with Gabor feature vectors. Experiments show that such a cascaded localization method, based on both AAM and Gabor jet similarity, is more robust than localization based on Gabor jet similarity alone. They also show that the proposed face recognition algorithm performs better than conventional algorithms, such as EBGM, that use Gabor jet similarity-based localization and Gabor feature vectors.
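
The Gabor jet step can be sketched with OpenCV; the kernel size, scales and orientations below are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def gabor_jet(gray, point, scales=(4, 8), orientations=8):
    """Gabor jet: filter-bank responses sampled at one facial feature point."""
    x, y = point
    jet = []
    for lambd in scales:                       # wavelengths of the carrier
        for k in range(orientations):          # evenly spaced orientations
            theta = k * np.pi / orientations
            kern = cv2.getGaborKernel((31, 31), sigma=lambd, theta=theta,
                                      lambd=lambd, gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), -1, kern)
            jet.append(resp[y, x])
    return np.array(jet)

img = np.random.default_rng(2).integers(0, 255, (128, 128)).astype(np.uint8)
jet_a = gabor_jet(img, (64, 64))
jet_b = gabor_jet(img, (66, 64))
# EBGM-style normalized dot-product similarity between two jets.
similarity = jet_a @ jet_b / (np.linalg.norm(jet_a) * np.linalg.norm(jet_b))
print("Gabor jet similarity:", round(float(similarity), 4))
```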

Keywords: Face Recognition, AAM, Gabor features, EBGM.

22 Kinetics and Removal of Amoxicillin Using Aliquat336 as a Carrier via a HFSLM

Authors: Teerapon Pirom, Ura Pancharoen

Abstract:

Amoxicillin is an antibiotic which is widely used to treat various infections in both human beings and animals. However, amoxicillin released into the environment is a major problem, as it promotes bacterial resistance to these drugs and failure of antibiotic treatment. The liquid membrane is of great interest as a promising method for the separation and recovery of target ions from aqueous solutions due to the use of carriers for the transport mechanism, resulting in high selectivity and rapid transport of the desired species. The simultaneous extraction and stripping processes in a single liquid membrane unit operation are very attractive. It is therefore practical to apply liquid membranes, particularly the HFSLM, in industrial applications, as the HFSLM has proved to be a separation process with lower capital and operating costs, low energy use, long extractant lifetime, high selectivity and high fluxes compared with solid membranes. It is a simple design amenable to scaling up for industrial applications. The extraction and recovery of amoxicillin through the hollow fiber supported liquid membrane (HFSLM) using Aliquat336 as a carrier were explored experimentally. The important variables affecting the transport of amoxicillin, viz. extractant concentration and operating time, were investigated. The highest amoxicillin extraction of 85.35% and stripping of 80.04% were achieved under the best conditions of 6 mmol/L Aliquat336 and an operating time of 100 min. The extraction reaction order (n) and the extraction reaction rate constant (kf) were found to be 1.00 and 0.0344 min⁻¹, respectively.
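
The reported first-order kinetics can be checked with a short log-linear fit; the concentration samples below are synthetic, generated from the reported kf purely for illustration.

```python
import numpy as np

# First-order kinetics: ln(C0/C) = kf * t, so kf is the slope of a linear fit.
t = np.array([0, 20, 40, 60, 80, 100], dtype=float)    # time [min]
C0 = 10.0                                               # feed concentration [mg/L]
C = C0 * np.exp(-0.0344 * t)                            # synthetic data

slope, intercept = np.polyfit(t, np.log(C0 / C), 1)
print(f"estimated kf = {slope:.4f} 1/min (paper reports 0.0344 1/min)")
```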

Keywords: Aliquat336, amoxicillin, HFSLM, kinetic.

21 Associated Map and Inter-Purchase Time Model for Multiple-Category Products

Authors: Ching-I Chen

Abstract:

The continued rise of e-commerce is the main driver of the rapid growth of global online purchasing. Consumers can buy nearly everything they want on a single occasion through online shopping. Purchase behavior models that focus on a single product category are therefore insufficient to describe online shopping behavior, and the analysis of multi-category purchases is becoming more and more popular. For example, market basket analysis explores customers' tendency to buy associated product categories. The information derived from market basket analysis facilitates cross-selling strategies and product recommendation systems.

To detect the association between different product categories, we use market basket analysis with the multidimensional scaling technique to build an associated map which describes how likely multiple product categories are to be bought at the same time. Besides, we build an inter-purchase time model for associated products to describe how likely a product is to be bought after its associated product is bought. We classify the inter-purchase time behaviors of multi-category products into nine types and use a mixture regression model to integrate those behaviors under our assumptions about purchase sequences. Our sample data are from comScore, which provides a panelist-level database that captures detailed browsing and buying behavior of internet users across the United States. We find that the inter-purchase time from books to movies is shorter than the inter-purchase time from movies to books. Based on the model analysis and empirical results, this research finally proposes applications and recommendations for management.
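
A minimal sketch of the associated-map step (co-occurrence counts turned into dissimilarities and embedded with MDS); the categories and baskets are hypothetical, and the mixture regression stage is not reproduced.

```python
import numpy as np
from sklearn.manifold import MDS

categories = ["books", "movies", "music", "games"]
baskets = [{"books", "movies"}, {"movies", "music"}, {"books", "movies", "music"},
           {"games"}, {"movies", "games"}, {"books"}]

# Co-purchase counts between every pair of categories.
n = len(categories)
co = np.zeros((n, n))
for b in baskets:
    for i, ci in enumerate(categories):
        for j, cj in enumerate(categories):
            co[i, j] += (ci in b) and (cj in b)

# High co-occurrence = small distance on the associated map.
dissim = 1.0 - co / co.max()
np.fill_diagonal(dissim, 0.0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
for cat, (x, y) in zip(categories, coords):
    print(f"{cat:>6}: ({x:+.2f}, {y:+.2f})")
```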

Keywords: Multiple-category purchase behavior, inter-purchase time, market basket analysis.

20 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks, such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely blending it into its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, including those affected by such attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform, DWT, of the suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low frequency subband of the DWT and thereby discard the valuable information in the other three subbands, the proposed algorithm extracts features from all four subbands simultaneously. The extracted features are not only a more accurate representation of the image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition, PCA-EVD, is applied to reduce the dimension of the features. The extracted features are then sorted using the computationally efficient radix sort algorithm. Finally, same affine transformation selection, SATS, a duplication verification method, is applied to detect the duplicated regions. The proposed algorithm is not only fast but also more robust to attacks than related CMIF detection algorithms. The experimental results show high detection rates.
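
A compact sketch of the front end of this pipeline (DWT, features from all four subbands, lexicographic sorting of overlapping blocks); PCA-EVD, radix sort and the SATS verification stage are omitted, and the test image is synthetic.

```python
import numpy as np
import pywt

def find_duplicate_blocks(img, block=8):
    """Toy copy-move check: block features from all four DWT subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    stack = np.stack([cA, cH, cV, cD])             # all four subbands
    h, w = cA.shape
    feats, pos = [], []
    for r in range(0, h - block + 1, 2):           # overlapping blocks
        for c in range(0, w - block + 1, 2):
            feats.append(stack[:, r:r+block, c:c+block].ravel())
            pos.append((r, c))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])              # lexicographic sort
    matches = []
    for a, b in zip(order[:-1], order[1:]):        # compare sorted neighbours
        if np.abs(feats[a]).max() < 1e-9:
            continue                               # skip flat background blocks
        if np.allclose(feats[a], feats[b], atol=1e-6):
            matches.append((pos[a], pos[b]))
    return matches

img = np.zeros((64, 64))
img[4:20, 4:20] = np.arange(256).reshape(16, 16)   # a textured patch...
img[40:56, 32:48] = img[4:20, 4:20]                # ...duplicated elsewhere
print("matched block pairs:", len(find_duplicate_blocks(img)))
```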

Keywords: Affine Transformation, Discrete Wavelet Transform, Radix Sort, SATS.

19 Rotation Invariant Fusion of Partial Image Parts in Vista Creation using Missing View Regeneration

Authors: H. B. Kekre, Sudeep D. Thepade

Abstract:

The automatic construction of large, high-resolution image vistas (mosaics) is an active area of research in the fields of photogrammetry [1,2], computer vision [1,4], medical image processing [4], computer graphics [3] and biometrics [8]. Image stitching is one possible way to obtain image mosaics. Vista creation in image processing is used to construct an image with a larger field of view than could be obtained with a single photograph. It refers to transforming and stitching multiple images into a new aggregate image without any visible seam or distortion in the overlapping areas. The vista creation process aligns two partial images over each other and blends them together. Image mosaics allow one to compensate for differences in viewing geometry; thus they can be used to simplify tasks by simulating the condition in which the scene is viewed from a fixed position with a single camera. While obtaining partial images, geometric anomalies such as rotation and scaling are bound to occur. To nullify the effect of rotation of partial images on the vista creation process, we propose a rotation-invariant vista creation algorithm in this paper. Rotation of partial image parts in the proposed method may introduce missing regions in the vista. To correct this error, that is, to fill the missing regions, we apply an image inpainting method to the created vista. This missing view regeneration method also overcomes the problem of missing views [31] in the vista due to cropping, irregular boundaries of partial image parts and errors in digitization [35]. The method regenerates the missing view of the vista using the information present in the vista itself.
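
The missing view regeneration step can be sketched with OpenCV's inpainting; the stand-in vista and the assumption that missing pixels are exactly zero are illustrative simplifications, not the paper's method.

```python
import cv2
import numpy as np

rng = np.random.default_rng(8)
vista = rng.integers(0, 255, (200, 300, 3), dtype=np.uint8)  # stand-in stitched vista
vista[60:120, 140:200] = 0                # region lost to rotation/cropping

# Mask of missing pixels, then fill from surrounding vista content.
mask = np.all(vista == 0, axis=2).astype(np.uint8) * 255
filled = cv2.inpaint(vista, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
print("remaining empty pixels:", int(np.all(filled == 0, axis=2).sum()))
```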

Keywords: Vista, Overlap Estimation, Rotation Invariance, Missing View Regeneration.

18 Gate Tunnel Current Calculation for NMOSFET Based on Deep Sub-Micron Effects

Authors: Ashwani K. Rana, Narottam Chand, Vinod Kapoor

Abstract:

Aggressive scaling of MOS devices requires the use of ultra-thin gate oxides to maintain reasonable short channel behavior and to take advantage of higher density, higher speed, lower cost etc. Such thin oxides give rise to high electric fields, resulting in considerable gate tunneling current through the gate oxide in the nano regime. Consequently, accurate analysis of the gate tunneling current is very important, especially in the context of low power applications. In this paper, a simple and efficient analytical model has been developed for the gate tunneling current through the channel and source/drain overlap regions of an ultra-thin gate oxide n-channel MOSFET, including the inevitable deep sub-micron effects (DSME). The results have been verified against simulated and reported experimental results for validation. It is shown that the calculated tunnel current fits the measured one well over the entire oxide thickness range. The proposed model is simple enough to be used in circuit simulators. It is observed that neglecting the deep sub-micron effects may lead to large errors in the calculated gate tunneling current, while temperature has an almost negligible effect. It is also found that the gate tunneling current decreases as the gate oxide thickness increases. The impact of the source/drain overlap length on the gate tunneling current is also assessed.
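
For orientation only, a generic Fowler-Nordheim-type expression J = A·E²·exp(−B/E) reproduces the qualitative thickness trend reported above; A and B below are hypothetical fitting constants, and the paper's own analytical model (which adds the deep sub-micron effects) is not reproduced here.

```python
import numpy as np

A, B = 1.0e-6, 2.5e10           # hypothetical constants [A/V^2], [V/m]
V_gate = 1.2                     # gate voltage [V]
t_ox = np.array([1.0, 1.5, 2.0, 2.5]) * 1e-9   # oxide thicknesses [m]

E_ox = V_gate / t_ox             # oxide field [V/m]
J = A * E_ox**2 * np.exp(-B / E_ox)
for t, j in zip(t_ox, J):
    # Current density falls steeply as the oxide thickens (field drops).
    print(f"t_ox = {t*1e9:.1f} nm -> J = {j:.3e} A/m^2")
```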

Keywords: Gate tunneling current, analytical model, gate dielectrics, non-uniform poly gate doping, MOSFET, fringing field effect, image charges.

17 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge from a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed which accounts for the abnormal fluctuation scaling known as Taylor's law. The method is extended to handle sales data that are incomplete because of stock-outs, by introducing maximum likelihood estimation for censored data. A way of determining optimal stock levels that prices the cost of waste reduction is also proposed. This study focuses on large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods for reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss achieves substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, a profit loss of around 1% halves disposal at a constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for reducing waste while keeping a high profit, especially with large sales numbers.
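
For large sales numbers, Taylor's law with the standard deviation proportional to the mean suggests a simple order-up-to rule, sketched below; the item means, service level z and the simulated history are hypothetical, and the paper's particle filter and censored-data MLE are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([200.0, 500.0, 1000.0])    # mean daily sales per item
c, z = 0.12, 1.64                         # Taylor constant (sigma = c*mu); ~95% service

daily = rng.normal(mu, c * mu, size=(365, mu.size))   # synthetic POS history
c_hat = (daily.std(axis=0) / daily.mean(axis=0)).mean()
stock = mu * (1 + z * c_hat)              # order-up-to level per item
waste = np.maximum(stock - daily, 0).mean(axis=0)     # expected daily leftovers
print(f"estimated c = {c_hat:.3f}; stock levels = {np.round(stock)}")
print("mean leftovers per day:", np.round(waste, 1))
```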

Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.

16 Technology Identification, Evaluation and Selection Methodology for Industrial Process Water and Waste Water Treatment Plant of 3x150 MWe Tufanbeyli Lignite-Fired Power Plant

Authors: Cigdem Safak Saglam

Abstract:

Most thermal power plants use steam as the working fluid in their power cycle. Therefore, in addition to fuel, water is the other main input for thermal plants. Water and steam must be highly pure in order to protect the systems from corrosion, scaling and biofouling. Pure process water is produced in water treatment plants employing several treatment methods. The treatment plant design is selected depending on the raw water source and the required water quality. Although the working principle of fossil-fuel fired thermal power plants is the same, there is no standard design and equipment arrangement valid for all thermal power plant utility systems. Besides, there are many other technology evaluation and selection criteria for designing optimal water systems that meet the requirements, such as local conditions, environmental restrictions, the availability and transport of electricity and other consumables, process water sources and scarcity, land use constraints, etc. The aim of this study is to explain the methodology adopted for technology selection for the process water preparation and industrial waste water treatment plant of a thermal power plant project located in Tufanbeyli, Adana Province, Turkey. The thermal power plant is fired with indigenous lignite coal extracted from adjacent lignite reserves. This paper addresses all the above-mentioned factors affecting the design of the thermal power plant water treatment facilities (demineralization + waste water treatment) and describes the ultimate design of the Tufanbeyli Thermal Power Plant Water Treatment Plant.

Keywords: Thermal power plant, lignite coal, pre-treatment, demineralization, electrodialysis, recycling, waste water, process water.

15 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time: a coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent, i.e., it depends only on the bulk properties of the disordered granular sample, the sound wave velocity of the sample and hence the bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The particle sizes are selected randomly from a Gaussian distribution, whose standard deviation is the relevant parameter quantifying the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging has been used to improve the signal quality of the coherent pulse and remove the multiply scattered waves. The results concerning the width of the coherent wavefront are formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
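
A minimal sketch of such a simulation (Hertzian contacts, Gaussian polydispersity, velocity-Verlet integration); all parameter values below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
N, dt, steps = 100, 2e-7, 2500
radii = 1e-3 * (1 + 0.05 * rng.normal(size=N))     # 5% polydispersity [m]
mass = 2500 * (4/3) * np.pi * radii**3             # grain masses [kg]
x = np.cumsum(2 * radii) - radii                   # touching-chain positions
v = np.zeros(N); v[0] = 0.1                        # impulse on first grain [m/s]
kappa = 1e9                                        # Hertz stiffness [N/m^1.5]

def forces(x):
    f = np.zeros(N)
    overlap = (x[:-1] + radii[:-1] + radii[1:]) - x[1:]   # contact overlaps
    fc = kappa * np.where(overlap > 0, overlap, 0.0) ** 1.5
    f[:-1] -= fc                                   # pushed back...
    f[1:] += fc                                    # ...and forward (Newton's 3rd law)
    return f

a = forces(x) / mass
for _ in range(steps):                             # velocity-Verlet integration
    x += v * dt + 0.5 * a * dt**2
    a_new = forces(x) / mass
    v += 0.5 * (a + a_new) * dt
    a = a_new
print("coherent wavefront near grain:", int(np.argmax(np.abs(v))))
```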

Keywords: Discrete elements, Hertzian Contact, polydispersity, weakly nonlinear, wave propagation.

14 Quality Classification and Monitoring Using Adaptive Metric Distance and Neural Networks: Application in Pickling Process

Authors: S. Bouhouche, M. Lahreche, S. Ziani, J. Bast

Abstract:

Modern manufacturing facilities are large scale and highly complex, and operate with a large number of variables under closed loop control. Early and accurate fault detection and diagnosis for these plants can minimise downtime, increase the safety of plant operations, and reduce manufacturing costs. Fault detection and isolation is more complex in the case of faulty analog control systems, which are not equipped with a monitoring function in which the process parameters are continually visualised. In this situation, it is very difficult to find the relationship between the importance of a fault and its consequences for product failure. In this paper we consider an approach to fault detection, and to the analysis of its effect on production quality, using adaptive centring and scaling in the pickling process in cold rolling. The fault appeared in one of the power units driving a rotary machine; this machine cannot track a reference speed given by another machine. The length of the metal loop then oscillates continuously, which affects product quality. Using a computerised data acquisition system, the main machine parameters have been monitored, and the fault has been detected and isolated on the basis of an analysis of the monitored data. Normal and faulty situations have been reproduced by an artificial neural network (ANN) model implemented to simulate the normal and faulty states of the rotary machine. The correlation between the product quality, defined by an index, and the residual is used for quality classification.
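
One simple reading of adaptive centring and scaling is an exponentially weighted tracker of mean and variance whose standardized residual is thresholded; the sketch below uses a hypothetical loop-length signal and thresholds, not the plant data.

```python
import numpy as np

def adaptive_monitor(signal, lam=0.02, n_sigma=4.0):
    """Flag samples whose adaptively centred/scaled residual exceeds n_sigma."""
    mu, var = signal[0], 1.0
    alarms = []
    for i, s in enumerate(signal):
        z = (s - mu) / np.sqrt(var)        # adaptively centred/scaled residual
        if abs(z) > n_sigma:
            alarms.append(i)
        mu = (1 - lam) * mu + lam * s      # EWMA update of the centre...
        var = (1 - lam) * var + lam * (s - mu) ** 2   # ...and of the scale
    return alarms

rng = np.random.default_rng(5)
loop_length = 10 + 0.5 * rng.normal(size=2000)   # metal loop length signal
loop_length[1200:] += 5.0                        # sticking drive raises the loop
alarms = adaptive_monitor(loop_length)
print("first alarm at sample:", alarms[0] if alarms else "none")
```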

Keywords: Modeling, parameter estimation, neural networks, fault detection and diagnosis (FDD), pickling process.

13 Integrated Subset Split for Balancing Network Utilization and Quality of Routing

Authors: S. V. Kasmir Raja, P. Herbert Raj

Abstract:

The overlay approach has been widely used by many service providers for Traffic Engineering (TE) in large Internet backbones. In the overlay approach, logical connections are set up between edge nodes to form a full mesh virtual network on top of the physical topology. IP routing is then run over the virtual network. Traffic engineering objectives are achieved by carefully routing the logical connections over the physical links. Although the overlay approach has been implemented in many operational networks, it has a number of well-known scaling issues. This paper proposes a new approach to achieve traffic engineering without full-mesh overlaying, using an integrated approach and an equal subset split method. Traffic engineering needs to determine the optimal routing of traffic over the existing network infrastructure by efficiently allocating resources in order to optimize traffic performance on an IP network. Even though constraint-based routing [1] in Multi-Protocol Label Switching (MPLS) was developed to address this need, it is not widely tested or debugged, and Internet Service Providers (ISPs) therefore resort to TE methods under Open Shortest Path First (OSPF), the most commonly used intra-domain routing protocol. Determining OSPF link weights for optimal network performance is an NP-hard problem. As it is not possible to solve this problem exactly, we present a subset split method that improves efficiency and performance by minimizing the maximum link utilization in the network via a small number of link weight modifications. The results of this method are compared against the results of the MPLS architecture [9] and other heuristic methods.
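
A toy version of the underlying optimization (choose OSPF link weights to minimize the maximum link utilization) can be written in a few lines; the exhaustive search below stands in for the paper's subset split heuristic, and the topology and demands are hypothetical.

```python
import itertools
import networkx as nx

edges = [("a", "b", 10), ("b", "c", 10), ("a", "d", 10),
         ("d", "c", 10), ("b", "d", 10)]                  # (u, v, capacity)
demands = {("a", "c"): 8, ("b", "c"): 4}                  # traffic matrix

def max_utilization(weights):
    """Route every demand on its weighted shortest path; report max link load."""
    G = nx.Graph()
    for (u, v, cap), w in zip(edges, weights):
        G.add_edge(u, v, weight=w, cap=cap, load=0.0)
    for (s, t), vol in demands.items():
        path = nx.shortest_path(G, s, t, weight="weight")
        for u, v in zip(path, path[1:]):
            G[u][v]["load"] += vol
    return max(d["load"] / d["cap"] for _, _, d in G.edges(data=True))

# Brute force over small integer weights (a stand-in for the heuristic search).
best = min(itertools.product([1, 2, 3], repeat=len(edges)), key=max_utilization)
print("best weights:", best, "-> max utilization:", round(max_utilization(best), 2))
```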

Keywords: Constraint-based routing, link utilization, subset split method, traffic engineering.

12 The Integration of Cleaner Production Innovation and Creativity for Supply Chain Sustainability of Bogor Batik SMEs

Authors: Sawarni Hasibuan, Juliza Hidayati

Abstract:

Competitiveness and sustainability issues put pressure not only on big companies but also on small and medium enterprises (SMEs). SMEs Batik Bogor is one of the local culture-based creative industries in Bogor city, and it too is dealing with the issue of sustainability. The purpose of this research is to develop a sustainability framework for Indonesian batik SMEs, taking SMEs Batik Bogor as a case, by integrating cleaner production innovation into its supply chain. The approach used comprises desk study, field survey, in-depth interviews, and benchmarking of best practices in SME sustainability. The in-depth interviews involve stakeholders in order to identify the needs and standards of batik SME sustainability. Data analysis was done by the benchmarking method, the Multi Dimension Scaling (MDS) method, and Strength, Weakness, Opportunity, Threat (SWOT) analysis. The results recommend a sustainability framework for batik SMEs in Indonesia. The sustainability status of SMEs Batik Bogor is classified as moderately sustainable. Factors that support the sustainability of SMEs Batik Bogor include a strong commitment of top management to adopting the cleaner production innovation and creativity approach. Successful cleaner production innovations have been implemented primarily in the substitution of dye materials from toxic to non-toxic, the reduction of the intensity of non-renewable energy use, and the reuse and recycling of solid waste. "Mosaic Batik" is one such innovation, utilizing the solid batik waste produced by the company's R&D center and benefiting all three pillars of sustainability: financial, environmental, and social. The sustainability of SMEs Batik Bogor cannot be separated from the support of the Bogor city government, which proactively facilitates the promotion of sustainable innovations produced by SMEs Batik Bogor.

Keywords: Cleaner production innovation, creativity, SMEs Batik, sustainability supply chain.

11 On the Need to have an Additional Methodology for the Psychological Product Measurement and Evaluation

Authors: Corneliu Sofronie, Roxana Zubcov

Abstract:

Cognitive science appeared about 40 years ago, subsequent to the challenge of artificial intelligence, as a common territory for several scientific disciplines: IT, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The newborn science was justified by the complexity of the problems related to human knowledge on the one hand, and on the other by the fact that none of the above-mentioned sciences could explain the mental phenomena alone. Based on the data supplied by the experimental sciences such as psychology or neurology, models of the operation of the human mind are built in cognitive science. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence), called cognitive systems, whose competences and performances are compared to human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction utterly necessary for the mediation between the mentioned sciences. The general problematic of the cognitive approach comprises two important types of approach: the computational one, starting from the idea that mental phenomena can be reduced to calculus operations on 1s and 0s, and the connectionist one, which considers the products of thinking to be a result of the interaction between all the composing (included) systems. In the field of psychology, measurements in the computational register use classical inquiries and psychometric tests, generally based on calculus methods. Considering both sides of cognitive science, we notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole. In such an approach, measurement by calculus proves to be inefficient. Our research, carried out for longer than 20 years, leads to the conclusion that measuring by forms properly fits the laws and principles of connectionism.

Keywords: complementary methodology, connection approach, networks without scaling, quantum psychology.

10 Mapping the Digital Landscape: An Analysis of Party Differences between Conventional and Digital Policy Positions

Authors: Daniel Schwarz, Jan Fivaz, Alessia Neuroni

Abstract:

Although digitization is a buzzword in almost every election campaign, political parties leave voters largely in the dark about their specific positions on digital issues. In the run-up to the 2019 elections in Switzerland, the 'Digitization Monitor' project (DMP) was launched in order to change this situation. Within the framework of the DMP, all 4,736 candidates were surveyed about their digital policy positions and values. The DMP is designed as a digital policy supplement to the existing 'smartvote' voting advice application. This enabled a direct comparison of the digital policy attitudes according to the DMP with the topics of the 'smartvote' questionnaire, which are comprehensive in content but mainly related to conventional policy areas. This paper's main research goal is to analyze and visualize possible differences between conventional and digital policy areas in terms of response patterns between and within political parties. The analysis is based on dimensionality reduction methods (multidimensional scaling and principal component analysis) for the visualization of inter-party differences, and on the standard deviation as a measure of variation for the evaluation of intra-party unity. The results reveal that digital issues show a lower degree of inter-party polarization compared to conventional policy areas; thus, the parties have more common ground on digitization issues than in conventional policy areas. In contrast, the study reveals a mixed picture regarding intra-party unity: homogeneous parties show a lower degree of unity on digitization issues, whereas parties with heterogeneous positions in conventional areas have more united positions in digital areas. All things considered, the findings are encouraging, as less polarized conditions apply to the debate on digital development compared to conventional politics. For the future, it would be desirable for similar projects to the DMP to emerge in other countries, broadening the basis for conclusions.
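
The two measures used here (dimensionality reduction across parties and the standard deviation within parties) can be sketched as follows; the answer matrices below are hypothetical, not the DMP or smartvote data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
parties = {"P1": rng.normal(3.2, 0.4, (40, 10)),   # 40 candidates, 10 items
           "P2": rng.normal(2.0, 0.9, (40, 10)),
           "P3": rng.normal(2.8, 0.6, (40, 10))}

X = np.vstack(list(parties.values()))
coords = PCA(n_components=2).fit_transform(X)       # inter-party issue space
for name, block in parties.items():
    unity = block.std(axis=0).mean()                # low std = united party
    print(f"{name}: mean intra-party std = {unity:.2f}")
print("PCA coordinates of first 2 candidates:\n", np.round(coords[:2], 2))
```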

Keywords: Comparison of political issue dimensions, digital awareness of candidates, digital policy space, party positions on digital issues.

9 Simulation and Parameterization by the Finite Element Method of a C-Shaped Electromagnet for Application in the Characterization of Magnetic Properties of Materials

Authors: A. A. Velásquez, J. Baena

Abstract:

This article presents the simulation, parameterization and optimization of an electromagnet with a C-shaped configuration, intended for the study of the magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding that minimizes losses of magnetic flux density, two poles of high magnetic permeability, and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a square cross section of 4 cm² and a length of 5 cm, seeking a set of parameters that achieve a uniform magnetic flux density of 1×10⁴ Gauss or above in the column when the system operates at room temperature with a current consumption not exceeding 5 A. By means of a magnetostatic analysis with the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial electromagnet configuration, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mumetal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a strong dependence on the factors described above. The magnetic field achieved in the column was uniform, and its magnitude ranged between 1.5×10⁴ Gauss and 1.9×10⁴ Gauss depending on the pole material, with the possibility of increasing the magnetic field by choosing a suitable cap geometry, introducing a cooling system for the coils and adjusting the spacing between the poles. This makes the device a versatile and scalable tool for generating the magnetic field necessary for magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect and Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.
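
As a back-of-envelope cross-check on the FEM target, an ideal high-permeability yoke whose reluctance is dominated by the air gap g gives B ≈ μ0·N·I/g; the number of turns below is a hypothetical value chosen to reach the 1×10⁴ Gauss target at 5 A, not the device's winding.

```python
import numpy as np

# Ideal C-core estimate: the air gap dominates the magnetic circuit reluctance.
mu0 = 4e-7 * np.pi           # vacuum permeability [T*m/A]
N, I, gap = 8000, 5.0, 0.05  # turns (hypothetical), current [A], gap length [m]
B = mu0 * N * I / gap        # flux density in the gap [T]
print(f"B = {B:.3f} T = {B*1e4:.0f} Gauss")   # 1 T = 1e4 Gauss, the target above
```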

Keywords: Electromagnet, Finite Element Method, Magnetostatics, Magnetometry, Modeling.

8 Effect of Prandtl Number on Natural Convection Heat Transfer from a Heated Semi-Circular Cylinder

Authors: Avinash Chandra, R. P. Chhabra

Abstract:

Natural convection heat transfer from a heated horizontal semi-circular cylinder (flat surface upward) has been investigated over wide ranges of Grashof number and Prandtl number. The governing partial differential equations (continuity, Navier-Stokes and energy equations) have been solved numerically using a finite volume formulation. In addition, the role of the type of thermal boundary condition imposed at the cylinder surface, namely constant wall temperature (CWT) or constant heat flux (CHF), is explored. The resulting flow and temperature fields are visualized in terms of the streamline and isotherm patterns in the proximity of the cylinder. The flow remains attached to the cylinder surface over the range of conditions spanned here, except at a few combinations of Grashof and Prandtl numbers at which a separated flow region is observed when the constant wall temperature condition is prescribed on the surface of the cylinder. The heat transfer characteristics are analyzed in terms of the local and average Nusselt numbers. The maximum value of the local Nusselt number always occurs at the corner points, whereas it is found to be minimum at the rear stagnation point on the flat surface. Overall, the average Nusselt number increases with Grashof number and/or Prandtl number in accordance with scaling considerations. The numerical results are used to develop simple correlations as functions of the Grashof and Prandtl numbers, thereby enabling the interpolation of the present numerical results for intermediate values of the Prandtl or Grashof numbers for both thermal boundary conditions.
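
The kind of correlation mentioned in the closing sentence, Nu = c·Gr^a·Pr^b, can be fitted by log-linear least squares; the data below are synthetic stand-ins for the simulation output, with illustrative exponents.

```python
import numpy as np

rng = np.random.default_rng(7)
Gr = 10 ** rng.uniform(2, 5, 40)               # Grashof numbers (synthetic)
Pr = 10 ** rng.uniform(-0.2, 2, 40)            # Prandtl numbers (synthetic)
Nu = 0.5 * Gr**0.25 * Pr**0.15 * np.exp(rng.normal(0, 0.02, 40))

# log Nu = log c + a*log Gr + b*log Pr -> ordinary least squares.
A = np.column_stack([np.ones(40), np.log(Gr), np.log(Pr)])
coef, *_ = np.linalg.lstsq(A, np.log(Nu), rcond=None)
c, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"Nu = {c:.2f} * Gr^{a:.2f} * Pr^{b:.2f}")
```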

Keywords: Constant heat flux, constant surface temperature, Grashof number, natural convection, Prandtl number, semi-circular cylinder.

7 E-learning: An Effective Approach for Enhancing Social and Behavior Change Communication Capacity in Bangladesh

Authors: Mohammad K. Abedin, Mohammad Shahjahan, Zeenat Sultana, Tawfique Jahan, Jesmin Akter

Abstract:

To strengthen the social and behavior change communication (SBCC) capacity of the Ministry of Health and Family Welfare (MoHFW) of the Government of Bangladesh, BCCP/BKMI developed two eLearning courses providing opportunities for the professional development of SBCC program managers who have no access to training or refresher training. The two eLearning courses, Message and Material Development (MMD) and Monitoring and Evaluation (M&E) of SBCC programs, went online in September 2015, where all users could register their participation so results could be monitored. Methodology: To assess the use of these courses, a randomly selected sample was drawn for pre- and post-test analyses, and a phone survey was conducted. Systematic random sampling was used to select samples of 75 M&E and 25 MMD course participants from sampling frames of 179 and 51 respectively. Results: As of September 2016, more than 179 learners had completed the M&E course and 49 learners had completed the MMD course. The users of these courses are program managers, university faculty members, and students. Encouraging results were revealed by the analysis of pre- and post-test scores and by the phone survey conducted three months after course completion. Test scores suggested a substantial increase in knowledge. The pre-test findings suggested that about 19% of learners scored high on the M&E course, while the post-test findings indicated high scores (92%) across the 4 modules of M&E. For the MMD course, 30% of the learners scored high at the pre-test, and 100% scored high at the post-test. All the learners in the phone survey had discussed the courses, mostly with colleagues and friends, usually through face-to-face interaction (70%), and reported that they recommended the two courses to the people concerned. About 67% of M&E and 76% of MMD learners stated that the concepts they learned during the course were put into practice in their work settings. The respondents for both courses provided a valuable set of suggestions that would further strengthen the courses. Conclusions: The study showed that the initiative offered ample opportunities to build capacity in the various ways in which the eLearning courses were used. It also highlighted the importance of scaling up these efforts to further strengthen the outcomes.

Keywords: E-learning course, message and material development, monitoring and evaluation, social and behavior change communication.

6 Long-Term Economic-Ecological Assessment of Optimal Local Heat-Generating Technologies for the German Unrefurbished Residential Building Stock on the Quarter Level

Authors: M. A. Spielmann, L. Schebek

Abstract:

In order to reach the long-term national climate goals of the German government for the building sector, substantial energetic measures have to be executed. Historically, those measures were primarily energy efficiency measures at the buildings' shells. Advanced technologies for the on-site generation of heat (or other types of energy) are often not feasible at the small spatial scale of a single building. The present approach therefore uses the spatially larger dimension of a quarter. The main focus of this paper is the long-term economic-ecological assessment of available decentralized heat-generating technologies (CHP power plants and electrical heat pumps) at the quarter level for unrefurbished German residential buildings. Three distinct elements are described methodologically: i) the quarter approach, ii) the economic assessment, iii) the ecological assessment. The quarter approach is used to enable synergies and scaling effects beyond a single building. For the present study, generic quarters are used that are differentiated according to the parameters most significant for their heat demand; the core differentiation is the construction period of the buildings. The economic assessment, the second crucial element, is structured as follows: full costs are quantified for each technology combination and quarter. The investment costs are analyzed on an annual basis and are modeled with the acquisition of debt, assuming annuity loans. Consequently, for each generic quarter, an optimal technology combination for decentralized heat generation is provided for each year within the temporal boundaries (2016-2050). The ecological assessment elaborates a life cycle assessment for each technology combination and each quarter; the impact category measured is GWP 100. The technology combinations for heat production can therefore be compared against each other in terms of their long-term climatic impacts. The core results of the approach cover an economic and an ecological dimension: with annual resolution, the investment and running costs of the different energetic technology combinations are quantified; an optimal technology combination for local heat supply and/or energetic refurbishment is provided for each quarter; and, coherently with the economic assessment, the climatic impacts of the technology combinations are quantified and compared against each other.
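
The annuity-loan element of the economic assessment reduces to the standard annuity formula; the investment and O&M figures below are hypothetical, for illustration only.

```python
def annuity_payment(principal, rate, years):
    """Constant annual payment of an annuity loan: P*r*q^n / (q^n - 1)."""
    q = 1 + rate
    return principal * rate * q**years / (q**years - 1)

# Hypothetical CHP plant for one generic quarter.
invest, rate, lifetime = 250_000.0, 0.03, 20   # EUR, interest rate, years
running = 6_000.0                               # annual O&M cost [EUR/a]
annual_cost = annuity_payment(invest, rate, lifetime) + running
print(f"full annual cost: {annual_cost:,.0f} EUR/a")
```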

Keywords: Building sector, heat, LCA, quarter level, systemic approach.

5 Performance Assessment of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser during ‘Hot Standby’ Operation

Authors: M. J. Baum, B. Gibbes, A. Grinham, S. Albert, D. Gale, P. Fisher

Abstract:

Alongside the rapid expansion of seawater reverse osmosis technologies there is a concurrent increase in the production of hypersaline brine by-products. To minimize environmental impact, these by-products are commonly disposed of into open-coastal environments via submerged diffuser systems as inclined dense jet outfalls. Despite the widespread implementation of this process, diffuser designs are typically based on small-scale laboratory experiments under idealized quiescent conditions, and studies of diffuser performance in the field are limited. A set of experiments was conducted to assess the near-field characteristics of brine disposal at the Gold Coast Desalination Plant offshore multiport diffuser. The aim of the field experiments was to determine the trajectory and dilution characteristics of the plume under various discharge configurations, with production ranging from 66% to 100% of the plant's operative capacity. The field monitoring system employed an unprecedented static array of temperature and electrical conductivity sensors in a three-dimensional grid surrounding a single diffuser port. Complementing these measurements, acoustic Doppler current profilers were also deployed to record current variability over the depth of the water column as well as wave characteristics. The recorded data suggested the open-coastal environment was highly active over the experimental duration, with ambient velocities ranging from 0.0 to 0.5 m·s⁻¹ and considerable variability over the depth of the water column. Variations in background electrical conductivity corresponding to salinity fluctuations of ±1.7 g·kg⁻¹ were also observed. Increases in salinity were detected during plant operation and appeared most pronounced 10-30 m from the diffuser, consistent with trajectory predictions described by the existing literature. Plume trajectories and the respective dilutions extrapolated from the salinity data are compared with empirical scaling arguments, and the discharge properties were found to correlate adequately with modelling projections. The temporal and spatial variation of background processes and their influence on discharge outcomes are discussed with a view to incorporating the influence of waves and ambient currents in the design of future brine outfalls.
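
The empirical scaling arguments referred to above can be sketched for an inclined dense jet; the coefficients (around 2.2 for terminal rise and 1.6 for impact-point dilution, typical of laboratory studies of roughly 60-degree ports) and the discharge values below are illustrative, not the GCDP design figures.

```python
import numpy as np

g = 9.81
d, U0 = 0.2, 3.0                  # port diameter [m], discharge velocity [m/s]
rho_a, rho_b = 1024.0, 1052.0     # ambient and brine densities [kg/m^3]
g_prime = g * (rho_b - rho_a) / rho_a      # reduced gravity
Fr = U0 / np.sqrt(g_prime * d)    # densimetric Froude number

z_terminal = 2.2 * d * Fr         # terminal rise height (illustrative coefficient)
S_impact = 1.6 * Fr               # dilution at the seabed return point
print(f"Fr = {Fr:.1f}, rise ~ {z_terminal:.1f} m, impact dilution ~ {S_impact:.0f}")
```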

Keywords: Brine disposal, desalination, field study, inclined dense jets, negatively buoyant discharge.

4 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method

Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh

Abstract:

In the process of recovering oil from weak sandstone formations, the strength of the sandstone around the wellbore is weakened by the increase of effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, a reliable study tool is needed to understand the mechanism of sanding. One method of studying sand production is the widely recognized Discrete Element Method (DEM) code Particle Flow Code (PFC3D), which represents sand as individual granular elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D together with a simple Darcy flow to understand the sand production behavior of a weak sandstone. An isotropic triaxial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond models of PFC3D, using a solid cylindrical model of 10 m height and 10 m diameter. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and the sample material properties obtained from the triaxial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failures using the same model geometry. The fluid flow network comprised groups of four particles connected by tetrahedral flow pipes with a central pore or flow domain. The parametric studies covered the effects of confining stress and fluid pressure, and the flow rate-permeability relationship was validated to verify Darcy's law. The effect of model size on sanding was also investigated using a model of 4 m height and 2 m diameter. The parallel bond model successfully calibrated the sample's strength of 4.4 MPa, showing a sharp peak strength before strain softening, similar to the behavior of real cemented sandstones. The number of broken bonds appears to grow exponentially with confining stress for the bigger model, but follows a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. In the parametric studies, the flow rate showed a linear relationship with permeability at constant pressure head, and the higher the fluid flow pressure, the higher the number of broken bonds and hence the sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
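
The flow rate-permeability check described here follows Darcy's law directly; the sample values below are hypothetical.

```python
def darcy_flow_rate(k, area, dP, mu, length):
    """Darcy's law: Q = k * A * dP / (mu * L) for laminar flow in porous media."""
    return k * area * dP / (mu * length)

k = 1e-12      # permeability [m^2] (~1 Darcy)
A = 0.8        # cross-sectional area [m^2]
dP = 2.0e5     # pressure drop [Pa]
mu = 1.0e-3    # fluid viscosity [Pa*s]
L = 4.0        # sample length [m]
print(f"Q = {darcy_flow_rate(k, A, dP, mu, L):.2e} m^3/s")  # linear in dP at fixed k
```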

Keywords: Discrete Element Method, fluid flow, parametric study, sand production/bond failure.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1792
3 Construction Port Requirements for Floating Offshore Wind Turbines

Authors: Alan Crowle, Philipp Thies

Abstract:

As the floating offshore wind turbine industry continues to develop and grow, the capabilities of established port facilities need to be assessed as to their ability to support the expanding construction and installation requirements. This paper assesses current infrastructure requirements and the projected changes to port facilities that may be required to support the floating offshore wind industry. Understanding the infrastructure needs of the floating offshore renewables industry will help to identify the port-related requirements. Floating offshore wind turbines can be installed further out to sea and in deeper waters than traditional fixed offshore wind arrays, meaning they can take advantage of stronger winds. Separate ports are required for substructure construction, fit-out of the turbines, moorings, subsea cables and maintenance. Large areas are required for the laydown of mooring equipment, inter-array cables, turbine blades and nacelles. The capabilities of established port facilities to support floating wind farms are assessed by evaluating the size of the substructures, the height of the wind turbine relative to the cranes available for fitting the blades, the distance to the offshore site and the characteristics of the offshore installation vessels. The paper discusses the advantages and disadvantages of using large land-based cranes, inshore floating crane vessels or offshore crane vessels at the fit-out port for the installation of the turbine. Water depth requirements for the import of materials and the export of completed structures are also considered. There are additional costs associated with any emerging technology; however, part of the popularity of floating offshore wind turbines stems from the cost savings relative to permanent structures such as fixed wind turbines. Floating offshore wind turbine developers can benefit from lighter, more cost-effective equipment that can be assembled in port and towed to site, rather than relying on large, expensive installation vessels to transport and erect fixed-bottom turbines. The ability to assemble floating offshore wind turbine equipment onshore minimises highly weather-dependent operations such as offshore heavy lifts and assembly, saving time and costs and reducing safety risks for offshore workers. Maintenance of barges and semi-submersibles may also take place in safer onshore conditions. Offshore renewables, such as floating wind, can draw on the wealth of experience accumulated in the offshore oil and gas sector, while oil and gas operators can deploy this experience as they enter the renewables space. The floating offshore wind industry is in the early stages of development, and port facilities are required for substructure fabrication, turbine manufacture, turbine construction and maintenance support. The paper discusses the potential floating wind substructures, as this provides a snapshot of the requirements at the present time, together with the technological developments required for commercial deployment. Scaling effects of demonstration-scale projects are addressed; however, the primary focus is on commercial-scale (30+ unit) floating wind energy farms.

Keywords: Floating offshore wind turbine, port logistics, installation, construction.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 504
2 Innovation Ecosystems in the Construction Industry

Authors: Cansu Gülser, Tuğce Ercan

Abstract:

The construction sector is a key driver of the global economy, contributing significantly to growth and employment through a diverse array of sub-sectors. However, its project-based nature often hampers long-term collaboration and removes incentives that extend beyond individual projects, limitations frequently discussed in the scientific literature as obstacles to innovation and industry-wide change. Traditional practices and unwritten rules further hinder the adoption of new processes. The disadvantages of this project-based structure include limited continuity, fragmented collaborations and a focus on short-term goals, which together hinder the development of sustained partnerships, inhibit the sharing of knowledge and best practices, and reduce incentives for investing in innovative processes and technologies. The temporal complexities inherent in project-based sectors such as construction also make it difficult to address societal challenges through collaborative efforts, and traditional management approaches are inadequate for scaling up innovations and adapting to significant change. Systemic transformation of the construction sector therefore requires more collaborative relationships and activities beyond traditional supply chains. This study examines the concept of an innovation ecosystem within the construction sector, drawing on a range of research findings. It explores key questions about the components that enhance innovation capacity, the relationship between a robust innovation ecosystem and that capacity, and the reasons why innovation is less prevalent and less widely implemented in this sector than in others. It also examines the main factors hindering innovation within companies and identifies strategies to improve innovation efforts, particularly in developing countries. The innovation ecosystem in the construction sector generates various outputs through interactions between business resources and external components, including innovative value creation, sustainable practices, robust collaborations, knowledge sharing, competitiveness and advanced project management, all of which contribute significantly to company market performance and competitive advantage. This article offers insights and strategic recommendations for industry professionals, policymakers and researchers interested in developing and sustaining innovation ecosystems in the construction sector. Future research should focus on broader samples for generalization, comparative sector analyses and application-focused studies addressing real industry challenges, as well as the long-term impacts of innovation ecosystems, the integration of advanced technologies such as AI and machine learning into project management, and the development of future application strategies and policies.

Keywords: Construction industry, innovation ecosystem, innovation ecosystem components, project management.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 93
1 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study evaluating the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With a sample size of n=134, the test parameters indicated good model fit but low Test Information Functions and more extreme equating parameters than expected. The researcher therefore also applied equipercentile equating and linear equating to the raw scores and compared the equated form parameters and effect sizes across methods. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-and-slope adjustment from the anchor items, which was used to scale scores on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-and-slope adjustment, which was applied to all core items on the new form; sketches of the mean-sigma and linear transformations are given below. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalanced design to distribute both forms across the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences in posttest scores on Form Q and Form R with a full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The between-group analysis was then repeated with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%; mean-sigma equating with a small sample may have produced inaccurate equating parameters. Equipercentile equating aligned the test means and standard deviations, but the resulting skewness and kurtosis worsened compared with the raw score parameters, and form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using the linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to the pretest score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to posttest. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, applying a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments in future studies.
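
As a concrete illustration of the two linear transformations named above, the following Python sketch computes mean-sigma scaling constants from anchor-item difficulties and applies linear equating to a raw score. The anchor difficulties and raw-score moments are invented for illustration and are not estimates from the study.

```python
# A minimal sketch of mean-sigma scaling and linear equating. All numeric
# values are hypothetical, not the study's estimates.
import numpy as np

def mean_sigma_transform(b_ref, b_new):
    """Mean-sigma scaling constants from anchor-item difficulties.

    b_ref, b_new: difficulty estimates of the same anchor items on the
    reference form (Form Q) and the new form (Form R).
    Returns slope A and intercept B such that theta_Q = A * theta_R + B.
    """
    A = np.std(b_ref, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_ref) - A * np.mean(b_new)
    return A, B

def linear_equate(x, mean_new, sd_new, mean_ref, sd_ref):
    """Linear equating of a raw score x from the new form to the
    reference-form scale (matches means and standard deviations)."""
    return mean_ref + (sd_ref / sd_new) * (x - mean_new)

# Hypothetical anchor-item difficulties for the two forms.
b_q = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])   # Form Q (reference)
b_r = np.array([-1.0, -0.3, 0.3, 1.0, 1.9])   # Form R (new)
A, B = mean_sigma_transform(b_q, b_r)
print(f"mean-sigma: theta_Q = {A:.3f} * theta_R + {B:.3f}")

# Hypothetical raw-score moments for the two forms.
print(linear_equate(x=24, mean_new=22.0, sd_new=5.5, mean_ref=23.1, sd_ref=5.2))
```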

Keywords: Effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1015