Search results for: data interpolating empirical orthogonal function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29461

26431 Multiclass Support Vector Machines with Simultaneous Multi-Factors Optimization for Corporate Credit Ratings

Authors: Hyunchul Ahn, William X. S. Wong

Abstract:

Corporate credit rating prediction is one of the most important topics studied by researchers over the last decade. During this period, researchers have pushed to enhance the accuracy of corporate credit rating prediction models by applying several data-driven tools, including statistical and artificial intelligence methods. Among them, the multiclass support vector machine (MSVM) has been widely applied due to its good predictive performance. However, the reliance on heuristics to set the MSVM architectural variables, for example, the parameters of the kernel function and the appropriate feature and instance subsets, has become the main target of criticism of MSVM. This study presents a hybrid MSVM model that is intended to optimize all of these design parameters, namely feature selection, instance selection, and the kernel parameters. Our model adopts a genetic algorithm (GA) to simultaneously optimize these multiple heterogeneous design factors of the MSVM.
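The abstract leaves the optimization loop unspecified; purely as an illustration, the sketch below (Python/NumPy, all data and parameter choices invented) encodes a feature mask and one kernel-style parameter in a single GA chromosome and evolves them against a stand-in classifier fitness, since a full MSVM trainer is beyond the scope of a short example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: only the first 3 of 8 features are informative.
X = rng.normal(size=(120, 8))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
X[:, 3:] += rng.normal(size=(120, 5))  # pure noise features

def fitness(chrom):
    """Chromosome = 8 feature-selection bits + 1 gene for a kernel-style parameter."""
    mask = chrom[:8].astype(bool)
    # Decoded kernel-width gene, kept for illustration; this toy
    # nearest-centroid classifier is scale-invariant, so only the
    # feature mask actually changes the fitness here.
    gamma = 0.1 + 4.9 * chrom[8]
    if not mask.any():
        return 0.0
    Xs = X[:, mask] * gamma
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()             # training accuracy as fitness

def evolve(pop_size=30, gens=25):
    # Population: binary feature genes plus one real-valued kernel gene.
    pop = np.hstack([rng.integers(0, 2, (pop_size, 8)).astype(float),
                     rng.random((pop_size, 1))])
    for _ in range(gens):
        fit = np.array([fitness(c) for c in pop])
        elite = pop[fit.argsort()[::-1][:pop_size // 2]]   # keep best half
        kids = elite.copy()
        flips = rng.random(kids[:, :8].shape) < 0.1        # bit-flip mutation
        kids[:, :8] = np.where(flips, 1 - kids[:, :8], kids[:, :8])
        kids[:, 8] = np.clip(kids[:, 8] + rng.normal(0, 0.1, len(kids)), 0, 1)
        pop = np.vstack([elite, kids])
    fit = np.array([fitness(c) for c in pop])
    return pop[fit.argmax()], fit.max()

best, acc = evolve()
print("selected features:", np.flatnonzero(best[:8]))
```

In the real model the fitness would be MSVM validation accuracy and the chromosome would also carry instance-selection bits; the GA loop itself has the same shape.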

Keywords: corporate credit rating prediction, Feature selection, genetic algorithms, instance selection, multiclass support vector machines

Procedia PDF Downloads 282
26430 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
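The additive aggregation the abstract relies on can be sketched in a few lines. The following Python/NumPy fragment shows only the bookkeeping part, carrying (count, sum, sum of squares) up a 2x2 pyramid so each doubling of the window costs a single pass over the previous level; the plane-fit residuals of the actual method are omitted, and the DEM is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1 m DEM: a planar slope plus fine-scale noise.
n = 64
yy, xx = np.mgrid[0:n, 0:n].astype(float)
dem = 0.05 * xx + rng.normal(scale=0.2, size=(n, n))

def block2x2(a):
    """Sum non-overlapping 2x2 blocks -> array of half the size per axis."""
    return a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]

# Additive aggregation: carry (count, sum, sum of squares) up the pyramid.
cnt, s, ss = np.ones_like(dem), dem.copy(), dem ** 2
levels = []
while cnt.shape[0] >= 2:
    # One pass per doubling of the window size in each direction.
    cnt, s, ss = block2x2(cnt), block2x2(s), block2x2(ss)
    var = ss / cnt - (s / cnt) ** 2      # per-window variance from the sums
    levels.append((int(cnt.flat[0]), float(var.mean())))

for npts, v in levels:
    print(f"window of {npts:5d} cells: mean variance {v:.4f}")
```

In the full method the same sums feed the regression parameters of the local plane fit, and the level reported per point is the one minimizing the residual variance rather than the raw variance shown here.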

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 118
26429 Study of the Energy Levels in the Structure of the Laser Diode GaInP

Authors: Abdelali Laid, Abid Hamza, Zeroukhi Houari, Sayah Naimi

Abstract:

This work studies the energy levels and optimizes the intrinsic parameters (number of wells and their widths, width of the potential barrier, refractive index, etc.) and extrinsic parameters (temperature, pressure) of a laser diode based on the GaInP structure. The calculation methods used are the empirical pseudopotential method, to determine the electronic band structures, and a graphical method for the optimization. The results obtained are in agreement with experiment and theory.

Keywords: semiconductor, GaInP/AlGaInP, pseudopotential, energy, alloys

Procedia PDF Downloads 473
26428 Investigating Salience Theory’s Implications for Real-Life Decision Making: An Experimental Test for Whether the Allais Paradox Exists under Subjective Uncertainty

Authors: Christoph Ostermair

Abstract:

We deal with the effect of correlation between prospects on human decision making under uncertainty, as proposed by the comparatively new and promising model of “salience theory of choice under risk”. In this regard, we show that the theory entails the prediction that the inconsistency of choices, known as the Allais paradox, should not be an issue in the context of “real-life decision making”, which typically corresponds to situations of subjective uncertainty. The Allais paradox, probably the best-known anomaly regarding expected utility theory, would then essentially have no practical relevance. If, however, empirical evidence contradicts this prediction, salience theory might suffer a serious setback. The model's explanations of variable human choice behavior are mostly the result of a particular mechanism that does not come into play under perfect correlation. Hence, if it turns out that correlation between prospects, as typically found in real-world applications, does not influence human decision making in the expected way, this might to a large extent cost the theory its explanatory power. The empirical literature on the Allais paradox under subjective uncertainty is so far rather sparse. Beyond that, its results are hard to maintain as an argument, as the commonly employed presentation formats have presumably generated so-called event-splitting effects, thereby distorting subjects’ choice behavior. In our own incentivized experimental study, we control for such effects by means of two different choice settings. We find significant event-splitting effects in both settings, supporting the suspicion that the existing empirical results on Allais paradoxes under subjective uncertainty may not be able to answer the question at hand.
Nevertheless, we find that the basic tendency behind the Allais paradox, a particular switch of the preference relation due to a modified common consequence shared by two prospects, still exists under both an event-splitting and a coalesced presentation format. Yet, the modal choice pattern is in line with the prediction of salience theory. As a consequence, the effect of correlation, as proposed by the model, might, if anything, only weaken the systematic choice pattern behind the Allais paradox.

Keywords: Allais paradox, common consequence effect, models of decision making under risk and uncertainty, salience theory

Procedia PDF Downloads 180
26427 Data Mining Spatial: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of communication and information technologies. This information is often handled by geographic information systems (GIS) and stored in spatial databases. Classical data mining has revealed a weakness in extracting knowledge from these enormous amounts of data due to the particularity of spatial entities, which are characterized by their interdependence (the first law of geography). This gave rise to spatial data mining, a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data. Among the methods of this process, we distinguish monothematic and thematic methods. Geo-clustering, one of the main tasks of spatial data mining, belongs to the monothematic methods. It groups similar geospatial entities into the same class and assigns more dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geospatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which applies algorithms designed for the direct treatment of spatial data, and an approach based on pre-processing of the spatial data, which applies classic clustering algorithms to pre-processed data (with spatial relationships integrated).
Since this pre-processing approach is quite complex in different cases, the search for approximate solutions involves the use of approximation algorithms. Among these, we are interested in dedicated approaches (partitioning and density-based clustering methods) and the bees algorithm (a biomimetic approach). Our study proposes a significant contribution to this problem by using different algorithms to automatically detect the geospatial neighborhood in order to implement geo-clustering by pre-processing, and by applying the bees algorithm to this problem for the first time in the geospatial field.
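As a minimal illustration of the pre-processing approach described above, the sketch below applies a classic clustering algorithm (k-means, implemented by hand in Python/NumPy) to projected point coordinates; the synthetic data and the choice of k-means rather than the bees algorithm or a density-based method are simplifications for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic groups of geospatial points (e.g., projected x/y coordinates).
pts = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
                 rng.normal([5, 5], 0.5, (50, 2))])

def kmeans(x, k, iters=20):
    """Classic Lloyd iterations: assign to nearest center, then recenter."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(pts, 2)
print("cluster sizes:", np.bincount(labels))
```

In the pre-processing approach proper, the feature vectors would also integrate the detected spatial neighborhood relationships rather than raw coordinates alone.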

Keywords: mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 366
26426 Innovation Management in State-Owned-Enterprises in the Digital Transformation: An Empirical Case Study of Swiss Post

Authors: Jiayun Shen, Lorenz Wyss, Thierry Golliard, Matthias Finger

Abstract:

Innovation is widely recognized as the key for private enterprises to win market competition. State-owned enterprises likewise need to be innovative to compete in the market after privatization. However, there is a lack of research on how state-owned enterprises manage innovation to create new products and services. Swiss Post, a Swiss state-owned enterprise, has established a department to transform the corporate culture and foster innovation in order to achieve digital transformation. This paper describes the innovation management process at Swiss Post, analyzes the impact of its instruments and organizational structure, and explores the barriers to innovation. The study used qualitative methods based on a review of the literature on innovation management and semi-structured interviews. Established for over five years, Swiss Post’s innovation management department runs a software-assisted, modularized platform with systematic instruments that support internal employees through the different innovation processes. It guides innovators from idea creation to piloting in markets and provides a separate financing source, knowledge inputs and coaching, as well as connections to external partners through the open innovation and venturing team. The platform is also tailored to the various operational business units within the corporation. The separate financing instruments enable the creation and further development of new ideas, while the coaching services contribute greatly to the transformation of teams’ innovation culture by providing new knowledge, thinking methods, and use cases for inspiration. The department also facilitates organizational learning to help the whole corporation with the digital transformation. However, it is confronted with a twofold challenge.
Internally, disruptive projects often struggle to overcome the obstacles of long-established operational processes in the traditional business units; externally, the expectations of the public and restrictions from the federal government have become high hurdles for the company to stay and compete on the innovation track.

Keywords: empirical case study, innovation management, state-owned-enterprise, Swiss Post

Procedia PDF Downloads 107
26425 Development of a General Purpose Computer Programme Based on Differential Evolution Algorithm: An Application towards Predicting Elastic Properties of Pavement

Authors: Sai Sankalp Vemavarapu

Abstract:

This paper discusses the application of machine learning in transportation engineering for predicting the engineering properties of pavement more accurately and efficiently. Predicting the elastic properties aids in assessing current road conditions and taking appropriate measures to avoid inconvenience to commuters. This improves the longevity and sustainability of the pavement layer while reducing its overall life-cycle cost. As an example, we have implemented differential evolution (DE) in the back-calculation of the elastic moduli of multi-layered pavement. The proposed DE global optimization back-calculation approach is integrated with a forward response model. This approach treats back-calculation as a global optimization problem in which the cost function to be minimized is the root mean square error between measured and computed deflections. The optimal solution, in this case the set of elastic moduli, is searched for in the solution space by the DE algorithm. The best DE parameter combinations and the optimum values are identified so that the results are reproducible whenever the need arises. The algorithm’s performance in varied scenarios was analyzed by changing the input parameters. The predictions were well within the permissible error, establishing the effectiveness of DE.
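The back-calculation loop described above can be sketched as follows. This Python/NumPy fragment implements a basic DE/rand/1/bin scheme minimizing the RMSE between "measured" and computed deflections; the forward response model, sensor offsets, and layer moduli are invented toy stand-ins, not the paper's pavement model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: deflections decay with sensor offset, scaled by moduli.
r = np.array([0.0, 0.3, 0.6, 0.9, 1.2])          # sensor offsets (m), assumed
def deflections(E):                               # E: moduli of two layers (MPa)
    return 1000.0 / (E[0] + E[1] * r)

E_true = np.array([200.0, 400.0])
measured = deflections(E_true)                    # synthetic "FWD" measurements

def rmse(E):
    return np.sqrt(np.mean((deflections(E) - measured) ** 2))

def de(bounds, pop_size=20, gens=100, F=0.7, CR=0.9):
    """DE/rand/1/bin with greedy selection."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, len(lo))) * (hi - lo)
    cost = np.array([rmse(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)    # differential mutation
            cross = rng.random(len(lo)) < CR             # binomial crossover
            cross[rng.integers(len(lo))] = True          # force one gene over
            trial = np.where(cross, mutant, pop[i])
            tc = rmse(trial)
            if tc <= cost[i]:                            # keep the better one
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

best, err = de(np.array([[50.0, 1000.0], [50.0, 1000.0]]))
print("back-calculated moduli:", best.round(1), "RMSE:", err)
```

Swapping in a real layered-elastic forward model and measured falling weight deflectometer deflections turns this sketch into the approach the abstract describes.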

Keywords: cost function, differential evolution, falling weight deflectometer, genetic algorithm, global optimization, metaheuristic algorithm, multilayered pavement, pavement condition assessment, pavement layer moduli back calculation

Procedia PDF Downloads 152
26424 Urban Logistics Dynamics: A User-Centric Approach to Traffic Modelling and Kinetic Parameter Analysis

Authors: Emilienne Lardy, Eric Ballot, Mariam Lafkihi

Abstract:

Efficient urban logistics requires a comprehensive understanding of traffic dynamics, particularly the kinetic parameters influencing energy consumption and trip duration estimations. While real-time traffic information is increasingly accessible, current high-precision forecasting services embedded in route planning often function as opaque 'black boxes' for users. These services, typically relying on AI-processed counting data, fall short in accommodating the open design parameters essential for management studies, notably within supply chain management. This work revisits the modelling of traffic conditions in the context of city logistics, emphasizing its significance from the user’s point of view, with two focuses. Firstly, the focus is not on the vehicle flow but on the vehicles themselves and the impact of traffic conditions on their driving behaviour. This means widening the range of studied indicators beyond vehicle speed to describe extensively the kinetic and dynamic aspects of driving behaviour. To achieve this, we leverage the Art.Kinema parameters, which are designed to characterize driving cycles. Secondly, this study examines how the driving context (i.e., factors exogenous to the traffic flow) determines this driving behaviour. Specifically, we explore how accurately the kinetic behaviour of a vehicle can be predicted from a limited set of exogenous factors, such as time, day, road type, orientation, slope, and weather conditions. To answer this question, statistical analysis was conducted on real-world driving data, which includes high-frequency measurements of vehicle speed. A factor analysis and a generalized linear model were established to link kinetic parameters with independent categorical contextual variables. The results include an assessment of the goodness of fit and the robustness of the models, as well as an overview of the models’ outputs.
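The link between categorical context and a kinetic parameter can be illustrated with an identity-link GLM, i.e., ordinary least squares on dummy-coded factors. Everything below (the factor levels, effect sizes, and noise) is invented for the sketch and does not reflect the paper's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented contextual factors and a kinetic response (mean speed, km/h).
road = rng.integers(0, 3, 500)       # 0=local, 1=arterial, 2=highway
peak = rng.integers(0, 2, 500)       # 0=off-peak, 1=peak hour
speed = (np.array([25.0, 45.0, 90.0])[road]
         - 8.0 * peak + rng.normal(0, 3, 500))

# Design matrix: intercept + dummy-coded factors (first level as baseline).
X = np.column_stack([np.ones(500),
                     road == 1, road == 2, peak == 1]).astype(float)
coef, *_ = np.linalg.lstsq(X, speed, rcond=None)
print("intercept, arterial, highway, peak effects:", coef.round(1))
```

With real driving data, the response would be an Art.Kinema-style parameter per trip segment and the categorical predictors would include day, orientation, slope class, and weather, but the estimation step has the same form.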

Keywords: factor analysis, generalised linear model, real world driving data, traffic congestion, urban logistics, vehicle kinematics

Procedia PDF Downloads 53
26423 Bank Competition: On the Relationship with Revenue Diversification and Funding Strategy from Selected ASEAN Countries

Authors: Oktofa Y. Sudrajad, Didier V. Caillie

Abstract:

The Association of Southeast Asian Nations (ASEAN) is moving forward to the next level of regional integration with the ASEAN Economic Community (AEC), which started in 2015, eight years after the 2007 declaration calling for its creation. Under this commitment, financial integration in the region is one of the main agenda items to be achieved by 2025. The commitment to financial integration, including banking integration, will therefore bring a new landscape to competition and business models in the region. This study investigates the effect of competition on bank business models using a sample of 324 banks from seven ASEAN member countries (Cambodia, Indonesia, Malaysia, the Philippines, Singapore, Thailand, and Vietnam). We use a market power approach and the Boone indicator as competition measures, while income diversification and bank funding strategies are employed to represent the bank business model. Moreover, we also evaluate bank business models by grouping the banks based on their main banking characteristics. We use unbalanced bank-specific annual panel data over the period 2003-2015. Our empirical analysis shows that the banking industries in ASEAN countries adapt their business models by increasing the proportion of non-interest income as the level of competition in the sector increases.

Keywords: bank business model, banking competition, Boone indicator, market power

Procedia PDF Downloads 213
26422 Examination of Predictive Factors of Depression among Asian American Adolescents: A Narrative Review

Authors: Annisa Siu, Ping Zou

Abstract:

Background: Existing literature addressing Asian American children and adolescents reveals that this population experiences rates of depression comparable to those of European American and other ethnic minority youths. Within the last decade, increased attention has been given to Asian American adolescent mental health. Methods: 44 articles were extracted from PubMed, PsycINFO, EMBASE, and ProQuest CINAHL. Data were subjected to thematic analyses and categorized into factors at the individual, familial, and community levels. Results: Of all the individual factors, age and gender were the most supported in their relationship with depressive symptoms. Likewise, living situation, parent-child relations, peer relations, and broader environmental factors were strongly evidenced. The remaining psychosocial factors faced contrary evidence or were insufficiently addressed in the empirical literature. Discussion: The psychosocial factors identified within this study offer a starting point for future research to examine what factors should be included in formal or informal screening/consultations. Clinicians should aim to understand the cultural influences specific to Asian American adolescents, particularly the central role that family relations may have in their depressive symptoms. Conclusion: Low awareness of culturally linked expressions of psychological distress can lead to misdiagnosis or under-diagnosis of depression in Asian American youth. Further evidence is needed to clarify the relationship of psychosocial factors to Asian American adolescent depressive symptoms.

Keywords: adolescent, Asian American, depression, psychosocial factors

Procedia PDF Downloads 107
26421 Analysis and Prediction of Netflix Viewing History Using Netflixlatte as an Enriched Real Data Pool

Authors: Amir Mabhout, Toktam Ghafarian, Amirhossein Farzin, Zahra Makki, Sajjad Alizadeh, Amirhossein Ghavi

Abstract:

The high number of Netflix subscribers makes it attractive for data scientists to extract valuable knowledge from analyses of viewers' behaviour. This paper presents a set of statistical insights into viewers' viewing history. A deep learning model is then used to predict users' future watching behaviour based on their previous watching history within the Netflixlatte data pool. Netflixlatte is an aggregated and anonymized data pool of 320 Netflix viewers comprising 250,000 data points recorded between 2008 and 2022. We observe insightful correlations between the distribution of viewing time and the COVID-19 pandemic outbreak. The presented deep learning model predicts future movie and TV series viewing habits with an average loss of 0.175.
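The abstract does not specify the network architecture; as a minimal sketch of the LSTM machinery such a model builds on, the following NumPy fragment runs a single untrained LSTM cell forward over a toy viewing-history sequence. The sizes, initialization, and input data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM cell, forward pass only (no training loop shown)."""
    def __init__(self, n_in, n_hidden):
        k = 1.0 / np.sqrt(n_hidden)
        # One stacked weight matrix for the four gates.
        self.W = rng.uniform(-k, k, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input/forget/output gates
        c = f * c + i * np.tanh(g)                     # cell state update
        h = o * np.tanh(c)                             # hidden state
        return h, c

cell = LSTMCell(n_in=1, n_hidden=8)
h = c = np.zeros(8)
viewing = rng.random(30)            # toy normalized daily viewing minutes
for x in viewing:
    h, c = cell.step(np.array([x]), h, c)
print("final hidden state:", h.round(3))
```

A production model would stack such cells in a framework like PyTorch or TensorFlow, train them on the Netflixlatte sequences, and read the prediction from a head on the final hidden state.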

Keywords: data analysis, deep learning, LSTM neural network, netflix

Procedia PDF Downloads 223
26420 Innovation and Economic Growth Model of East Asian Countries: The Adaptability of the Model in Ethiopia

Authors: Khalid Yousuf Ahmed

Abstract:

During their growth period, East Asian countries achieved impressive economic growth for decades. They transformed from agricultural economies toward industrialization, driving dynamic structural transformation. These achievements were the result of government-led development policies that implemented effective innovation policy to boost the technological capability of local firms. Recently, most Sub-Saharan African countries have been showing sustainable growth. Exceptionally, Ethiopia has been recording double-digit growth for a decade and has claimed to follow in the footsteps of the East Asian development model. This study examines whether Ethiopia can replicate the innovation and economic growth model of East Asia, using Japan, Taiwan, South Korea, and China as cases to illustrate that model of growth. The research is based on empirical data gathering and extends the theory of national innovation systems and economic growth theory. The methodology is based on the Knowledge Assessment Methodology (KAM) and employs cross-country regression analysis. The results show that there is a significant relationship between innovation indicators and economic growth in East Asian countries, while no such relationship yet exists for Ethiopia, despite its implementing similar policies and achieving a similar growth trend. Therefore, Ethiopia needs to introduce inclusive policies that give priority to improving human capital and invest in a knowledge-based economy in order to replicate the East Asian model.

Keywords: economic growth, FDI, endogenous growth theory, East Asia model

Procedia PDF Downloads 257
26419 Analysis of User Data Usage Trends on Cellular and Wi-Fi Networks

Authors: Jayesh M. Patel, Bharat P. Modi

Abstract:

The availability of Wi-Fi on mobile devices has demonstrated that the total data demand from users is far higher than previously articulated by measurements based solely on a cellular-centric view of smartphone usage. The ratio of Wi-Fi to cellular traffic varies significantly between countries. This paper presents a comparison between users' cellular data usage and Wi-Fi data usage. This analysis helps operators to understand the growing importance and application of yield management strategies designed to squeeze maximum returns from their investments in the networks and devices that enable the mobile data ecosystem. The transition from unlimited data plans toward tiered pricing and, in the future, toward more value-centric pricing offers significant revenue upside potential for mobile operators; but without complete insight into all aspects of smartphone customer behavior, operators will be unlikely to capture the maximum return from this billion-dollar market opportunity.

Keywords: cellular, Wi-Fi, mobile, smart phone

Procedia PDF Downloads 351
26418 Understanding Beginning Writers' Narrative Writing with a Multidimensional Assessment Approach

Authors: Huijing Wen, Daibao Guo

Abstract:

Writing is thought to be the most complex facet of the language arts. Assessing writing is difficult and subjective, and few scientifically validated assessments exist. Research has proposed evaluating writing using a multidimensional approach, including both qualitative and quantitative measures of handwriting, spelling, and prose. Given that narrative writing has historically been a staple of literacy instruction in the primary grades and is one of the three major genres the Common Core State Standards (CCSS) require students to acquire starting in kindergarten, it is essential for teachers to understand how to measure beginning writers' writing development and the sources of writing difficulties through narrative writing. Guided by theoretical models of early written expression and using empirical data, this study examines ways teachers can enact a comprehensive approach to understanding beginning writers' narrative writing through three writing rubrics developed for a curriculum-based measurement (CBM). The goal is to help classroom teachers structure a framework for assessing early writing in primary classrooms. Participants in this study included 380 first-grade students from 50 classrooms in 13 schools in three school districts in a Mid-Atlantic state. Three writing tests were used to assess first graders' writing skills in relation to both transcription (i.e., handwriting fluency and spelling tests) and translation (i.e., a narrative prompt). First graders were asked to respond to a narrative prompt in 20 minutes. Grounded in theoretical models of early written expression and empirical evidence on key contributors to early writing, all written responses to the narrative prompt were coded in three ways for different dimensions of writing: length, quality, and genre elements. To measure the quality of the narrative writing, a traditional holistic rating rubric was developed by the researchers based on the CCSS and the general traits of good writing.
Students' genre knowledge was measured by using a separate analytic rubric for narrative writing. Findings showed that first-graders had emerging and limited transcriptional and translational skills with a nascent knowledge of genre conventions. The findings of the study provided support for the Not-So-Simple View of Writing in that fluent written expression, measured by length and other important linguistic resources measured by the overall quality and genre knowledge rubrics, are fundamental in early writing development. Our study echoed previous research findings on children's narrative development. The study has practical classroom application as it informs writing instruction and assessment. It offered practical guidelines for classroom instruction by providing teachers with a better understanding of first graders' narrative writing skills and knowledge of genre conventions. Understanding students’ narrative writing provides teachers with more insights into specific strategies students might use during writing and their understanding of good narrative writing. Additionally, it is important for teachers to differentiate writing instruction given the individual differences shown by our multiple writing measures. Overall, the study shed light on beginning writers’ narrative writing, indicating the complexity of early writing development.

Keywords: writing assessment, early writing, beginning writers, transcriptional skills, translational skills, primary grades, simple view of writing, writing rubrics, curriculum-based measurement

Procedia PDF Downloads 59
26417 A Design for Supply Chain Model by Integrated Evaluation of Design Value and Supply Chain Cost

Authors: Yuan-Jye Tseng, Jia-Shu Li

Abstract:

To design a product for a given product requirement and design objective, there can be alternative ways to propose the detailed design specifications of the product. In the design modeling stage, alternative design cases with detailed specifications can be modeled to fulfill the product requirement and design objective. Therefore, in the design evaluation stage, the alternative design cases must be evaluated to decide the final design. The purpose of this research is to develop a product evaluation model that evaluates the alternative design cases by jointly considering the criteria of functional design, Kansei design, and design for supply chain. The criteria in the functional design group include primary function, expansion function, improved function, and new function. The criteria in the Kansei group include geometric shape, dimension, surface finish, and layout. The criteria in the design for supply chain group include material, manufacturing process, assembly, and supply chain operation. From the point of view of value and cost, the criteria in the functional design and Kansei design groups represent the design value of the product, while the criteria in the design for supply chain group represent the supply chain and manufacturing cost of the product. Both the design value and the supply chain cost must be evaluated to determine the final design. To evaluate the criteria in the three groups, a fuzzy analytic network process (FANP) method is presented to compute a weighted index by calculating the total relational values among the three groups. The technique for order preference by similarity to ideal solution (TOPSIS) is then used to compare and rank the alternative design cases according to the weighted index. The final design case can be determined from the resulting ranking.
For example, the design case with the top ranking can be selected as the final design case. Based on the criteria in the evaluation, the design objective can be achieved with a combined and weighted effect of the design value and manufacturing cost. An example product is demonstrated and illustrated in the presentation. It shows that the design evaluation model is useful for integrated evaluation of functional design, Kansei design, and design for supply chain to determine the best design case and achieve the design objective.
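The TOPSIS ranking step can be sketched compactly. The decision matrix, the weights (standing in for the FANP-derived index), and the assumption that all three criteria are benefit criteria are illustrative only.

```python
import numpy as np

# Decision matrix: 4 design cases x 3 criteria (all treated as benefit
# criteria here); numbers and weights are invented for illustration.
D = np.array([[7.0, 8.0, 6.0],
              [9.0, 6.0, 7.0],
              [6.0, 9.0, 8.0],
              [8.0, 7.0, 9.0]])
w = np.array([0.5, 0.3, 0.2])          # stand-in for FANP-derived weights

V = w * D / np.linalg.norm(D, axis=0)  # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal solution
closeness = d_neg / (d_pos + d_neg)         # relative closeness in [0, 1]
ranking = closeness.argsort()[::-1]         # best design case first
print("closeness:", closeness.round(3), "ranking:", ranking)
```

For a cost criterion such as supply chain cost, the ideal solution would take the column minimum and the anti-ideal the maximum, reversing the treatment shown for benefit criteria.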

Keywords: design for supply chain, design evaluation, functional design, Kansei design, fuzzy analytic network process, technique for order preference by similarity to ideal solution

Procedia PDF Downloads 309
26416 Kinesio Taping in Treatment Patients with Intermittent Claudication

Authors: Izabela Zielinska

Abstract:

Kinesio Taping is a physiotherapy method that supports rehabilitation and modulates some physiological processes. It is commonly used in sports medicine and orthopedics. This sensory method influences muscle function and pain sensation, stimulates the lymphatic system, and improves microcirculation. The aim of this study was to assess the effect of Kinesio Taping in patients undergoing treatment for peripheral artery disease (PAD). The study group comprised 60 patients (stage II B on Fontaine's scale), divided into two groups of 30, in which a 12-week treadmill training program was administered. In the second group, Kinesio Taping was also applied to support the function of the gastrocnemius muscle. Distance and time to claudication pain, arterial blood flow in the lower limbs, and the ankle-brachial index were measured. Examination performed after the Kinesio Taping therapy showed a statistically significant increase in gait parameters and muscle strength in patients with intermittent claudication. The Kinesio Taping method has clinically significant effects on the pain-free distance and the time to claudication pain in patients with peripheral artery disease. Kinesio Taping can be used to support non-invasive treatment in patients with intermittent claudication and can serve as an alternative therapy for patients with orthopedic or cardiac contraindications to treadmill training.

Keywords: intermittent claudication, kinesiotaping, peripheral artery disease, treadmill training

Procedia PDF Downloads 192
26415 Experimental Modal Analysis of Reinforced Concrete Square Slabs

Authors: M. S. Ahmed, F. A. Mohammad

Abstract:

The aim of this paper is to perform experimental modal analysis (EMA) of reinforced concrete (RC) square slabs. EMA is the process of determining the modal parameters (natural frequencies, damping factors, and mode shapes) of a structure from a set of frequency response functions (FRFs) through curve fitting. Although experimental modal analysis (or modal testing) has grown steadily in popularity since the advent of the digital FFT spectrum analyzer in the early 1970s, its application to all structural members and materials has not yet been well documented. Therefore, in this work, experimental tests were conducted on RC square specimens (0.6 m × 0.6 m, 40 mm thick). The experimental analysis is based on free boundary conditions. Impact testing, a fast and economical means of finding the modes of vibration of a structure, was used during the experiments. In addition, a PicoScope 6 device and MATLAB software were used to acquire the data and to analyze and plot the frequency response functions. The experimental natural frequencies extracted from the measurements exhibit good agreement with analytical predictions. It is shown that EMA can be usefully employed to characterize the dynamic behavior of RC slabs.
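The frequency-extraction step described above can be sketched in a few lines: peak-picking the magnitude spectrum of an impact-test response yields the natural frequency. The signal below is synthetic (a single damped mode at an assumed 85 Hz with 2% damping), not the authors' measurement data:

```python
import numpy as np

fs = 2000.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s record after the hammer impact
fn, zeta = 85.0, 0.02             # synthetic mode: 85 Hz, 2% damping
fd = fn * np.sqrt(1 - zeta**2)    # damped natural frequency
# Free-decay response of a single-degree-of-freedom system to an impulse.
x = np.exp(-zeta * 2 * np.pi * fn * t) * np.sin(2 * np.pi * fd * t)

spec = np.abs(np.fft.rfft(x))                 # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)   # frequency axis, 0.5 Hz resolution
peak = freqs[np.argmax(spec)]                 # picked natural frequency, Hz
```

In practice the FRF (response spectrum divided by the force spectrum) is curve-fitted rather than peak-picked, but the idea is the same.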

Keywords: natural frequencies, mode shapes, modal analysis, RC slabs

Procedia PDF Downloads 396
26414 Fuzzy Time Series- Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets

Authors: Selin Guney, Andres Riquelme

Abstract:

Among the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms in economic decision making, such as planning business operations and marketing decisions. Governments are also beneficiaries and suppliers of agricultural price forecasts. They use this information to establish proper agricultural policy, and hence the forecasts affect social welfare, while systematic errors in forecasts could lead to a misallocation of scarce resources. Various empirical approaches with different methodologies have been applied to forecast commodity prices. The most commonly used approaches depend on classical time series models that assume the values of the response variables are precise, which is quite often not true in reality. Recently, this literature has largely evolved toward fuzzy time series models, which relax classical time series assumptions such as stationarity and the large-sample-size requirement. Besides, the fuzzy modeling approach allows decision making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes both repeated and nonconsecutive transitions into account. The determination of the interval length is also crucial to forecast accuracy. The problem of choosing the interval length arbitrarily is overcome, and a methodology is proposed that determines the proper interval length, based on the distribution or mean of the first differences of the series, to improve forecast accuracy.
The specific purpose of this paper is to propose and investigate the potential of a new forecasting model that integrates this interval-length methodology with a fuzzy time series-Markov chain model. Moreover, the forecasting accuracy of the proposed integrated model is compared to that of several univariate time series models, and the superiority of the proposed method over competing methods, in terms of both modelling and forecasting, is demonstrated on the basis of forecast evaluation criteria. The application is to daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield, and Roaring River for corn, and Fayetteville, Cofield, and Greenville City for soybeans. One main conclusion is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of selection criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and a non-arbitrary determination of the interval length for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling to commodity price forecasts.
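The abstract does not give the exact interval-length rule, but a mean-of-first-differences determination of the kind referenced can be sketched as follows (a common average-based variant: half the mean absolute first difference, rounded down to its order-of-magnitude base). The price series is hypothetical:

```python
import math

def average_based_length(series):
    """Average-based interval length: half the mean absolute first
    difference, rounded down to its order-of-magnitude base.
    A common variant from the literature; the paper's exact rule may differ."""
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    half_mean = sum(diffs) / len(diffs) / 2.0
    base = 10.0 ** math.floor(math.log10(half_mean))  # 0.01, 0.1, 1, 10, ...
    return math.floor(half_mean / base) * base

prices = [3.40, 3.55, 3.48, 3.62, 3.75, 3.70, 3.90]  # hypothetical corn prices
length = average_based_length(prices)  # interval length for fuzzifying the universe
```

The universe of discourse is then partitioned into intervals of this length before fuzzification and construction of the Markov transition matrix.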

Keywords: commodity, forecast, fuzzy, Markov

Procedia PDF Downloads 208
26413 Classification of Echo Signals Based on Deep Learning

Authors: Aisulu Tileukulova, Zhexebay Dauren

Abstract:

Radar plays an important role in both civil and military fields, and target detection is one of its most important applications. The accuracy of detecting inconspicuous aerial objects in radar facilities is low against a background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks and to train such a network. In this paper, the structure of the convolutional neural network (CNN) consists of 8 convolutional layers and 3 fully connected layers. ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. A data set was formed to train the neural network for target detection. We built a confusion matrix of the CNN model to measure its effectiveness. The results showed a test accuracy of 95.7%. Classification of echo signals using a CNN thus shows high accuracy and significantly speeds up target prediction.
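The confusion-matrix evaluation mentioned above can be sketched as follows; the labels here are made up for illustration, whereas the paper's matrix is built over real echo-signal classes:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count predictions per (true class, predicted class) pair."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1          # rows: true class, columns: predicted class
    return cm

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # 0 = noise, 1 = target (illustrative)
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
cm = confusion_matrix(y_true, y_pred, 2)
accuracy = np.trace(cm) / cm.sum()   # fraction of samples on the diagonal
```

Off-diagonal entries separate false alarms (noise classified as target) from missed detections, which overall accuracy alone hides.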

Keywords: radar, neural network, convolutional neural network, echo signals

Procedia PDF Downloads 333
26412 An Improved Multiple Scattering Reflectance Model Based on Specular V-Cavity

Authors: Hongbin Yang, Mingxue Liao, Changwen Zheng, Mengyao Kong, Chaohui Liu

Abstract:

Microfacet-based reflection models are widely used to model light reflection from rough surfaces. Microfacet models have become the standard surface-material building block for describing specular components with varying roughness; yet, while they possess many desirable properties and produce convincing results, their design ignores important sources of scattering, which can cause a significant loss of energy. Specifically, they simulate only the single scattering on the microfacets and ignore the subsequent interactions. As roughness increases, these interactions become more and more important, so a multiple-scattering microfacet model based on specular V-cavities has been presented for this important open problem. However, that model spends much unnecessary rendering time because it sets the same number of scattering events for surfaces of different roughness. In this paper, we design a geometric attenuation term G to compute the BRDF (bidirectional reflectance distribution function) for multiple scattering on rough surfaces. Moreover, we determine the number of scattering events by deterministic heuristics for surfaces of different roughness. As a result, our model produces an appearance similar to that of the state-of-the-art model with significantly improved rendering efficiency. Finally, we derive a multiple-scattering BRDF based on the original microfacet framework.
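For context, the classical single-scattering geometric attenuation term for a specular V-cavity (the Cook-Torrance form that multiple-scattering extensions build on) can be sketched as below. This is the textbook term, not the paper's new multiple-scattering G; the vectors are illustrative:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    m = math.sqrt(dot(a, a))
    return tuple(x / m for x in a)

def v_cavity_g(n, l, v):
    """Single-scattering V-cavity geometric attenuation (Cook-Torrance):
    G = min(1, 2(N.H)(N.V)/(V.H), 2(N.H)(N.L)/(V.H)).
    n, l, v: unit surface normal, light direction, view direction."""
    h = normalize(tuple(li + vi for li, vi in zip(l, v)))  # half vector
    nh, nv, nl, vh = dot(n, h), dot(n, v), dot(n, l), dot(v, h)
    return min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)

n = (0.0, 0.0, 1.0)                                         # surface normal
l = normalize((0.3, 0.0, 1.0))                              # light direction
g_normal = v_cavity_g(n, l, normalize((-0.5, 0.2, 1.0)))    # near-normal view
g_grazing = v_cavity_g(n, l, normalize((3.0, 0.0, 0.2)))    # grazing view
```

The energy this G throws away at grazing angles is exactly what a multiple-scattering model recovers by letting light bounce again inside the cavity.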

Keywords: bidirectional reflection distribution function, BRDF, geometric attenuation term, multiple scattering, V-cavity model

Procedia PDF Downloads 104
26411 Suitability Number of Coarse-Grained Soils and Relationships among Fineness Modulus, Density and Strength Parameters

Authors: Khandaker Fariha Ahmed, Md. Noman Munshi, Tarin Sultana, Md. Zoynul Abedin

Abstract:

The suitability number (SN) is perhaps one of the most important parameters of a coarse-grained soil in assessing its appropriateness for use as backfill in retaining structures, sand compaction piles, vibro-compaction, and other similar foundation and ground improvement works. Though determined in an empirical manner, it is imperative to study the SN to understand its relation to other aggregate properties, such as the fineness modulus (FM) and the strength and density properties of sandy soil. The present paper reports the findings of a study examining these properties of sandy soil. Random numbers were generated to obtain the percent fineness on various sieve sizes, and the fineness modulus and suitability number were computed. Sand samples were collected from the field, and test samples were prepared to determine the maximum density, minimum density, and shear strength parameter φ for a particular fineness modulus and the corresponding suitability number. Five samples with SN rated excellent (0-10) and three samples with SN rated fair (20-30) were taken, and the relevant tests were performed. The data obtained from the laboratory tests were statistically analyzed. Results show that as SN increases, FM decreases. Within the SN range rated excellent (0-10), there is a decreasing trend of φ for higher values of SN. SN was found to depend on various combinations of grain-size properties such as D10, D30, D20, and D50. Strong linear relationships were obtained between SN and FM (R² = 0.93) and between SN and φ (R² = 0.94). Correlation equations are proposed to define the relationships among SN, φ, and FM.
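The abstract treats SN empirically; for reference, the standard formula and rating bands (commonly attributed to Brown, 1977, and quoted here from the ground-improvement literature rather than from this paper) can be sketched as follows, with a hypothetical gradation:

```python
import math

def suitability_number(d50, d20, d10):
    """Suitability number for vibro-compaction backfill (Brown, 1977):
    SN = 1.7 * sqrt(3/D50^2 + 1/D20^2 + 1/D10^2), grain sizes in mm."""
    return 1.7 * math.sqrt(3.0 / d50**2 + 1.0 / d20**2 + 1.0 / d10**2)

def rating(sn):
    """Conventional rating bands for SN."""
    if sn <= 10:
        return "excellent"
    if sn <= 20:
        return "good"
    if sn <= 30:
        return "fair"
    if sn <= 50:
        return "poor"
    return "unsuitable"

sn = suitability_number(d50=0.6, d20=0.3, d10=0.15)  # hypothetical gradation, mm
band = rating(sn)
```

Because SN is driven by D50, D20, and D10, its correlation with the fineness modulus (a weighted sum over the whole gradation curve) is an empirical rather than an algebraic relationship.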

Keywords: density, fineness modulus, shear strength parameter, suitability number

Procedia PDF Downloads 95
26410 Growth of Algal Biomass in Laboratory and in Pilot-Scale Algal Photobioreactors in the Temperate Climate of Southern Ireland

Authors: Linda A. O’Higgins, Astrid Wingler, Jorge Oliveira

Abstract:

The growth of Chlorella vulgaris was characterized as a function of irradiance in a laboratory turbidostat (1 L) and compared to batch growth in sunlit modules (5-25 L) of the commercial Phytobag photobioreactor. The effects of variable sunlight and culture density were deconvoluted by a mathematical model. The analysis showed that algal growth was light-limited, due to shading by external construction elements and to light attenuation within the algal bags. The model was also used to predict maximum biomass productivity. The manipulative experiments and the model predictions were then confronted with data from a production season of a 10 m² pilot-scale Phytobag photobioreactor (10,000 L). The analysis confirmed light limitation in all three photobioreactors. An additional limitation of biomass productivity was caused by the nitrogen starvation used to induce lipid accumulation. Reduction of shading and separation of biomass and lipid production are proposed for future optimization.
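The deconvolution model itself is not reproduced in the abstract; a minimal sketch of the kind of light-limited growth description involved (Beer-Lambert depth-averaging of irradiance feeding a Monod-type growth law) might look like this, with all parameter values illustrative:

```python
import math

def avg_irradiance(i0, x, k=200.0, depth=0.05):
    """Depth-averaged irradiance by Beer-Lambert attenuation:
    I_avg = I0 * (1 - exp(-k*x*L)) / (k*x*L).
    i0: surface irradiance, x: biomass density (g/L),
    k: biomass-specific attenuation, depth L in m -- all values assumed."""
    a = k * x * depth
    return i0 * (1.0 - math.exp(-a)) / a

def growth_rate(i_avg, mu_max=1.2, k_i=80.0):
    """Monod-type light-limited specific growth rate (1/day, assumed)."""
    return mu_max * i_avg / (i_avg + k_i)

dense = growth_rate(avg_irradiance(400.0, x=2.0))    # self-shaded culture
dilute = growth_rate(avg_irradiance(400.0, x=0.1))   # optically thin culture
```

Such a model separates the two confounded drivers: for a fixed surface irradiance, the growth rate drops as the culture densifies and shades itself, which is the light limitation reported above.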

Keywords: microalgae, batch cultivation, Chlorella vulgaris, mathematical model, photobioreactor, scale-up

Procedia PDF Downloads 89
26409 Evaluating Alternative Structures for Prefix Trees

Authors: Feras Hanandeh, Izzat Alsmadi, Muhammad M. Kwafha

Abstract:

Prefix trees, or tries, are data structures used to store data or indexes of data. The goal is to be able to store and retrieve data by executing queries quickly and reliably. In principle, the structure of the trie depends on having letters in nodes at the different levels that point to the actual words in the leaves. However, the exact structure of the trie may vary in several respects. In this paper, we evaluated different structures for building tries. Using datasets of words of different sizes, we evaluated the different forms of trie structures. Results showed that some characteristics may significantly impact, positively or negatively, the size and performance of the trie. Among the forms and structures investigated, using an array of pointers at each level to represent the different alphabet letters proved to be the best choice.
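The winning layout, an array of child pointers per node with one slot per alphabet letter, can be sketched as follows (lowercase a-z assumed for illustration):

```python
class TrieNode:
    __slots__ = ("children", "is_word")

    def __init__(self):
        # One slot per letter: the array-of-pointers layout gives O(1)
        # child lookup at the cost of 26 slots per node.
        self.children = [None] * 26
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            i = ord(ch) - ord("a")          # letter -> array index
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.is_word = True                  # mark a complete word

    def search(self, word):
        node = self.root
        for ch in word:
            node = node.children[ord(ch) - ord("a")]
            if node is None:
                return False
        return node.is_word

t = Trie()
for w in ("tree", "trie", "trip"):
    t.insert(w)
```

The trade-off the paper measures is exactly this one: the fixed array wastes space in sparse nodes but avoids the per-child search cost of linked-list or map-based alternatives.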

Keywords: data structures, indexing, tree structure, trie, information retrieval

Procedia PDF Downloads 444
26408 Data Management System for Environmental Remediation

Authors: Elizaveta Petelina, Anton Sizo

Abstract:

Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Appropriate data management is therefore required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a geodatabase that contains a spatial database to store and query spatially distributed data; a GIS and web GIS component that combines desktop and server-based GIS solutions; a field data collection component that contains tools for field work; a quality assurance (QA)/quality control (QC) component that combines operational procedures for QA and measures for QC; a data import and export component that includes tools and templates to support project data flow; a lab data component that provides a connection between the EDMS and laboratory information management systems; and a reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada.
The EDMS has effectively facilitated integrated decision making for CLEANS project managers and transparency amongst stakeholders.

Keywords: data management, environmental remediation, geographic information system, GIS, decision making

Procedia PDF Downloads 142
26407 A Model for Reverse-Mentoring in Education

Authors: Sabine A. Zauchner-Studnicka

Abstract:

As the term indicates, reverse-mentoring flips the classical roles of mentoring: in school, students take over the role of mentors for adults, i.e., teachers or parents. Reverse-mentoring originally stems from US enterprises, which implemented this innovative method in order to benefit from the resources of skilled younger employees for the enhancement of the IT competences of senior colleagues. However, reverse-mentoring in schools worldwide remains rare. Based on empirical studies and theoretical approaches, this article develops an implementation model for reverse-mentoring in order to bring the significant potential that reverse-mentoring has for education into practice.

Keywords: reverse-mentoring, innovation in education, implementation model, school education

Procedia PDF Downloads 240
26406 TiO₂ Nanoparticles Induce DNA Damage and Expression of Biomarker of Oxidative Stress on Human Spermatozoa

Authors: Elena Maria Scalisi

Abstract:

The increasing production and use of TiO₂ nanoparticles (NPs) have inevitably led to their release into the environment, thereby posing a threat to organisms and to humans. Human exposure to TiO₂-NPs may occur during both manufacturing and use. TiO₂-NPs are common in consumer products for dermal application, toothpaste, food colorants, and nutritional supplements, so oral exposure may occur during the use of such products. In the body, TiO₂-NPs can, thanks to their small size (<100 nm), cross the blood-testis barrier, inducing effects on the testis and hence on male reproductive health. The nanoscale size of TiO₂ increases the surface-to-volume ratio, making the particles more reactive in a cell and increasing their ability to produce reactive oxygen species (ROS). In male germ cells, ROS at physiological levels have important implications in maintaining the normal functions of mature spermatozoa; moreover, in spermatozoa they are important signaling molecules for hyperactivation and the acrosome reaction. Nevertheless, an excess of ROS from external inputs such as NPs can increase oxidative stress (OS), which results in DNA damage and apoptosis. The aim of our study was to investigate the impact of TiO₂ NPs on human spermatozoa by evaluating DNA damage and the expression of proteins involved in cell stress. According to the WHO 2021 guidelines, we exposed human spermatozoa in vitro to TiO₂ NPs at concentrations of 50 ppm, 100 ppm, 250 ppm, and 500 ppm for 1 hour (at 37°C and 5% CO₂). DNA damage was evaluated by the Sperm Chromatin Dispersion (SCD) test and the TUNEL assay; moreover, we evaluated the expression of biomarkers of oxidative stress, namely Heat Shock Protein 70 (HSP70) and metallothioneins (MTs). Sperm parameters such as motility and viability were also evaluated. Our results did not show a significant reduction in spermatozoa motility at the end of the exposure.
On the contrary, progressive motility increased at the highest concentration (500 ppm), a statistically significant difference compared to the control (p < 0.05). Viability was likewise not changed by exposure to TiO₂-NPs. However, increased DNA damage was observed at all concentrations, and the TUNEL assay highlighted the presence of single-strand breaks in the DNA. The spermatozoa responded to the presence of TiO₂-NPs with the expression of HSP70, which has a protective function because it allows the maintenance of cellular homeostasis under stressful or lethal conditions. Positivity for MTs was observed mainly at the concentration of 4 mg/L. Although the biological and physiological function of metallothioneins in the male genital organs is unclear, our results highlighted that the MTs expressed by spermatozoa maintain their biological role of detoxification from metals. Our results can add to the data in the literature on the toxicity of TiO₂-NPs to reproduction.

Keywords: human spermatozoa, DNA damage, TiO₂-NPs, biomarkers

Procedia PDF Downloads 136
26405 An Efficient Approach for Speed up Non-Negative Matrix Factorization for High Dimensional Data

Authors: Bharat Singh, Om Prakash Vyas

Abstract:

Nowadays, applications dealing with high-dimensional data are widely used in popular areas, and various approaches to handle such data have been developed by researchers over the last few decades. One of the problems with NMF approaches is that their random initialization cannot provide absolute optimization within a limited number of iterations, only local optimization. For this reason, we propose a new approach that chooses the initial values of the decomposition in order to address this computationally expensive issue. We have devised an algorithm for initializing the values of the decomposed matrices based on PSO (particle swarm optimization). Through experimental results, we show that the proposed method converges much faster than other low-rank approximation techniques, such as simple multiplicative-update NMF and ACLS.
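As a point of reference, the simple multiplicative-update NMF baseline mentioned above can be sketched with NumPy; the PSO-based initialization proposed in the paper would replace the random starting matrices below:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 12))           # nonnegative data matrix (synthetic)
k = 4                              # target rank
W = rng.random((20, k)) + 1e-3     # random init -- the step the paper
H = rng.random((k, 12)) + 1e-3     # replaces with PSO-chosen values

eps = 1e-9                         # guards against division by zero
err0 = np.linalg.norm(V - W @ H)   # initial reconstruction error
for _ in range(100):
    # Lee-Seung multiplicative updates: nonnegativity is preserved and
    # the Frobenius reconstruction error is non-increasing.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err1 = np.linalg.norm(V - W @ H)   # error after 100 updates
```

Because the updates only rescale existing entries, a good starting point matters: the iteration converges to a local optimum near its initialization, which is the motivation for seeding W and H with PSO.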

Keywords: ALS, NMF, high dimensional data, RMSE

Procedia PDF Downloads 332
26404 Opacity Synthesis with Orwellian Observers

Authors: Moez Yeddes

Abstract:

The property of opacity is widely used in the formal verification of security in computer systems and protocols. Opacity is a general language-theoretic scheme within which many security properties of a system can be expressed. A secret behaviour of a system is opaque if a passive attacker can never deduce its occurrence from observation of the system. Instead of considering static observability, where the set of observable events is fixed off-line, or dynamic observability, where the set of observable events changes over time depending on the history of the trace, we introduce Orwellian partial observability, where unobservable events are not revealed provided that downgrading events never occur in the future of the trace. Orwellian partial observability is needed to model intransitive information flow; this observation function is known as the ipurge function. We showed in previous work that verifying whether a regular secret is opaque for a regular language L w.r.t. an Orwellian projection is PSPACE-complete, while the problem has been proved undecidable even for a regular language L w.r.t. a general Orwellian observation function. In this paper, we address two problems of opacification of a regular secret ϕ for a regular language L w.r.t. an Orwellian projection: given L and a secret ϕ ∈ L, the first problem consists of computing some minimal regular super-language M of L, if it exists, such that ϕ is opaque for M; the second consists of computing the supremal sub-language M′ of L such that ϕ is opaque for M′. We derive both language-theoretic characterizations and algorithms to solve these two dual problems.
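On finite toy languages, both an Orwellian-style projection and the opacity check can be illustrated directly. This brute-force sketch is ours, for intuition only; the paper works with regular languages and automata-based algorithms, and its ipurge function is more refined:

```python
def orwellian_obs(trace, observable, downgrading):
    """Simplified Orwellian projection: an event outside the observable
    set is revealed anyway if some downgrading event occurs later in the
    trace; otherwise it is erased (toy ipurge-like rule)."""
    out = []
    for i, e in enumerate(trace):
        if e in observable or any(x in downgrading for x in trace[i + 1:]):
            out.append(e)
    return tuple(out)

def is_opaque(language, secret, observable, downgrading):
    """phi is opaque for L iff every secret trace is observationally
    indistinguishable from some non-secret trace of L."""
    obs = lambda t: orwellian_obs(t, observable, downgrading)
    non_secret_obs = {obs(t) for t in language if t not in secret}
    return all(obs(t) in non_secret_obs for t in secret)

# 'h' is a high (unobservable) event; 'd' is a downgrading event.
L = {("h", "a"), ("a",), ("h", "d", "a")}
secret = {("h", "a")}
opaque = is_opaque(L, secret, observable={"a", "d"}, downgrading={"d"})
```

In this example the secret trace ("h", "a") projects to ("a",) because no downgrade follows "h", so the attacker cannot distinguish it from the innocent trace ("a",); the trace ("h", "d", "a"), by contrast, reveals "h".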

Keywords: security policies, opacity, formal verification, orwellian observation

Procedia PDF Downloads 213
26403 Institutional Quality and Tax Compliance: A Cross-Country Regression Evidence

Authors: Debi Konukcu Onal, Tarkan Cavusoglu

Abstract:

In modern societies, the costs of public goods and services are shared through taxes paid by citizens. However, taxation has always been a frictional issue, as tax obligations are perceived by taxpayers as a financial burden rather than as a merit that fulfills the redistribution, regulation, and stabilization functions of the welfare state. The tax compliance literature has evolved toward discussing why people still pay taxes in systems with low costs of legal enforcement. Related empirical and theoretical works show that a wide range of socially oriented behavioral factors can stimulate voluntary compliance, as well as subversive effects. These behavioral motivations are argued to be driven by the self-enforcing rules of informal institutions, either independently or through interactions with the legal orders set by formal institutions. The main focus of this study is to investigate empirically whether institutional particularities play a significant role in explaining cross-country differences in tax noncompliance levels. Part of the controversy about the driving forces behind tax noncompliance may be attributed to the lack of empirical evidence. Thus, this study aims to fill this gap through regression estimates, which help to trace the link between institutional quality and noncompliance on a cross-country basis. The tax evasion estimates of Buehn and Schneider are used as the proxy measure for tax noncompliance levels. Institutional quality is quantified by three different indicators (percentile ranks of the Worldwide Governance Indicators, ratings of the International Country Risk Guide, and the country ratings of Freedom in the World). Robust least squares and threshold regression estimates based on a sample of Organisation for Economic Co-operation and Development (OECD) countries imply that tax compliance increases with institutional quality.
Moreover, a threshold-based asymmetry is detected in the effect of institutional quality on tax noncompliance: the negative effects of tax burdens on compliance are found to be more pronounced in countries with institutional quality below a certain threshold. These findings are robust to all alternative indicators of institutional quality, supporting a significant interaction between societal values and individual taxpayer decisions.
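The grid-search logic behind a single-threshold regression of the kind used here can be sketched as follows: fit separate OLS lines below and above each candidate threshold and keep the split that minimizes the pooled sum of squared residuals. The data below are synthetic and the variable names are our assumptions, not the study's series:

```python
import numpy as np

rng = np.random.default_rng(1)
quality = rng.uniform(0, 1, 200)                 # institutional-quality index
slope = np.where(quality < 0.5, -2.0, -0.5)      # stronger effect below the threshold
noncompliance = 10 + slope * quality + rng.normal(0, 0.1, 200)

def sse(x, y):
    """Sum of squared residuals of an OLS line fitted to (x, y)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

grid = np.linspace(0.2, 0.8, 61)                 # candidate thresholds
best_tau = min(
    grid,
    key=lambda tau: sse(quality[quality < tau], noncompliance[quality < tau])
    + sse(quality[quality >= tau], noncompliance[quality >= tau]),
)
```

The estimated threshold recovers the break built into the synthetic data; in the study, inference on the threshold additionally requires bootstrap-style significance testing.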

Keywords: institutional quality, OECD economies, tax compliance, tax evasion

Procedia PDF Downloads 119
26402 Software Transactional Memory in a Dynamic Programming Language at Virtual Machine Level

Authors: Szu-Kai Hsu, Po-Ching Lin

Abstract:

As more and more multi-core processors emerge, the traditional sequential programming paradigm no longer suffices, yet only a few modern dynamic programming languages can leverage this advantage. Ruby, for example, despite its wide adoption, includes only threads as a simple parallel primitive, and the global virtual machine lock of the official Ruby runtime makes it impossible to exploit full parallelism. Though various alternative Ruby implementations do eliminate the global virtual machine lock, they only provide developers with dated locking mechanisms for data synchronization. Traditional locking mechanisms, however, are error-prone by nature. Software transactional memory (STM) is one of the promising alternatives. This paper introduces a new virtual machine, GobiesVM, that provides a native software-transactional-memory-based solution for dynamic programming languages to exploit parallelism. We also propose a simplified variation of the Transactional Locking II (TL2) algorithm. The empirical results of our experiments show that STM support at the virtual machine level enables developers to write straightforward code without compromising parallelism or sacrificing thread safety. Existing source code requires only minimal or even no modification, which allows developers to easily switch their legacy codebase to a parallel environment. The performance evaluations of GobiesVM also indicate that the difference between sequential and parallel execution is significant.
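A toy, single-lock sketch of the TL2-style commit protocol (global version clock, buffered writes, read-set validation at commit) conveys the core idea; this is our simplified illustration, not GobiesVM's implementation, and a real TL2 uses per-location versioned locks instead of one global commit lock:

```python
import threading

class TVar:
    """A transactional variable: a value plus the clock tick that last wrote it."""
    def __init__(self, value):
        self.value, self.version = value, 0

clock = 0                      # global version clock
clock_lock = threading.Lock()  # single lock standing in for TL2's per-location locks

def atomically(tvars, transaction):
    """Retry `transaction` until its read set is consistent at commit time."""
    global clock
    while True:
        read_version = clock           # snapshot the clock at transaction start
        reads, writes = {}, {}
        transaction(reads, writes, tvars)   # run speculatively, buffering effects
        with clock_lock:                    # commit phase
            if all(tv.version <= read_version for tv in reads):
                clock += 1                  # validation passed: publish writes
                for tv, val in writes.items():
                    tv.value, tv.version = val, clock
                return
        # validation failed: some TVar changed underneath us -- retry

acct_a, acct_b = TVar(100), TVar(0)

def transfer(reads, writes, tvars):
    a, b = tvars
    reads[a], reads[b] = a.value, b.value   # record the read set
    writes[a] = reads[a] - 30               # buffered writes, applied only
    writes[b] = reads[b] + 30               # if validation succeeds

atomically((acct_a, acct_b), transfer)
```

The appeal for a dynamic-language VM is visible even in the toy: `transfer` contains no locks at all, and atomicity comes entirely from the runtime's validate-then-publish commit.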

Keywords: global interpreter lock, ruby, software transactional memory, virtual machine

Procedia PDF Downloads 268