Search results for: fuzzy sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1892

392 High Input Driven Factors in Idea Campaigns in Large Organizations: A Case Depicting Best Practices

Authors: Babar Rasheed, Saad Ghafoor

Abstract:

Introduction: Idea campaigns are commonly held across organizations to generate employee engagement and are specifically designed to identify and solve prevalent issues. It is argued that numerous organizations fail to achieve their desired goals despite arranging such campaigns and investing heavily in them. There are, however, practices that some organizations use to achieve a higher degree of effectiveness, and these practices can be explored by research so that other organizations may use them. Purpose: The aim of this research is to surface the idea management practices of a leading electric company with global operations. The study involves a large, multi-site organization, which faces added challenges in managing ideas from employees compared with smaller organizations. The study aims to highlight the factors considered as the idea management team sets the strategy for a campaign, defines its terms and rewards, follows up with employees and, lastly, evaluates and awards ideas. Methodology: The study is conducted in a leading electric appliance corporation that has a large number of employees and operates in numerous regions of the world. A total of 7 interviews are carried out, involving the chief innovation officer, the innovation manager and members of the idea management and evaluation teams. The interviews are carried out either on Skype or in person, depending on the availability of the interviewee. Findings: As this is a working paper and the study is under way, it is anticipated that valuable information will be obtained about how idea management systems are governed and how idea campaigns are carried out. The findings may be particularly useful for innovation consultants as resources they can use to promote idea campaigning. The usefulness of the best practices highlighted as a result is, in any case, the most valuable output of this study.

Keywords: employee engagement, motivation, idea campaigns, large organizations, best practices, employees input, organizational output

Procedia PDF Downloads 174
391 Enhancing Learners' Metacognitive, Cultural and Linguistic Proficiency through Egyptian Series

Authors: Hanan Eltayeb, Reem Al Refaie

Abstract:

To connect and relate to shows spoken in a foreign language, advanced learners must understand not only linguistic inferences but also the cultural, metacognitive, and pragmatic connotations in colloquial Egyptian TV series. These connotations are needed to understand the different facets of the dramas put before them, and they are also continually developed and reinforced through watching these shows. The inferences have become a staple of Egyptian colloquial culture over the years, making their way into day-to-day conversations as Egyptians use them to speak, relate, joke, and connect with each other, even without having known one another before. Advanced learners need to understand these inferences not only to watch these shows but also to converse with Egyptians on a level that surpasses the formal or standard register. When faced with some of the recent shows on Egyptian screens, learners had difficulty understanding the pragmatic, cultural, and religious background of the target language and consequently were not able to interact effectively with a native speaker in real-life situations. This study aims to enhance the linguistic and cultural proficiency of learners through the study of two genres of colloquial Egyptian TV series. Study samples were drawn from two recent comedic and social Egyptian series ('The Seventh Neighbor' سابع جار and 'Nelly and Sherihan' نيللي و شريهان). When learners watch such series, they usually have trouble understanding inferences that relate to the social, religious, and political events addressed in the series. Using discourse analysis of the semantic, pragmatic, cultural, and linguistic characteristics of the target language, some major deductions were highlighted and repeated, showing a pattern in both series. The paper concludes that there are many sets of lingual and para-lingual phrases, idioms, and proverbs to be acquired and used effectively by teaching these series. The strategies adopted in the study can be applied to other types of media, such as movies, TV shows, and even cartoons, to enhance student proficiency.

Keywords: Egyptian series, culture, linguistic competence, pragmatics, semantics, social

Procedia PDF Downloads 143
390 An Approach for Ensuring Data Flow in Freight Delivery and Management Systems

Authors: Aurelija Burinskienė, Dalė Dzemydienė, Arūnas Miliauskas

Abstract:

This research aims at developing an approach for more effective freight delivery and transportation process management. Road congestion and the identification of its causes are important, as are the recognition and management of context information. Measuring many parameters during the transportation period and properly monitoring driver work have become a problem. In some situations, the number of vehicles passing a given point per unit of time can be evaluated for drivers. The collected data are mainly used to establish new trips. The flow of data is more complex in urban areas, where the movement of freight is reported in detail, including information at street level. When traffic density is extremely high, as in congestion, and traffic speed is very low, data transmission reaches its peak. Different data sets are generated depending on the type of freight delivery network. There are three types of networks: long-distance delivery networks, last-mile delivery networks and mode-based delivery networks; the last one includes different modes, in particular railways and other networks. When freight delivery is switched from one of the above network types to another, more data could be included for reporting purposes, and vice versa. In this case, a significant amount of these data is used for control operations, and the problem requires an integrated methodological approach. The paper presents an approach for providing e-services for drivers that includes an assessment of the multi-component infrastructure needed for freight delivery according to the network type. Such a methodology is required to evaluate data flow conditions and overloads, and to minimize time gaps in data reporting. The results obtained show that the proposed methodological approach can support management and decision-making processes by incorporating networking specifics and helping to minimize overloads in data reporting.

Keywords: transportation networks, freight delivery, data flow, monitoring, e-services

Procedia PDF Downloads 125
389 The Effect of Amendment of Soil with Rice Husk Charcoal Coated Urea and Rice Straw Compost on Nitrogen, Phosphorus and Potassium Leaching

Authors: D. A. S. Gamage, B. F. A. Basnayake, W. A. J. M. De Costa

Abstract:

Agriculture plays an important and strategic role in the performance of the Sri Lankan national economy. Rice is the staple food of Sri Lankans; thus, rice cultivation is the major agricultural activity of the country. In Sri Lanka, a considerable amount of the rice straw and rice husk produced goes to waste. Hence, there is great potential for producing quality compost and rice husk charcoal. The concept of making rice straw compost and rice husk charcoal is practicable in Sri Lanka, where more than 40% of farmers are engaged in rice cultivation. The application of inorganic nitrogen fertilizer has become a burden to the country. Rice husk charcoal as a coating material to retain N fertilizer is a suitable solution for the gradual release of nitrogenous compounds. The objective of this study was to produce rice husk charcoal coated urea as a slow-releasing fertilizer, together with rice straw compost, and to compare the leaching losses of nitrogen, phosphorus and potassium using leaching columns. Leaching columns were prepared from 1.2 m tall PVC pipes with a diameter of 15 cm, with a sampling port attached to the bottom end of the column cap. Leachates (100 ml per leaching column) were obtained from two sets of leaching columns (four columns per set). Sampling was done once a week over a 3-month period. Rice husk charcoal coated urea can potentially be used as a slow-releasing nitrogen fertilizer that reduces leaching losses of urea. It also helps reduce phosphate and potassium leaching. The cyclic pattern of phosphate release is an important finding which could be central to defining microbial behavior in soils; the fluctuations of phosphate may follow a cycle of about 28 days. In addition, rice straw compost and rice husk charcoal coating are less costly and contribute to mitigating the pollution of water bodies by inorganic fertilizers.

Keywords: leaching, mitigate, rice husk charcoal, slow releasing fertilizer

Procedia PDF Downloads 326
388 Sex and Sexuality Communication in African Families: The Dynamics of Openness and Closedness

Authors: Victorine Mbong Shu

Abstract:

Very little research exists on family sex and sexuality communication and identity formation, and on how such communication helps adolescents in South Africa form their sexual identities. This study is designed to explore the impact of sexual communication in African families and the dynamics that influence the openness or closedness of adolescent sexual identities. The primary objectives of this study are threefold: to understand how sexuality communication in African families impacts the closedness and/or openness of adolescent African identities; to explore the nature of African children's sexuality given the status of their families' communication about sex; and to describe how parental or adult sexual knowledge, attitudes and values of sex are translated to children in African families, if at all. This study seeks answers to challenges faced by African parents and caregivers of adolescent children in South Africa regarding sex and sexuality communication and their shifting identities in different spaces. Its outcome seeks to empower these families to continue to communicate effectively about sex and sexuality at different stages and in different circumstances. Two sets of people are interviewed separately in order to explore issues of familial communication and to understand how, together with religion and culture, adolescents are socialised to form the social and gender identities that they take into adulthood: parents of adolescents, and young adult children who spoke in retrospect about when they were adolescents. The results of this study will fill knowledge gaps, guided by the chosen theory of communication, on the topic of sex and sexuality communication in African families in South Africa and the dynamics of privacy that influence identity formation. A subset of the 40 conversations, comprising 5 female parents, 5 male parents, 5 young female adults, and 5 young male adults, was used for this analysis. The preliminary results revealed five emergent themes informed by the research questions and the theoretical framework of this study: open communication; discomfort discussing sex and sexuality; the importance of sex communication to African parents; factors influencing African families' communication about sex and sexuality; and privacy and boundaries.

Keywords: sex, sexuality, communication, African families, adolescents

Procedia PDF Downloads 79
387 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm

Authors: Lydia Novozhilova, Vladimir Urazhdin

Abstract:

An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessel and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space; every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined on this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by using the Support Vector Machine (SVM) classification algorithm of machine learning to identify the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating admissible and inadmissible sets of design parameters with maximal margins. Optimization of the vessel parameters within the feasibility domain is then performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. As a functional constraint, the von Mises stress criterion is used, but any other stability constraint admitting a mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
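
A minimal sketch of the two-step idea, not the authors' implementation: an SVM boundary is learned from simulation-labeled design samples and then used as a constraint inside a standard constrained optimizer. The parameter names, the admissibility rule, and the mass proxy below are hypothetical placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical design samples: [wall thickness (mm), ring spacing (mm)]
X = rng.uniform([2.0, 50.0], [12.0, 400.0], size=(500, 2))
# Placeholder "simulation label": 1 = admissible (stress below limit), 0 = not
y = (X[:, 0] / np.sqrt(X[:, 1]) > 0.45).astype(int)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)   # feasibility boundary

def mass_proxy(x):                 # objective: a lighter vessel is better
    thickness, spacing = x
    return thickness + 80.0 / spacing

cons = [{"type": "ineq",           # stay on the admissible side of the SVM boundary
         "fun": lambda x: clf.decision_function(x.reshape(1, -1))[0]}]
res = minimize(mass_proxy, x0=np.array([8.0, 150.0]),
               bounds=[(2.0, 12.0), (50.0, 400.0)], constraints=cons)
print("optimal design (thickness, spacing):", res.x)
```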

Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier

Procedia PDF Downloads 327
386 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data

Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple

Abstract:

In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and the outputs were then combined to generate a single persistent water body product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. Both raster-based and vector-based accuracy assessments showed that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
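
An illustrative sketch of the processing chain described above (per-date unsupervised water/non-water classification, persistence combination, thinning); the array sizes, persistence threshold, and random stand-in imagery are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.morphology import skeletonize

def water_mask(backscatter_db):
    """Two-class k-means on a single-date Sentinel-1 backscatter image (dB)."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = km.fit_predict(backscatter_db.reshape(-1, 1)).reshape(backscatter_db.shape)
    water_cluster = np.argmin(km.cluster_centers_)   # water is the low-backscatter cluster
    return labels == water_cluster

# Stack of multi-temporal scenes (random stand-ins for real imagery)
scenes = [np.random.normal(-12, 4, (256, 256)) for _ in range(6)]
masks = np.stack([water_mask(s) for s in scenes])

persistent_water = masks.mean(axis=0) >= 0.8          # present in at least 80% of dates
centrelines = skeletonize(persistent_water)           # thinning to river centre lines
print("centreline pixels:", int(centrelines.sum()))
```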

Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network

Procedia PDF Downloads 139
385 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site to improve subsurface imaging results. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrated software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 157
384 “A Watched Pot Never Boils.” Exploring the Impact of Job Autonomy on Organizational Commitment among New Employees: A Comprehensive Study of How Empowerment and Independence Influence Workplace Loyalty and Engagement in Early Career Stages

Authors: Atnafu Ashenef Wondim

Abstract:

In today’s highly competitive business environment, employees are considered a source of competitive advantage. Researchers have looked into job autonomy's effect on organizational commitment and have argued that superior organizational performance strongly depends on the effort and commitment of employees. The purpose of this study was to explore the relationship between job autonomy and organizational commitment from the newcomer's point of view. The mediating role of employee engagement (physical, emotional, and cognitive) was also examined in the case of Ethiopian commercial banks. An exploratory survey research design with a mixed-method approach, including partial least squares structural equation modeling and the Fuzzy-Set Qualitative Comparative Analysis technique, was used to address a sample of 348 new employees. In-depth interviews with purposive and convenience sampling techniques were conducted with new employees (n=43). The results confirmed that job autonomy had positive, significant direct effects on physical engagement, emotional engagement, and cognitive engagement (path coeffs. = 0.874, 0.931, and 0.893). The results showed that the employee engagement driver, physical engagement, had a positive significant influence on affective commitment (path coeff. = 0.187) and normative commitment (path coeff. = 0.512) but no significant effect on continuance commitment. Employee engagement partially mediates the relationship between job autonomy and organizational commitment, supporting the indirect effects of job autonomy on affective, continuance, and normative commitment through physical engagement. The findings of this study add new perspectives by positioning the work within a complex African organizational setting and by expanding the job autonomy and organizational commitment literature, which will benefit future research; much of that literature has been produced within well-established organizational business contexts in Western developed countries. The findings provide fresh information on the enablers of job autonomy and organizational commitment that can assist in the formulation of better policies and strategies for their efficient adoption.

Keywords: employee engagement, job autonomy, organizational commitment, social exchange theory

Procedia PDF Downloads 30
383 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE), to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique: for each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of overlap between the sample distributions can be reduced. The maximum likelihood estimator and the related unbiased estimator are usually not ideal estimators in high-dimensional inference problems, particularly in small-sample situations; hence, an improved estimator obtained by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results: according to experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
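
A minimal sketch of the kernelized feature extraction idea using KPCA, one of the baselines compared above; DNP/KDNP itself is not a standard library routine, so only the kernel-based extraction and a simple downstream classifier are illustrated, on synthetic stand-in data.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Hypothetical hyperspectral pixels: 300 samples x 100 bands, two classes
X = np.vstack([rng.normal(0.0, 1.0, (150, 100)),
               rng.normal(0.5, 1.0, (150, 100))])
y = np.repeat([0, 1], 150)

# Nonlinear feature extraction with an RBF kernel, then a simple classifier
Z = KernelPCA(n_components=10, kernel="rbf", gamma=1e-2).fit_transform(X)
acc = KNeighborsClassifier(n_neighbors=5).fit(Z[::2], y[::2]).score(Z[1::2], y[1::2])
print(f"hold-out accuracy on extracted features: {acc:.2f}")
```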

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 344
382 System Identification of Building Structures with Continuous Modeling

Authors: Ruichong Zhang, Fadi Sawaged, Lotfi Gargab

Abstract:

This paper introduces a wave-based approach for system identification of high-rise building structures from a pair of seismic recordings, which can be used to evaluate structural integrity and detect damage in post-earthquake structural condition assessment. The approach is grounded in wave features of the generalized impulse and frequency response functions (GIRF and GFRF), i.e., the wave responses at one structural location to an impulsive motion at another reference location, in the time and frequency domains respectively. With a pair of seismic recordings at the two locations, GFRF is obtainable as the Fourier spectral ratio of the two recordings, and GIRF is then found by the inverse Fourier transformation of GFRF. With an appropriate continuous model for the structure, a closed-form solution of GFRF, and subsequently GIRF, can also be found in terms of wave transmission and reflection coefficients, which are related to the structural physical properties above the impulse location. Matching the two sets of GFRF and/or GIRF, from the recordings and from the model, helps identify structural parameters such as wave velocity or shear modulus. For illustration, this study examines the ten-story Millikan Library in Pasadena, California, with recordings of the Yorba Linda earthquake of September 3, 2002. The building is modelled as piecewise continuous layers, from which GFRF is derived as a function of building parameters such as impedance, cross-sectional area, and damping. GIRF can then be found in closed form for some special cases and numerically in general. Not only does this study reveal the influence of building parameters on the wave features of GIRF and GFRF, it also presents system-identification results that are consistent with other vibration- and wave-based results. Finally, the paper discusses the effectiveness of the proposed model in system identification.
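
A minimal numerical sketch of the wave-feature computation described above: GFRF as the Fourier spectral ratio of two recordings and GIRF as its inverse transform. The two signals here are synthetic stand-ins for roof and basement records, and the small regularization term is an assumption added to stabilize the division.

```python
import numpy as np

dt = 0.01                                   # sampling interval (s), hypothetical
t = np.arange(0, 40, dt)
base = np.random.default_rng(2).normal(size=t.size)     # reference (e.g. basement) record
roof = np.convolve(base, np.exp(-t[:300]) * np.sin(2 * np.pi * 1.7 * t[:300]),
                   mode="full")[:t.size]                 # synthetic "roof" response

eps = 1e-3 * np.max(np.abs(np.fft.rfft(base)))           # water-level regularization
GFRF = np.fft.rfft(roof) / (np.fft.rfft(base) + eps)     # generalized frequency response
GIRF = np.fft.irfft(GFRF, n=t.size)                      # generalized impulse response

freq = np.fft.rfftfreq(t.size, dt)
print("frequency of peak |GFRF|:", freq[np.argmax(np.abs(GFRF))], "Hz")
```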

Keywords: wave-based approach, seismic responses of buildings, wave propagation in structures, construction

Procedia PDF Downloads 233
381 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products

Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto

Abstract:

An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique in order to accomplish a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage over the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated into a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data, allowing a properly calibrated measurement on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed to allow the instrument to be calibrated in the field, including on containers that are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained through an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation measurements on different containers are reported. Two test recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. The second shows the dissolution transient with a non-saturated liquid medium in the presence of a carbon dioxide rich headspace atmosphere.
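
A back-of-the-envelope sketch of the underlying absorption relation (Beer-Lambert law) behind direct TDLAS; the instrument described above uses wavelength modulation spectroscopy and an empirical calibration protocol, neither of which is reproduced here, and every number below is an illustrative placeholder.

```python
import numpy as np

I0 = 1.000        # detected intensity with no absorber (a.u.)
I = 0.982         # detected intensity through the container headspace (a.u.)
sigma_eff = 1.0e-21   # effective absorption cross-section at line centre (cm^2), placeholder
L = 3.0           # optical path length through the headspace (cm), placeholder

N = -np.log(I / I0) / (sigma_eff * L)   # absorber number density (molecules/cm^3)
n_total = 2.5e19                        # approx. total number density at ambient conditions
print(f"CO2 mole fraction ~ {N / n_total * 100:.1f} %")
```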

Keywords: TDLAS, carbon dioxide, cups, headspace, measurement

Procedia PDF Downloads 324
380 Language Politics and Identity in Translation: From a Monolingual Text to Multilingual Text in Chinese Translations

Authors: Chu-Ching Hsu

Abstract:

This paper focuses on how government-led language policies and political changes in Taiwan manipulate the choice of languages in translations, and on which translation strategies are employed by translators to show their language ideology amid power struggles and decision-making. Framed by Lefevere's theoretical concept of translating as rewriting, and carried out as a diachronic, chronological study, this paper specifically sets out to investigate the language ideology and translators' idiolect in Chinese-language translations of Anglo-American novels. The examples drawn on to explore these issues were taken from different Chinese renditions of Mark Twain's English-language novel The Adventures of Huckleberry Finn, in which several dialogues are originally written in the colloquial language and dialect of the American state of Mississippi as reproduced in Mark Twain's works. Adopting a corpus methodology, many examples are extracted from the translated texts and the source text to illuminate how translators in Taiwan deal with the dialectal features encoded in Twain's works, and how different versions of Chinese translations were used by Taiwanese translators to conform to language policies and to express their language identity textually in different periods of the past five decades, from the 1960s onward. The findings of this study suggest that the use of Taiwanese dialect and language patterns in translations is related to the mother-tongue movement and the language ideology of the translator, as well as to the issue of language identity raised on the island of Taiwan. Furthermore, this study confirms that the change of political power in Taiwan has brought a significant impact on language policy (assimilationism, pluralism or multiculturalism), which has also turned Taiwan from a monolingual into a multilingual society, where language ideology and identity are revealed not only in people's daily communication but also in written translations.

Keywords: language politics and policies, literary translation, mother-tongue, multiculturalism, translator’s ideology

Procedia PDF Downloads 394
379 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco

Abstract:

Today, a considerable segment of the world's population lives in urban areas, and this proportion will increase vastly in the next decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate during the following years. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource to assist city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of new layers of information would definitely enhance planners' capabilities to comprehend in greater depth urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining a youth economic discomfort index. This statistical composite index provides insights into the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground of the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
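
A minimal sketch of how a composite index of this kind can be assembled: indicators per urban zone are standardized and combined with weights. The indicator names, zone names, weights, and values below are hypothetical, not the study's data.

```python
import pandas as pd

zones = pd.DataFrame({
    "zone": ["Centro", "EUR", "Tiburtina", "Ostia"],
    "youth_unemployment_rate": [0.18, 0.11, 0.22, 0.16],
    "median_rent_to_income":   [0.55, 0.38, 0.42, 0.35],
    "neet_share":              [0.21, 0.12, 0.25, 0.19],
}).set_index("zone")

weights = {"youth_unemployment_rate": 0.40,
           "median_rent_to_income":   0.35,
           "neet_share":              0.25}

z = (zones - zones.mean()) / zones.std()            # z-score standardization per indicator
discomfort_index = (z * pd.Series(weights)).sum(axis=1)   # weighted sum per zone
print(discomfort_index.sort_values(ascending=False))
```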

Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index

Procedia PDF Downloads 135
378 Comparison of Multivariate Adaptive Regression Splines and Random Forest Regression in Predicting Forced Expiratory Volume in One Second

Authors: P. V. Pramila, V. Mahesh

Abstract:

Pulmonary function tests are important non-invasive diagnostic tests to assess respiratory impairments and provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, which is markedly related to the age of patients, resulting in incomplete data sets. This paper presents a nonlinear model built using multivariate adaptive regression splines (MARS) and a random forest regression model to predict the missing spirometric features. Random forest based feature selection is used to enhance both the generalization capability and the interpretability of the model. In the present study, flow-volume data were recorded for N = 198 subjects. The ranked order of feature importance calculated by the random forest model shows that the spirometric features FVC, FEF25, PEF, FEF25-75, FEF50 and the demographic parameter height are the important descriptors. A comparison of the performance of both models shows that the prediction ability of MARS with the top two ranked features, namely FVC and FEF25, is higher, yielding a model fit of R² = 0.96 and R² = 0.99 for normal and abnormal subjects, respectively. The root mean square error analysis of the RF model and the MARS model also shows that the latter is capable of predicting the missing values of FEV1 with notably lower error values of 0.0191 (normal subjects) and 0.0106 (abnormal subjects). It is concluded that combining feature selection with a prediction model provides a minimum subset of predominant features to train the model, yielding better prediction performance. This analysis can assist clinicians with an intelligent decision-support system for medical diagnosis and the improvement of clinical care.
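
A sketch of the random-forest half of the comparison above (MARS requires a third-party package and is omitted); the data frame, its synthetic values, and the target relation are placeholders standing in for the recorded flow-volume features.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 198
df = pd.DataFrame({
    "FVC": rng.normal(3.5, 0.8, n),  "FEF25": rng.normal(6.0, 1.5, n),
    "PEF": rng.normal(7.5, 1.8, n),  "FEF25_75": rng.normal(3.8, 1.2, n),
    "FEF50": rng.normal(4.5, 1.3, n), "height_cm": rng.normal(165, 10, n),
})
df["FEV1"] = 0.8 * df["FVC"] + 0.1 * df["FEF25"] + rng.normal(0, 0.1, n)   # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="FEV1"), df["FEV1"],
                                           test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("RMSE:", mean_squared_error(y_te, rf.predict(X_te)) ** 0.5)
print(pd.Series(rf.feature_importances_, index=X_tr.columns).sort_values(ascending=False))
```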

Keywords: FEV, multivariate adaptive regression splines, pulmonary function test, random forest

Procedia PDF Downloads 310
377 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data

Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali

Abstract:

The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical formulations such as Penman-Monteith (FAO56) and the soil water balance (SWB); these include soil and environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into training, test, and validation sets in several proportions, while kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. The paper also outlines existing methods and machine learning techniques for determining evapotranspiration, as well as data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictive ability of the developed model on the basis of the performance evaluation metrics (R², RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships among soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
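
A minimal sketch of the modeling step described above: an RBF-kernel support vector regressor with hyperparameter tuning on sensor-derived features. The feature names follow the abstract, but the data, split, and grid below are synthetic placeholders rather than the study's setup.

```python
import numpy as np
import pandas as pd
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(4)
n = 365
X = pd.DataFrame({
    "Rs": rng.uniform(2, 30, n), "T": rng.uniform(5, 35, n),
    "RH": rng.uniform(20, 95, n), "u2": rng.uniform(0.2, 6, n),
    "P": rng.uniform(98, 103, n), "dSM": rng.normal(0, 2, n),
})
y = 0.15 * X["Rs"] + 0.05 * X["T"] - 0.01 * X["RH"] + rng.normal(0, 0.3, n)  # stand-in ET (mm/day)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(model, {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 0.01]},
                    cv=5, scoring="r2").fit(X_tr, y_tr)
print("best params:", grid.best_params_, " test R2:", grid.score(X_te, y_te))
```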

Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors

Procedia PDF Downloads 69
376 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics

Authors: Julia Zimmerman, Gaurav Savant

Abstract:

This study evaluates the efficacy of applying the U.S. Army Corps of Engineers' River Analysis System (HEC-RAS) to modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications; however, it has not been widely applied to the study of circulation in estuaries. This report details the model development and validation of a combined 1D/2D unsteady flow hydraulic model using HEC-RAS for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to gauge data from NOAA's Center for Operational Oceanographic Products and Services (CO-OPS). Simulations were run using the Diffusion Wave Equations (DW), the Shallow Water Equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water Equations with the Eulerian Method (SWE-EM), and compared for both accuracy and the computational resources required. In general, the Diffusion Wave Equations results were comparable to those of the two Shallow Water Equations schemes while requiring less computational power. The 1D/2D combined approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. For the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the longer time frame modeled and the greater complexity of the estuarine system. The models did not accurately capture tidal effects within the 1D flow area.
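
A small sketch of the validation metric used above: the coefficient of determination between modeled and observed water surface elevations at a gauge. The two short series are illustrative stand-ins for HEC-RAS output and CO-OPS gauge data.

```python
import numpy as np

observed = np.array([0.42, 0.55, 0.61, 0.48, 0.30, 0.18, 0.25, 0.40])   # m, gauge record
modeled  = np.array([0.40, 0.57, 0.58, 0.50, 0.33, 0.15, 0.27, 0.38])   # m, simulation

ss_res = np.sum((observed - modeled) ** 2)                # residual sum of squares
ss_tot = np.sum((observed - observed.mean()) ** 2)        # total sum of squares
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```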

Keywords: Delaware bay, estuarine hydraulics, Galveston bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling

Procedia PDF Downloads 199
375 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical and other properties. The most interesting ones are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, described by the filling factor. For small filling factors the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have essentially dielectric properties. The conductivity of the films increases with increasing filling factor, and finally a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally-activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the methods of computational physics. The study consists of two parts. (1) Generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as atomic modelling, followed by characterization of the prepared composite structures by image analysis of their sections or projections; here, an analysis of the various morphological methods must be performed, since standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. (2) Study of charge transport in the composites by the kinetic Monte Carlo method, since there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of conducting paths in them, in dependence on the technology of composite film preparation.
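
An illustrative sketch of the generation step named above: a simple hard-sphere (random sequential addition) model that places non-overlapping metal particles in a 2D matrix section until a target filling factor is reached. The box size, particle radius, and target fraction are arbitrary illustrative values, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
L, r, target_fill = 100.0, 2.0, 0.25          # box size, particle radius, filling factor
area_box, area_p = L * L, np.pi * r * r

centres = []
while len(centres) * area_p / area_box < target_fill:
    c = rng.uniform(r, L - r, size=2)                         # trial particle centre
    if all(np.hypot(*(c - p)) >= 2 * r for p in centres):     # reject overlapping placements
        centres.append(c)

centres = np.array(centres)
print(f"placed {len(centres)} particles, filling factor ~ {len(centres) * area_p / area_box:.2f}")
```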

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 393
374 Global Solar Irradiance: Data Imputation to Analyze Complementarity Studies of Energy in Colombia

Authors: Jeisson A. Estrella, Laura C. Herrera, Cristian A. Arenas

Abstract:

The Colombian electricity sector has been transforming through the insertion of new energy sources for electricity generation, one of them being solar energy, which is being promoted by companies interested in photovoltaic technology. The study of this technology is important for electricity generation in general and for the planning of the sector from the perspective of energy complementarity. It is precisely in this last approach that the project is located: we are interested in answering concerns about the reliability of the electrical system when climatic phenomena such as El Niño occur, and in defining whether it is viable to replace or expand thermoelectric plants with renewable electricity generation systems. In this regard, some difficulties related to the basic information on renewable energy sources from measured data must first be solved, since these data come from automatic weather stations administered by the Institute of Hydrology, Meteorology and Environmental Studies (IDEAM) and, in the study period (2005-2019), have significant amounts of missing data. For this reason, the overall objective of the project is to complete the global solar irradiance data sets to obtain time series that will allow energy complementarity analyses to be developed in a subsequent project. The filling of the databases will be done through numerical and statistical methods, which are basic techniques for undergraduate students in technical areas who are starting out as researchers.
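
A minimal sketch of one possible gap-filling step: short gaps are interpolated in time and longer gaps are filled with an hour-of-day climatological mean. The synthetic series, thresholds, and gap lengths are illustrative assumptions, not IDEAM data or the project's chosen method.

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2019-01-01", periods=24 * 10, freq="h")
ghi = pd.Series(np.clip(800 * np.sin(np.pi * (idx.hour - 6) / 12), 0, None), index=idx)
ghi.iloc[30:33] = np.nan          # a short gap (3 h)
ghi.iloc[100:140] = np.nan        # a long gap (40 h)

filled = ghi.interpolate(method="time", limit=6)        # fill at most 6 consecutive hours
clim = ghi.groupby(ghi.index.hour).transform("mean")    # hour-of-day climatology
filled = filled.fillna(clim)                            # long gaps get the climatological value

print("remaining missing values:", int(filled.isna().sum()))
```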

Keywords: time series, global solar irradiance, imputed data, energy complementarity

Procedia PDF Downloads 71
373 A New Direction of Urban Regeneration: Form-Based Urban Reconstruction through the Idea of Bricolage

Authors: Hyejin Song, Jin Baek

Abstract:

Based on the idea of bricolage, i.e. that a new meaning beyond that of each individual object can be created through the combination and juxtaposition of various objects, this study finds a way of morphologically recomposing urban space through the combination and juxtaposition of existing urban fabric and new fabric, and suggests this idea as a new direction for urban regeneration. This study sets the concept of bricolage as a philosophical ground for interpreting the contemporary urban situation. In this concept, urban objects such as buildings from various zeitgeists are positively considered as potential textures which can construct a meaningful context. Seoul, a city with a long history that has experienced colonization and development, exhibits a dynamic urban structure full of various objects from various periods. However, in contrast with successful plazas and streets in Europe, the objects in Seoul do not form a meaningful context as public space due to thoughtless development. This study defines this situation as 'disorganized fabric'. Following the concept of bricolage, to find a way for those existing scattered objects to be organized as the context of meaningful public space, this study first researches cases of successful public space by morphological analysis. Second, this study carefully explores urban space in Seoul and draws figure-ground diagrams to grasp the form of the current urban fabric made up of various urban objects. As a result of this exploration, many urban spaces, from Myeong-dong, one of the most vibrant commercial districts in Seoul, to declining residential areas, are judged to have potential fabric which can become a meaningful context with only small adjustments of the relationships between existing objects. This study also confirms that by inserting a new object with consideration of the form of the existing fabric, it is possible to give a new context, as a plaza, to an existing void which has been broken into several parts. This study defines this as form-based urban reconstruction through the idea of bricolage, and suggests that it could be one philosophical ground for successful urban regeneration.

Keywords: adjustment of relationship between existing objets, bricolage, morphological analysis of urban fabric, urban regeneration, urban reconstruction

Procedia PDF Downloads 318
372 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment

Authors: Antonios Paraskevas, Michael Madas

Abstract:

For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and innovative teaching and learning practices of high quality. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency and reputation of an academic institute. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision-makers can utilize our approach in order to arrive at an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work and thus addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with similar situations. As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
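
An illustrative sketch of the classical (crisp) AHP core that the neutrosophic extension builds on: criterion weights as the principal eigenvector of a pairwise comparison matrix, plus a consistency check. The comparison values and criteria are hypothetical; the neutrosophic and Delphi layers of the proposed method are not shown.

```python
import numpy as np

# Pairwise comparisons for, say, {research record, teaching, service}
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority vector (criterion weights)

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # Saaty random index for n = 3 is 0.58
print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```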

Keywords: multi-criteria decision making methods, analytical hierarchy process, delphi method, personnel recruitment, neutrosophic set theory

Procedia PDF Downloads 117
371 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning

Authors: Michael A. Sprayberry, Vincent C. Paquit

Abstract:

Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design-of-experiments and process-mapping techniques, a successful on-the-fly optimization framework that can be adapted to MPBEBM systems is still lacking. Additionally, data-intensive physics-based modeling and simulation methods are difficult to sustain for every metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that frames MPBEBM parameter selection as an optimization problem. An off-policy MORL framework based on policy gradients is proposed to discover optimal sets of beam power (P) and beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P-v space to maximize returns in the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat} in which the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resulting objectives and the mapping of returns over the P-v space show convergence with experimental observations. The framework therefore provides a model-free, multi-objective approach to discovery without the need for trial-and-error experiments.
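
A toy sketch of the policy-gradient idea only: a softmax policy over a discrete P-v grid updated with a REINFORCE-style rule against a scalarized reward. The melt-pool "simulator" here is a crude surrogate, not the Eagar-Tsai model, and the single scalarized reward simplifies the multi-objective setting described above.

```python
import numpy as np

rng = np.random.default_rng(6)
P = np.linspace(100, 400, 7)        # candidate beam powers (W), placeholder grid
V = np.linspace(0.2, 2.0, 7)        # candidate beam velocities (m/s), placeholder grid
actions = [(p, v) for p in P for v in V]
theta = np.zeros(len(actions))      # policy logits

target_depth = 0.12                 # desired melt-pool depth (mm), placeholder

def surrogate_depth(p, v):          # crude stand-in for a melt-pool model
    return 0.001 * p / np.sqrt(v) + rng.normal(0, 0.005)

def reward(p, v):                   # scalarized objective: stay close to target depth
    return -abs(surrogate_depth(p, v) - target_depth)

lr, baseline = 0.5, 0.0
for step in range(2000):
    probs = np.exp(theta - theta.max()); probs /= probs.sum()   # softmax policy
    a = rng.choice(len(actions), p=probs)
    r = reward(*actions[a])
    baseline += 0.05 * (r - baseline)             # running baseline for variance reduction
    grad = -probs; grad[a] += 1.0                 # gradient of log softmax at action a
    theta += lr * (r - baseline) * grad           # REINFORCE update

best = actions[int(np.argmax(theta))]
print(f"preferred (P, v) after training: {best[0]:.0f} W, {best[1]:.2f} m/s")
```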

Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization

Procedia PDF Downloads 91
370 Nabokov’s Lolita: Externalization of Contemporary Mind in the Configuration of Hedonistic Aesthetics

Authors: Saima Murtaza

Abstract:

Ethics and aesthetics have invariably remained two closely integrated artistic appurtenances in the production of any work of art. These artistic devices configure themselves into a complex synthesis in contemporary literature. The labyrinthine integration of ethics and aesthetics, operating in the lives of human characters to the extent of transcending all limits, has resulted in an artistic puzzle for readers. Art, no doubt, is an extrinsic expression of the intrinsic life of man. The use of aesthetics in literature pertaining to human existence, that is, aesthetic solipsism, has resulted in the artistic objectification of characters. The practice of such aestheticism deprives the characters of their souls, rendering them mere objects of aesthetic gaze at the hands of their artist-creators. Artists orchestrate their lives, founding them on a plot which deviates from normal social and ethical standards. Their perverse attitude can be seen in their dealing with characters, their feelings and the incidents of their lives. Morality is made to appear not as a religious construct but as an individual's private affair. Furthermore, the idea of beauty incarnated, in other words the hedonistic aesthetic, does not placate a true aesthete. Ethics and aesthetics are two of the most recurring motifs of contemporary literature, especially of Nabokov's world. The purpose of this study is to examine these motifs in Nabokov's most enigmatic novel, Lolita, a story of pedophilia, which is in fact reflective of our complex individual psychic and societal patterns. The narrative subverts all the traditional and hitherto known notions of aesthetics and ethics. When applied to literature, 'aesthetic' does not simply mean 'beautiful' in the text; it refers to an intricate relationship between feeling and perception and also incorporates within its range wide-ranging emotional reactions to the text. The term aesthetics in literature is connected with the readers, whose critical responses to the text determine the merit of any work as a piece of art. Aestheticism is the child of ethics: morality sets the grounds for the production of any work, and the idea of aesthetics gives it transcendence.

Keywords: ethics, aesthetics and hedonistic aesthetic, nymphet syndrome, pedophilia

Procedia PDF Downloads 158
369 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background

Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong

Abstract:

Ancient books are significant carriers of cultural heritage, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient texture samples and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to look fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision, and breakthroughs in the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a system combining layout analysis and image fusion. First, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we propose to adjust the layout structure of the foreground content; finally, we achieved our goal by fusing the rearranged foreground texts with the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen, and seal script were selected as our training sets. The performance of different fine-tuned models was then gradually improved by adjusting the parameters as well as the structure of the DCGAN model. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
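
A minimal DCGAN-style generator (PyTorch), shown only to illustrate the kind of network adjusted in the experiments above; the authors' exact architecture, training procedure, and the Position Rearrangement step are not reproduced here, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector to a 64x64 single-channel texture image."""
    def __init__(self, z_dim=100, feat=64, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),     # -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # -> 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # -> 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),      # -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

g = Generator()
fake_texture = g(torch.randn(8, 100, 1, 1))    # a batch of synthetic background textures
print(fake_texture.shape)                      # torch.Size([8, 1, 64, 64])
```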

Keywords: deep learning, image fusion, image generation, layout analysis

Procedia PDF Downloads 157
368 Evaluate the Effect of Teaching Small Scale Business and Entrepreneurship on Graduate Unemployment in Nigeria: A Case Study of Anambra and Enugu State, South East Nigeria

Authors: Erinma Chibuzo Nwandu

Abstract:

Graduate unemployment has risen astronomically in spite of the emphasis on the teaching of small-scale business and entrepreneurship in schools. This study sets out to evaluate the effect of teaching small-scale business and entrepreneurship on graduate unemployment in Nigeria. The study adopted a survey research design; thus the data for this study are primary, sourced by means of a questionnaire administered to a sample of two thousand and sixty-five (2065) respondents drawn from groups of graduates who are employed, unemployed and self-employed in South East Nigeria. Simple percentages, chi-square and regression analysis were used to derive useful and meaningful information and to test the hypotheses. Findings from the study suggest that Nigerian graduates are ill-prepared to embark on small-scale business and entrepreneurship after graduation, and that the teaching of small-scale business and entrepreneurship in Nigerian tertiary institutions has not been effective in reducing graduate unemployment. Findings also suggest that while many graduates agreed that they had taken a class or classes on small-scale business or entrepreneurship, they received more theoretical than practical teaching; moreover, while teaching on small-scale business or entrepreneurship motivated graduates to think of self-employment, most of them cannot prepare a good business plan and hence could not benefit from government-assisted programmes for small-scale business or from bank loans for small-scale business. Thus, many graduates are not interested in small-scale business or entrepreneurship development as a result of a lack of start-up capital. The study therefore recommends that the course content and teaching methods of entrepreneurship education be reviewed and restructured to include more practical than theoretical teaching. Graduates should also be exposed to seminars/workshops on self-employment at least once every semester. There should be practical teaching and practice in developing a business plan viable enough to attract government or private sponsorship as well as financing from financial institutions. Government should provide a fund, such as a venture capital financing arrangement, to empower business start-ups by graduates in Nigeria.

Keywords: entrepreneurship, small scale business, startup capital, unemployment

Procedia PDF Downloads 280
367 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment

Authors: Antonios Paraskevas, Michael Madas

Abstract:

For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and to innovative teaching and learning practices of high quality. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency, and reputation of an academic institution. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision makers can utilize our approach to reach an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable for the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work and thereby addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations. As a further result, we show that our method demonstrates greater applicability and reliability when compared with other decision models.
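
As a simplified illustration of the underlying AHP machinery (shown here in its classical crisp form, without the neutrosophic truth, indeterminacy, and falsity degrees the paper adds), the sketch below derives priority weights and a consistency ratio from a pairwise comparison matrix. The matrix values are illustrative only and are not taken from the case study.

```python
import numpy as np

# Pairwise comparison matrix over, say, three recruitment criteria
# (illustrative values on Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights from the normalized row geometric means
geo_means = np.prod(A, axis=1) ** (1.0 / A.shape[0])
weights = geo_means / geo_means.sum()

# Consistency check: lambda_max, consistency index, consistency ratio
lambda_max = float(np.mean((A @ weights) / weights))
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)
ri = 0.58  # random index for n = 3 from Saaty's table
cr = ci / ri

print("weights:", np.round(weights, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```

In the neutrosophic extension described in the abstract, each comparison would instead carry truth, indeterminacy, and falsity membership degrees that are aggregated (together with the N-DM expert weights) before a crisp priority vector of this kind is obtained.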

Keywords: analytical hierarchy process, delphi method, multi-criteria decision making method, neutrosophic set theory, personnel recruitment

Procedia PDF Downloads 200
366 Moved by Music: The Impact of Music on Fatigue, Arousal and Motivation During Conditioning for High to Elite Level Female Artistic Gymnasts

Authors: Chante J. De Klerk

Abstract:

The potential of music to facilitate superior performance during high to elite level gymnastics conditioning instigated this research. A team of seven gymnasts completed a fixed conditioning programme eight times, alternating between the two conditions. Four sessions of each condition were conducted: without music (session 1), with music (session 2), without music (session 3), with music (session 4), without music (session 5), and so forth. Quantitative data were collected in both conditions through physiological monitoring of the gymnasts and administration of the Situational Motivation Scale (SIMS). Statistical analysis of the physiological data made it possible to quantify both the presence and the magnitude of the musical intervention's impact on various aspects of the gymnasts' physiological functioning during conditioning. The SIMS questionnaire results were used to evaluate whether their motivation towards conditioning was altered by the intervention. Thematic analysis of qualitative data collected through semi-structured interviews revealed themes reflecting the gymnasts' sentiments towards the data collection process. Gymnast-specific descriptions and the experiences of the team as a whole were integrated with the quantitative data to add dimension in establishing the impact of the intervention. The results showed positive physiological, motivational, and emotional effects. In the presence of music, superior sympathetic nervous activation and energy efficiency, with more economical breathing, dominated the physiological data. Fatigue and arousal levels (emotional and physiological) were also more conducive to improved conditioning outcomes than in conventional conditioning (without music). Greater levels of positive affect and motivation emerged in the analysis of both the SIMS and interview data sets. Overall, the intervention was found to promote psychophysiological coherence during physical activity. In conclusion, a strategically constructed musical intervention, designed to accompany a gymnastics conditioning session for high to elite level gymnasts, has ergogenic potential.

Keywords: arousal, fatigue, gymnastics conditioning, motivation, musical intervention, psychophysiological coherence

Procedia PDF Downloads 94
365 Facilitating Factors for the Success of Mobile Service Providers in Bangkok Metropolitan

Authors: Yananda Siraphatthada

Abstract:

The objectives of this research were to study the levels of the influencing factors (leadership, supply chain management, innovation, and competitive advantage), the level of business success, and the factors affecting the business success of mobile phone system service providers in Bangkok Metropolitan. The research employed both quantitative and qualitative approaches. In the quantitative approach, questionnaires were used to collect data from 331 mobile service shop managers franchised by AIS, Dtac, and TrueMove. The shop managers were selected by stratified random sampling with proportional allocation according to the number of providers in each network. For the qualitative approach, in-depth interviews were conducted with six mobile service providers/managers of Telewiz, Dtac, and TrueMove shops, and the content analysis method was used to establish agreement or disagreement with the quantitative results. Descriptive statistics, including frequency, percentage, mean, and standard deviation, were employed, and the Structural Equation Model (SEM) was used as a tool for data analysis. The content analysis method was applied to identify key patterns emerging from the interview responses. The two data sets were brought together for comparison and contrast, providing triangulation to enrich the interpretation of the results. The analysis revealed that the influencing factors (leadership, innovation management, supply chain management, and business competitiveness) had an impact at a great level, while the levels of innovation and of the financial and non-financial business success of the mobile phone system service providers in Bangkok Metropolitan were at the highest level. Moreover, the factors influencing competitive advantage in the business of mobile system service providers (leadership, supply chain management, innovation management, business advantage, and business success) were statistically significant at the .01 level, which corresponded with the data from the interviews.
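
The proportional allocation step described above can be illustrated with a short sketch. The per-network population counts used here are hypothetical placeholders, since the abstract does not report them.

```python
# Proportional allocation of the 331-manager sample across network strata.
# Population counts per network are hypothetical, for illustration only.
population = {"AIS": 900, "Dtac": 650, "TrueMove": 520}
total_sample = 331

total_pop = sum(population.values())
allocation = {net: round(total_sample * n / total_pop) for net, n in population.items()}
print(allocation)  # e.g. {'AIS': 144, 'Dtac': 104, 'TrueMove': 83}
# In practice, rounding may require a small manual adjustment so that the
# stratum sizes sum exactly to the intended total sample.
```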

Keywords: mobile service providers, facilitating factors, Bangkok Metropolitan, business success

Procedia PDF Downloads 348
364 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements. Since this is impossible, points are measured at regular intervals to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been widely used in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for DTM construction. In recent years, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications, especially because of its advantages: a 3D point cloud is created with LiDAR technology by obtaining numerous point measurements. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by a random algorithm, representing 75, 50, 25, and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can reduce image-based point cloud data sets to the 50% density level while still maintaining the quality of the DTM.
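
A minimal sketch of the reduction-and-comparison workflow is given below. It uses a synthetic point cloud and SciPy's griddata as a simple stand-in for the Kriging interpolation applied in the study, so the numbers it prints are purely illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)

# Synthetic image-based point cloud (x, y, z); in practice this would be the
# UAV-derived cloud over the test area.
n_points = 100_000
xy = rng.uniform(0, 1000, size=(n_points, 2))
z = 50 * np.sin(xy[:, 0] / 200) + 30 * np.cos(xy[:, 1] / 150) + rng.normal(0, 0.5, n_points)

# Regular grid on which each DTM is interpolated
gx, gy = np.meshgrid(np.linspace(0, 1000, 200), np.linspace(0, 1000, 200))

# Reference DTM from the full (100%) data set. griddata with linear
# interpolation is used here as a simple stand-in for Kriging.
dtm_full = griddata(xy, z, (gx, gy), method="linear")

for fraction in (0.75, 0.50, 0.25, 0.05):
    keep = rng.choice(n_points, size=int(n_points * fraction), replace=False)
    dtm_reduced = griddata(xy[keep], z[keep], (gx, gy), method="linear")
    rmse = np.sqrt(np.nanmean((dtm_reduced - dtm_full) ** 2))
    print(f"{int(fraction * 100):>3}% of points -> RMSE vs. full DTM: {rmse:.3f} m")
```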

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 155
363 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications

Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert

Abstract:

Due to the high complexity of the magnetization in electrical machines and the influence of manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses have remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on the material supplier's data from standard magnetic measurements. This type of data includes neither the additional loss from non-sinusoidal, multi-harmonic motor excitation nor the detrimental effects of residual stress remaining in the motor laminations after manufacturing processes such as punching, housing shrink fitting, and winding. Moreover, in production, considerable attention is given to measuring the mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of the assembled motors. Therefore, to enable a comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of various factors, such as elevated temperature, applied and residual stress, and arbitrary magnetization, on the magnetic properties of different grades of non-oriented steel. The measured power loss for the tested samples and stator cores varied significantly, by more than 100%, compared to standard measurement conditions. The quantitative effects of each of the applied measurement conditions were analyzed. This research and the applied Brockhaus measurement methodologies emphasize the need for advanced characterization of the magnetic materials used in electric drives and automotive applications.
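
For context, core loss is commonly decomposed with a classical Bertotti-style loss-separation model. The sketch below illustrates that decomposition with illustrative coefficients; it is not the Brockhaus measurement methodology described in this work, and the coefficient values are not measured data.

```python
# Classical Bertotti loss separation: hysteresis + classical eddy + excess loss.
# Coefficients are illustrative placeholders, not values from this study.
def core_loss(f_hz, b_peak_t, k_h=0.02, k_c=5e-5, k_e=2e-4):
    hysteresis = k_h * f_hz * b_peak_t ** 2          # hysteresis loss term
    classical_eddy = k_c * (f_hz * b_peak_t) ** 2    # classical eddy current term
    excess = k_e * (f_hz * b_peak_t) ** 1.5          # excess (anomalous) loss term
    return hysteresis + classical_eddy + excess       # specific loss in W/kg

print(f"P(50 Hz, 1.5 T)  = {core_loss(50, 1.5):.2f} W/kg")
print(f"P(400 Hz, 1.0 T) = {core_loss(400, 1.0):.2f} W/kg")
```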

Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores

Procedia PDF Downloads 141