Search results for: hierarchical text classification models
7644 Flexural Behavior of Composite Hybrid Beam Models Combining Steel Inverted T-Section and RC Flange
Authors: Abdul Qader Melhem, Hacene Badache
Abstract:
This paper deals with the theoretical and experimental study of shear connection via simple steel reinforcement shear connectors, which are steel reinforcing bars bent into L-shapes, instead of the commonly used headed studs. These suggested L-shape connectors are made from steel reinforcement, a readily available construction material. The composite section, therefore, consists of a steel inverted T-section embedded within a lightly reinforced concrete flange at the top slab, acting as a unit. It should be noted that the cross section of these composite models involves a steel inverted T-beam, replacing the top flange of the standard, commonly employed I-beam section. The paper concentrates on the elastic and elastic-plastic behavior of these composite models. Failure modes, whether by cracking of the concrete or failure of the shear connection, are investigated in detail. Elastic and elastoplastic formulas of the composite model have been computed for different locations of the neutral axis (NA). A deflection formula has been derived; its value was close to the test value. A supportive design curve is also provided, which is valuable for both design engineers and researchers. Finally, suggested design curves and useful equations are presented, and a check is made between theoretical and experimental outcomes.
Keywords: composite, elastic-plastic, failure, inverted T-section, L-shape connectors
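For context, the classical elastic mid-span deflection relation that such composite-beam derivations typically start from is sketched below; the use of a transformed moment of inertia is an assumption made here for illustration and is not the authors' derived formula.

```latex
% Elastic mid-span deflection of a simply supported beam under a central
% point load P (reference relation only, not the paper's derived formula):
\[
  \delta_{\max} = \frac{P L^{3}}{48\, E I_{tr}}
\]
% L: span, E: steel elastic modulus, I_tr: transformed moment of inertia of
% the composite section about the neutral axis (assumed notation).
```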
Procedia PDF Downloads 227
7643 Analysis of Expert Information in Linguistic Terms
Authors: O. Poleshchuk, E. Komarov
Abstract:
In this paper, semantic spaces with the properties of completeness and orthogonality (complete orthogonal semantic spaces) were chosen as models of expert evaluations. As theoretical and practical studies have shown, all the properties of complete orthogonal semantic spaces correspond to the thinking activity of experts, which is why these semantic spaces were chosen for modeling. Two methods of constructing such spaces are proposed. Models of comparative and fuzzy cluster analysis of expert evaluations were developed. The practical application of the developed methods has demonstrated their viability and validity.
Keywords: expert evaluation, comparative analysis, fuzzy cluster analysis, theoretical and practical studies
Procedia PDF Downloads 531
7642 Proposal of Design Method in the Semi-Acausal System Model
Authors: Shigeyuki Haruyama, Ken Kaminishi, Junji Kaneko, Tadayuki Kyoutani, Siti Ruhana Omar, Oke Oktavianty
Abstract:
This study proposes a definition method for value and function in the manufacturing sector. In the ongoing discussion about the present state of modeling methods, the definition of 1D-CAE has so far remained ambiguous and not conceptual. Across the physics fields, such methods are defined through the formulation of differential algebraic equations, applying only time derivation and simulation. Here, we propose a semi-acausal modeling concept together with a differential algebraic equation method as a new modeling approach, whose efficiency has been verified through the comparison of numerical analysis results between the semi-acausal modeling calculation and an FEM reference calculation.
Keywords: system model, physical models, empirical models, conservation law, differential algebraic equation, object-oriented
Procedia PDF Downloads 485
7641 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique
Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu
Abstract:
Recently, medical imaging, and specifically medical image processing, has become one of the most dynamically developing areas of medical science. It has led to the emergence of new approaches to the prevention, diagnosis, and treatment of various diseases. In the process of diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. Identification and demarcation of masses, in terms of detecting cancer within lung tissue, are critical challenges in diagnosis. In this work, a segmentation system based on image processing techniques has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm are presented through simulation, performed on CT images using multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features with useful information are selected after feature extraction. Eventually, the output image of lung cancer is obtained with 96.3% accuracy and 87.25%. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, including further details available to machine vision systems to recognise objects in lung CT scan images.
Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing
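As an illustration of the multilevel-thresholding segmentation step mentioned above, a minimal Python sketch is given below; the synthetic stand-in image, the number of classes and the crude candidate mask are assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch: multilevel (Otsu) thresholding of a CT-like image into
# three intensity classes, followed by a crude candidate-mass mask.
import numpy as np
from skimage.filters import threshold_multiotsu

# Synthetic stand-in for a lung CT slice: three intensity populations plus noise.
rng = np.random.default_rng(0)
ct_slice = np.concatenate([rng.normal(0.15, 0.03, 40000),   # background / air
                           rng.normal(0.45, 0.05, 20000),   # parenchyma
                           rng.normal(0.80, 0.04, 4000)])    # dense candidate tissue
ct_slice = np.clip(rng.permutation(ct_slice), 0, 1).reshape(200, 320)

# Two thresholds split the slice into three intensity classes.
thresholds = threshold_multiotsu(ct_slice, classes=3)

# Label every pixel with the index of the intensity class it falls into.
regions = np.digitize(ct_slice, bins=thresholds)

# Keep only the brightest class as a crude candidate mask; a real system would
# follow this with feature extraction, feature selection and classification.
candidate_mask = regions == 2
print("thresholds:", np.round(thresholds, 3), "candidate pixels:", int(candidate_mask.sum()))
```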
Procedia PDF Downloads 101
7640 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment (Full Text)
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit qualified academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and innovative teaching and learning practices of high quality. In this vein, the appropriate selection of academic staff constitutes a very important factor in the competitiveness, efficiency and reputation of an academic institute. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision makers could utilize our approach in order to arrive at an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work and thus addressing the inherent ineffectiveness which becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations. As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: analytical hierarchy process, Delphi method, multi-criteria decision making method, neutrosophic set theory, personnel recruitment
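For orientation, a minimal sketch of the crisp (non-neutrosophic) AHP prioritization step that underlies the N-AHP is shown below; the pairwise comparison values and the three criteria are assumed for illustration only.

```python
# Minimal sketch of classical AHP: derive criterion weights from a pairwise
# comparison matrix via its principal eigenvector and check consistency.
import numpy as np

# Pairwise comparisons of three hypothetical criteria
# (e.g., research record, teaching quality, service).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)             # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
print("weights:", weights.round(3), "CR:", round(ci / ri, 3))
```

A consistency ratio (CR) below roughly 0.1 is conventionally taken to mean the expert's pairwise judgments are acceptably consistent.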
Procedia PDF Downloads 200
7639 A Neural Network Approach to Understanding Turbulent Jet Formations
Authors: Nurul Bin Ibrahim
Abstract:
Advancements in neural networks have offered valuable insights into fluid dynamics, notably in addressing turbulence-related challenges. In this research, we introduce multiple applications of neural network models, namely feed-forward and recurrent neural networks, to explore the relationship between jet formations and stratified turbulence within stochastically excited Boussinesq systems. Using machine learning tools such as TensorFlow and PyTorch, the study has created models that effectively mimic and reveal the underlying features of the complex patterns of jet formation and stratified turbulence. These models do more than help us understand these patterns; they also offer a faster way to solve problems in stochastic systems, improving upon traditional numerical techniques for solving stochastic differential equations, such as the Euler-Maruyama method. In addition, the research includes a thorough comparison with the Statistical State Dynamics (SSD) approach, which is a well-established method for studying chaotic systems. This comparison helps evaluate how well neural networks can help us understand the complex relationship between jet formations and stratified turbulence. The results of this study underscore the potential of neural networks in computational physics and fluid dynamics, opening up new possibilities for more efficient and accurate simulations in these fields.
Keywords: neural networks, machine learning, computational fluid dynamics, stochastic systems, simulation, stratified turbulence
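As a point of reference for the numerical baseline mentioned above, a minimal sketch of the Euler-Maruyama scheme applied to a generic stochastically forced scalar system is given below; the coefficients are assumptions, and the system is not the Boussinesq model itself.

```python
# Euler-Maruyama integration of dx = -a*x dt + sigma dW (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 1.0, 0.3            # damping and noise amplitude (assumed)
dt, n_steps = 1e-3, 10_000

x = np.empty(n_steps + 1)
x[0] = 0.0
for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))             # Brownian increment
    x[i + 1] = x[i] - a * x[i] * dt + sigma * dW  # Euler-Maruyama update

print("sample mean:", round(x.mean(), 4), "sample std:", round(x.std(), 4))
```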
Procedia PDF Downloads 70
7638 Nonlinear Model Predictive Control of Water Quality in Drinking Water Distribution Systems with DBPs Objectives
Authors: Mingyu Xie, Mietek Brdys
Abstract:
The paper develops a nonlinear model predictive control (NMPC) of water quality in drinking water distribution systems (DWDS) based on an advanced nonlinear quality dynamics model that includes disinfection by-products (DBPs). Special attention is paid to the analysis of the impact of the flow trajectories prescribed by an upper control level of the recently developed two-time-scale architecture for integrated quality and quantity control in DWDS. The new quality controller is to operate within this architecture in the fast time scale as the lower-level quality controller. The controller performance is validated by a comprehensive simulation study based on an example case-study DWDS.
Keywords: model predictive control, hierarchical control structure, genetic algorithm, water quality with DBPs objectives
Procedia PDF Downloads 317
7637 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium
Authors: Janne Engblom, Elias Oikarinen
Abstract:
The understanding of housing price dynamics is of importance to a great number of agents: to portfolio investors, banks, real estate brokers and construction companies, as well as to policy makers and households. A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide estimates and to compare them between 1980-1999 and 2000-2012. Based on data from 50 U.S. cities over 1980-2012, differences in the short-run housing price dynamics estimates were mostly significant when the two time periods were compared. Significance tests of the differences were provided by the model containing interaction terms of the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, which indicates a good fit of the CCE estimator model. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. The results also suggest that the dynamics of a housing market evolve over time.
Keywords: dynamic model, panel data, cross-sectional dependence, interaction model
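A minimal sketch of the Common Correlated Effects idea on synthetic panel data is given below: per-period cross-sectional averages of the dependent variable and the regressors are added as extra regressors to absorb unobserved common factors before a pooled OLS fit. The variable names and the data-generating process are assumptions, not the study's dataset or exact estimator.

```python
# Pooled CCE sketch on synthetic city-year panel data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
cities, years = 50, 33
factor = rng.normal(size=years)                     # unobserved common factor

rows = []
for i in range(cities):
    load = rng.normal()                             # city-specific factor loading
    income = rng.normal(size=years) + 0.5 * factor
    price = 0.6 * income + load * factor + rng.normal(scale=0.3, size=years)
    rows.append(pd.DataFrame({"city": i, "year": np.arange(years),
                              "d_price": price, "d_income": income}))
panel = pd.concat(rows, ignore_index=True)

# Cross-sectional (per-year) averages act as proxies for the common factor.
avgs = panel.groupby("year")[["d_price", "d_income"]].transform("mean")
X = sm.add_constant(pd.concat([panel[["d_income"]],
                               avgs.add_suffix("_csavg")], axis=1))
cce = sm.OLS(panel["d_price"], X).fit()
print(cce.params)
```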
Procedia PDF Downloads 251
7636 The Making of a Yijing (Classic of Changes) Cultural Sphere in Asia
Authors: Ng Wai Ming
Abstract:
The Yijing (Classic of Changes) is one of the most influential Chinese classics, and its text, images and divination have been widely studied and used by different people in the world from past to present. Its impact in Asia has been particularly strong due to cultural and geographical proximity. Based on many years of textual study of the history of the Yijing in the Sinosphere, the author attempts to identify various levels of acceptance and localization of the Yijing in different Asian regions, including Japan, Korea, the Ryukyu Kingdom, Vietnam, Mongolia and Tibet. The paper develops a new concept of a “Yijing cultural sphere” to explain the popularization and indigenization of the Yijing in Asia.
Keywords: Classic of Changes, Asia, Sinosphere, localization
Procedia PDF Downloads 62
7635 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using convolutional neural networks (CNNs), based on the choice of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, for the unseen data across all different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' parameter counts and mean average error rates, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
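In the spirit of the comparison above, a minimal Keras sketch that sweeps loss functions and optimizers over a small CNN is given below; the tiny architecture, input shape and random stand-in data are assumptions, and center loss is omitted because it is not a built-in Keras loss.

```python
# Sweep loss functions and optimizers over a small CNN on stand-in data.
import numpy as np
import tensorflow as tf

def make_cnn(num_classes=2, input_shape=(96, 96, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

losses = {"categorical_crossentropy": "categorical_crossentropy",
          "hinge": "hinge",
          "cosine_similarity": tf.keras.losses.CosineSimilarity()}
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

# Random stand-in data: 64 fake fingerprint patches, live vs. spoof.
x = np.random.rand(64, 96, 96, 1).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 64), 2)

for loss_name, loss in losses.items():
    for opt in optimizers:
        model = make_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        hist = model.fit(x, y, epochs=1, batch_size=16, verbose=0)
        print(loss_name, opt, "acc:", round(hist.history["accuracy"][-1], 3))
```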
Procedia PDF Downloads 136
7634 Lung HRCT Pattern Classification for Cystic Fibrosis Using a Convolutional Neural Network
Authors: Parisa Mansour
Abstract:
Cystic fibrosis (CF) is one of the most common autosomal recessive diseases among whites. It mostly affects the lungs, causing infections and inflammation that account for 90% of deaths in CF patients. Because of the high variability in clinical presentation and organ involvement, investigating treatment responses and evaluating lung changes over time is critical to preventing CF progression. High-resolution computed tomography (HRCT) greatly facilitates the assessment of lung disease progression in CF patients. Recently, artificial intelligence has been used to analyze chest CT scans of CF patients. In this paper, we propose a convolutional neural network (CNN) approach to classify CF lung patterns in HRCT images. The proposed network consists of two convolutional layers with 3 × 3 kernels and max pooling in each layer, followed by two dense layers with 1024 and 10 neurons, respectively. A softmax layer produces the predicted probability distribution over the classes; it has three outputs corresponding to the categories normal (healthy), bronchitis and inflammation. To train and evaluate the network, we constructed a patch-based dataset extracted from more than 1100 lung HRCT slices obtained from 45 CF patients. Comparative evaluation showed the effectiveness of the proposed CNN compared to its close peers. Classification accuracy, average sensitivity and specificity of 93.64%, 93.47% and 96.61% were achieved, indicating the potential of CNNs in analyzing lung CF patterns and monitoring lung health. In addition, the visual features extracted by our proposed method can be useful for automatic measurement and, ultimately, evaluation of the severity of CF patterns in lung HRCT images.
Keywords: HRCT, CF, cystic fibrosis, chest CT, artificial intelligence
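A minimal Keras sketch of a patch-based CNN along the lines described above (two 3 × 3 convolutional layers with max pooling, a 1024-unit dense layer and a softmax output over the three classes) is given below; the patch size, filter counts and compile settings are assumptions not taken from the paper.

```python
# Patch-based CNN sketch for three-class HRCT pattern classification.
import tensorflow as tf

patch_shape = (32, 32, 1)  # hypothetical HRCT patch size

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu",
                           input_shape=patch_shape),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # normal / bronchitis / inflammation
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```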
Procedia PDF Downloads 65
7633 Understanding the Role of Social Entrepreneurship in Building Mobility of a Service Transportation Models
Authors: Liam Fassam, Pouria Liravi, Jacquie Bridgman
Abstract:
Introduction: The way we travel is rapidly changing; car ownership and use are declining among young people and residents of urban areas. Also, the increasing role and popularity of sharing economy companies like Uber highlight a movement towards consuming transportation solutions as a service [Mobility of a Service]. This research looks to bridge the knowledge gap that exists between city mobility, smart cities, the sharing economy and social entrepreneurship business models. Understanding of this subject is crucial for smart city design, as access to affordable transport has been identified as a contributing factor to social isolation, leading to issues around health and wellbeing. Methodology: To explore the current fit vis-a-vis transportation business models and social impact, this research undertook a comparative analysis between a systematic literature review and a Delphi study. The systematic literature review was undertaken to gain an appreciation of the current academic thinking on ‘social entrepreneurship and smart city mobility’. The second phase of the research initiated a Delphi study across a group of 22 participants to review future opinion on ‘how can social entrepreneurship assist city mobility sharing models?’. The Delphi delivered an initial 220 results, which, once cross-checked for duplication, resulted in 130. These 130 answers were sent back to participants to score importance against a 5-point Likert scale, enabling a top-10 listing of areas for shared user transport in society to be gleaned. One further round (round 4) identified no change in the coefficient of variation, so no further rounds were required. Findings: Initial results of the literature review returned 1,021 journals using the search criteria ‘social entrepreneurship and smart city mobility’. Filtering by ‘peer review’, ‘date’, ‘region’ and ‘Chartered Association of Business Schools’ ranking produced a resultant journal list of 75. Of these, 58 focused on smart city design, 9 on social enterprise in cityscapes, 6 related to smart city network design and 3 to social impact, with no journals arguing the need for social entrepreneurship to be allied to city mobility. The future inclusion factors from the Delphi expert panel indicated that smart cities need to include shared economy models in their strategies. Furthermore, social isolation borne of the costs of infrastructure needs addressing through holistic, apolitical social enterprise models, and a better understanding of social benefit measurement is needed. Conclusion: In investigating the collaboration between key public transportation stakeholders, a theoretical model of social enterprise transportation models that positively impact upon the smart city needs of reduced transport poverty and social isolation was formed. As such, the research has identified how a revised business model of Mobility of a Service, allied to social entrepreneurship, can deliver impactful, measured social benefits associated with smart city design.
Keywords: social enterprise, collaborative transportation, new models of ownership, transport social impact
Procedia PDF Downloads 141
7632 Modeling and Benchmarking the Thermal Energy Performance of Palm Oil Production Plant
Authors: Mathias B. Michael, Esther T. Akinlabi, Tien-Chien Jen
Abstract:
Thermal energy consumption in a palm oil production plant comprises mainly steam, hot water and hot air. In most efficient plants, hot water and air are generated from the steam supply system. Research has shown that the thermal energy utilized in palm oil production plants is about 70 percent of the total energy consumption of the plant. In order to manage the plants' energy efficiently, the energy systems are modelled and optimized. This paper presents models of the steam supply system of a typical palm oil production plant in Ghana. The models include exergy and energy models of the steam boiler, the steam turbine and the palm oil mill. The paper further simulates the virtual plant model to obtain the thermal energy performance of the plant under study. The simulation results show that, under normal operating conditions, the boiler energy performance is considerably below the expected level as a result of several factors, including intermittent biomass fuel supply, significant moisture content of the biomass fuel and significant heat losses. The total thermal energy performance of the virtual plant is set as a baseline. The study finally recommends a number of energy efficiency measures to improve the plant's energy performance.
Keywords: palm biomass, steam supply, exergy and energy models, energy performance benchmark
Procedia PDF Downloads 350
7631 Practical Modelling of RC Structural Walls under Monotonic and Cyclic Loading
Authors: Reza E. Sedgh, Rajesh P. Dhakal
Abstract:
Shear walls have been used extensively as the main lateral force resisting systems in multi-storey buildings. Recent developments in performance-based design urge practicing engineers to conduct nonlinear static or dynamic analysis to evaluate the seismic performance of multi-storey shear wall buildings by employing distinct analytical models suggested in the literature. For practical purposes, the application of macroscopic models to simulate the global and local nonlinear behavior of structural walls outweighs that of microscopic models. The required skill level, the computational time and limited access to specialized RC finite element packages prevent the general application of the latter in performance-based design or assessment of multi-storey shear wall buildings in design offices. Hence, this paper sets out to verify the capability of the nonlinear shell element in a commercially available package (SAP2000) in simulating the response of several specimens under monotonic and cyclic loads, using the rather simplified cyclic material laws available in the analytical tool. The selection of constitutive models, the determination of the related parameters of the constituent materials and an appropriate nonlinear shear model are presented in detail. Adoption of the proposed simple model demonstrated that the predicted results follow the overall trend of the experimental force-displacement curve. Although the prediction of ultimate strength and the overall shape of the hysteresis loops agreed to some extent with the experiments, predicting the ultimate displacement (the point of significant strength degradation) remains challenging in some cases.
Keywords: analytical model, nonlinear shell element, structural wall, shear behavior
Procedia PDF Downloads 404
7630 Seafloor and Sea Surface Modelling in the East Coast Region of North America
Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk
Abstract:
Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information – National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are many gaps still to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. One solution is raster bathymetric models shared by The General Bathymetric Chart of the Oceans. The offered products are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g. seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge on the quality and usefulness of the data used for seabed and sea surface modeling, and on the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
Keywords: seafloor, sea surface height, bathymetry, satellite altimetry
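As an illustration of the gridding step involved in building such raster models, a minimal Python sketch interpolating scattered soundings onto a regular grid is given below; the synthetic soundings and the choice of interpolator are assumptions, not the workflow used in the study.

```python
# Grid scattered depth soundings onto a regular lon/lat raster.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
# Scattered survey points: longitude, latitude, depth (negative down).
lon = rng.uniform(-70.0, -60.0, 5000)
lat = rng.uniform(35.0, 45.0, 5000)
depth = -4000 + 1500 * np.exp(-((lon + 65) ** 2 + (lat - 40) ** 2))  # fake seamount

# Regular grid at ~0.05 degree resolution.
grid_lon, grid_lat = np.meshgrid(np.arange(-70, -60, 0.05),
                                 np.arange(35, 45, 0.05))

# Linear interpolation inside the data hull; cells with no nearby soundings
# stay NaN, mirroring the unsurveyed gaps between ship tracks.
grid_depth = griddata((lon, lat), depth, (grid_lon, grid_lat), method="linear")
print("grid shape:", grid_depth.shape, "NaN cells:", int(np.isnan(grid_depth).sum()))
```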
Procedia PDF Downloads 80
7629 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties
Authors: G. Martino, F. Silva, E. Marchal
Abstract:
Control over the characteristics of the delivered iron ore blend is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process. Thus, end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made in an uncoordinated way, the beneficiation plant struggles to deliver the best possible blend. Technological improvements in several fields have made it possible to bridge the gap between departments and boost integrated decision-making processes. Clusterization and classification algorithms applied to historical production data generate reasonable predictions of the quality and volume of iron ore produced for each pile of run-of-mine (ROM) processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions in the process. This study shows how flexibilization, combined with a planning optimization model spanning the mine and the ore beneficiation processes, can reduce the risk of out-of-specification deliveries. The model capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways of reducing cost or increasing profit by optimizing process indicators across the production chain and integrating the different planning levels with the sales decisions.
Keywords: clusterization and classification algorithms, integrated planning, mathematical modeling, optimization, penalty minimization
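A minimal sketch of the blend-optimization idea is given below: tonnages are drawn from stockpiles so that the delivered blend meets a grade specification at minimum cost. The pile grades, costs and specification limits are assumed numbers for illustration, not the study's model.

```python
# Linear-programming blend sketch: meet Fe and silica specs at minimum cost.
import numpy as np
from scipy.optimize import linprog

fe_grade = np.array([0.62, 0.58, 0.65])      # Fe fraction of each pile
silica   = np.array([0.045, 0.060, 0.038])   # contaminant fraction
cost     = np.array([12.0, 9.0, 15.0])       # handling cost per tonne

demand   = 100_000.0                         # tonnes to deliver
min_fe   = 0.615                             # contractual minimum Fe grade
max_sio2 = 0.050                             # contractual maximum silica

# Minimize cost subject to: total tonnage = demand, blend Fe >= min_fe,
# blend silica <= max_sio2, and non-negative tonnages per pile.
res = linprog(
    c=cost,
    A_ub=np.vstack([-(fe_grade - min_fe), silica - max_sio2]),
    b_ub=[0.0, 0.0],
    A_eq=[np.ones(3)],
    b_eq=[demand],
    bounds=[(0, None)] * 3,
)
print("tonnes per pile:", res.x.round(1), "total cost:", round(res.fun, 0))
```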
Procedia PDF Downloads 123
7628 Facilitating Waste Management to Achieve Sustainable Residential Built Environments
Authors: Ingy Ibrahim El-Darwish, Neveen Youssef Azmy
Abstract:
A healthy environment can be promoted by endorsing sustainable fundamentals. The design of sustainable buildings through recycling of waste can reduce health problems, provide good environments and contribute to an aesthetically pleasing setting. Such environments can help in providing energy-saving alternatives that consolidate the principles of sustainability. Poor community awareness and the absence of laws and legislation for waste management in Egypt, specifically in residential areas, have led to an inability to provide an integrated waste management system in urban and rural areas. Many problems and environmental challenges face Egyptian urban environments. Among these problems is the lack of a cohesive vision for waste collection and recycling for energy saving. The second problem is the lack of public awareness of the short-term and long-term vision of waste management. Bad practices have adversely affected the efficiency of environmental management systems due to the lack of urban legislation codifying collection and recycling in residential communities in Egyptian urban environments. Hence, this research tries to engage residents on waste management matters in order to facilitate the legislative process for waste collection and classification within residential units and outside them, as a preparation phase for recycling in Egyptian urban environments. In order to achieve this goal, one Egyptian community has been addressed, analyzed and studied. Waste collection, classification, separation and access to recycling places in the urban city are proposed in preparation for legislation ruling and regulating the process. In this way, sustainable principles are to be achieved.
Keywords: recycling, residential buildings, sustainability, waste
Procedia PDF Downloads 327
7627 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University
Authors: Saadia Elamin
Abstract:
The quasi-universal use of smart phones with internet connection available all the time makes it a reflex action for translation undergraduates, once they encounter the least translation problem, to turn to the freely available web resource: Google Translate. As with other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Here, instead of interfering with students' learning by providing ready-made solutions which might not always fit into the contexts of use, it can help to consolidate the skills of analysis and transfer which students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost, before jumping into the search for target language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning in the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development in the way translation is taught, senior students, after being trained in post-editing machine translation output, are authorized to use Google Translate in classwork and assignments. Third, the paper elaborates on the findings of this case study, which has demonstrated that Google Translate, if used at the appropriate levels of training, can help to enhance students' ability to perform different translation tasks. This help extends from the search for terms and expressions to the tasks of drafting the target text, revising its content and finally editing it. In addition, using Google Translate in this way fosters a reflexive and critical attitude towards web resources in general, thus maximizing the benefit gained from them in preparing students to meet the requirements of the modern translation job market.
Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools
Procedia PDF Downloads 473
7626 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course
Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu
Abstract:
This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. Data consisting of 140 rows, pertaining to the performance of two batches of students, were used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian Copula (Gaussian) were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data were used to train machine learning models using the PyCaret package. For the CTGAN data, the Ada Boost Classifier (ADA) was found to be the ML model with the best fit, whereas CTGAN with Gaussian Copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy with the real data. ROC-AUC analysis was performed for all ten classes of the target variable (grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, while the LR model with the Gaussian data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian data showed consistently better AUC scores compared to the ADA model with CTGAN data, except for two grade values, C- and A-.
Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, Gaussian copula CTGAN
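A minimal sketch of the synthetic-data workflow described above is given below: a CTGAN is fitted on the small real table, synthetic rows are sampled, and PyCaret compares candidate classifiers. The column names, data values and sizes are assumptions, and depending on library versions the calls may need minor adjustment.

```python
# CTGAN synthetic data generation followed by a PyCaret model comparison.
import numpy as np
import pandas as pd
from ctgan import CTGAN
from pycaret.classification import setup, compare_models, predict_model

# Stand-in for the 140-row real table (hypothetical column names).
rng = np.random.default_rng(0)
real = pd.DataFrame({
    "homework_avg": rng.uniform(40, 100, 140),
    "exam1": rng.uniform(30, 100, 140),
    "attendance": rng.uniform(0.5, 1.0, 140),
    "grade": rng.choice(["A", "B", "C", "D", "F"], 140),
})

# Fit the generator on the real data and draw a larger synthetic sample.
generator = CTGAN(epochs=50)
generator.fit(real, discrete_columns=["grade"])
synthetic = generator.sample(1000)

# Train candidate models on the synthetic data with PyCaret's model sweep,
# then score the selected model back on the real rows as a sanity check.
setup(data=synthetic, target="grade", session_id=42)
best = compare_models()
print(predict_model(best, data=real).head())
```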
Procedia PDF Downloads 44
7625 Vibrations of Springboards: Mode Shape and Time Domain Analysis
Authors: Stefano Frassinelli, Alessandro Niccolai, Riccardo E. Zich
Abstract:
Diving is an important Olympic sport in which the effective performance of the athlete is related to their capability to interact correctly with the springboard. In fact, the elevation of the jump and the correctness of the dive are influenced by the vibrations of the board. In this paper, the vibrations of the springboard are analyzed by means of typical tools for vibration analysis: first, a modal analysis is carried out on two different models of the springboard; then, these two models and a third one are analyzed with a time-domain analysis, performed by integrating the equations of motion of deformable bodies. All these analyses are compared with experimental data measured on a real springboard by means of a 6-axis accelerometer; these measurements are aimed at assessing the proposed models. The acquired data are analyzed both in the frequency domain and in the time domain.
Keywords: springboard analysis, modal analysis, time domain analysis, vibrations
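As an illustration of how natural frequencies can be estimated from accelerometer records of the kind used above to assess the models, a minimal Python sketch is given below; the sampling rate and the synthetic two-mode signal are assumptions.

```python
# Estimate dominant vibration frequencies from a (synthetic) accelerometer record.
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                      # sampling frequency in Hz (assumed)
t = np.arange(0, 5.0, 1.0 / fs)  # 5 s record

# Stand-in for one accelerometer axis: two decaying modes plus noise.
accel = (np.exp(-0.8 * t) * np.sin(2 * np.pi * 4.2 * t)
         + 0.4 * np.exp(-1.5 * t) * np.sin(2 * np.pi * 21.0 * t)
         + 0.05 * np.random.default_rng(3).normal(size=t.size))

# Windowed FFT magnitude and the corresponding frequency axis.
spectrum = np.abs(np.fft.rfft(accel * np.hanning(accel.size)))
freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)

# Report spectral peaks as estimates of the natural frequencies.
peak_idx, _ = find_peaks(spectrum, height=spectrum.max() * 0.1)
print("estimated natural frequencies (Hz):", np.round(freqs[peak_idx], 2))
```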
Procedia PDF Downloads 460
7624 Predicting the Effect of Silicon Electrode Design Parameters on Thermal Performance of a Lithium-Ion Battery
Authors: Harika Dasari, Eric Eisenbraun
Abstract:
The present study models the role of electrode structural characteristics in the thermal behavior of lithium-ion batteries. Preliminary modeling runs have employed a 1D lithium-ion battery model coupled to a two-dimensional axisymmetric model, using silicon as the battery anode material. The two models are coupled through the heat generated and the average temperature. Our study focuses on silicon anode particle sizes, and it is observed that silicon anodes with nano-sized particles reduce the temperature of the battery in comparison to anodes with larger particles. These results are discussed in the context of the relationship between particle size and thermal transport properties in the electrode.
Keywords: particle size, NMC, silicon, heat generation, separator
Procedia PDF Downloads 290
7623 Predictors of Social Participation of Children with Cerebral Palsy in Primary Schools in Czech Republic
Authors: Marija Zulić, Vanda Hájková, Nina Brkić-Jovanović, Linda Rathousová, Sanja Tomić
Abstract:
Cerebral palsy is primarily reflected in a disorder of the development of movement and posture, which may be accompanied by sensory disturbances, disturbances of perception, cognition and communication, behavioural disorders and epilepsy. Current inclusive attitudes towards people with disabilities imply that full social participation of children with cerebral palsy means inclusion in all activities in family, peer, school and leisure environments, in the same scope and to the same extent as is the case for typically developing children without physical difficulties. Since it has been established that the quality of children's participation in primary school is directly related to their social inclusion in future life, the aim of the paper is to identify predictors of social participation and, in particular, factors that could improve the quality of social participation of children with cerebral palsy in the primary school environment in the Czech Republic. The study includes children with cerebral palsy (n = 75) in the Czech Republic, aged between six and 12 years, who attend mainstream or special primary schools up to the sixth grade. The main instrument used was the first and third parts of the School Function Assessment questionnaire. The type of impairment, assessed according to the Gross Motor Function Classification System, a five-level classification system for cerebral palsy, is also taken into account. The research results will provide detailed insight into the degree of social participation of children with cerebral palsy and the factors that potentially determine their levels of participation, in regular and special primary schools, in different socioeconomic environments in the Czech Republic.
Keywords: cerebral palsy, Czech Republic, social participation, the School Function Assessment
Procedia PDF Downloads 361
7622 Stability Analysis of Two-Delay Differential Equation for Parkinson's Disease Models with Positive Feedback
Authors: M. A. Sohaly, M. A. Elfouly
Abstract:
Parkinson's disease (PD) is a heterogeneous movement disorder that often appears in the elderly. PD is induced by a loss of dopamine secretion. Some drugs increase the secretion of dopamine. In this paper, we study the stability of PD models formulated as nonlinear delay differential equations. After a period of taking drugs, these act as positive feedback and increase the patients' tremors; the differential equation then has positive coefficients, and the system is unstable under these conditions. We present a set of suggested modifications to make the system more compatible with the biodynamic system. While giving a set of numerical examples, this research paper is concerned with the mathematical analysis, and no clinical data have been used.
Keywords: Parkinson's disease, stability, simulation, two delay differential equation
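For illustration, a minimal sketch of integrating a linear two-delay differential equation with a fixed-step Euler scheme and a history buffer is given below; the coefficients, delays and constant initial history are assumptions and do not reproduce the paper's PD model.

```python
# Fixed-step Euler integration of x'(t) = a*x(t) + b*x(t - tau1) + c*x(t - tau2).
import numpy as np

a, b, c = -1.0, 0.4, 0.3          # b, c > 0 act as positive feedback terms
tau1, tau2 = 0.5, 1.2             # the two delays (assumed)
dt, t_end = 0.001, 20.0

d1, d2 = int(tau1 / dt), int(tau2 / dt)
n = int(t_end / dt)
hist = max(d1, d2)

# Indices 0..hist hold the constant pre-history x(t) = 1 for t <= 0.
x = np.ones(hist + n + 1)

for i in range(hist, hist + n):
    x[i + 1] = x[i] + dt * (a * x[i] + b * x[i - d1] + c * x[i - d2])

# Bounded here; with a + b + c > 0 (and b, c > 0) the solution grows without bound.
print("x(t_end) =", x[-1])
```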
Procedia PDF Downloads 130
7621 Selection of Variogram Model for Environmental Variables
Authors: Sheikh Samsuzzhan Alam
Abstract:
The present study investigates the selection of a variogram model when analyzing the spatial variation of environmental variables with trend. Sometimes the automatically fitted theoretical variogram does not really capture the true nature of the empirical semivariogram, so proper exploration and analysis are needed to select the best variogram model. For this study, an open-source dataset collected from the California Soil Resource Lab is used to illustrate the problems that arise when fitting a theoretical variogram. The five most commonly used variogram models – linear, Gaussian, exponential, Matérn, and spherical – were fitted to the experimental semivariogram. Ordinary kriging was used to evaluate the accuracy of the selected variograms through cross-validation. This study is beneficial for selecting an appropriate theoretical variogram model for environmental variables.
Keywords: anisotropy, cross-validation, environmental variables, kriging, variogram models
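A minimal sketch of choosing among candidate variogram models by leave-one-out cross-validation with ordinary kriging (PyKrige) is given below; the synthetic soil-attribute data are assumptions, and the Matérn model is omitted because it would require a custom variogram function in this library.

```python
# Leave-one-out cross-validation of variogram models with ordinary kriging.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(4)
x, y = rng.uniform(0, 100, 80), rng.uniform(0, 100, 80)
z = 0.05 * x + np.sin(y / 15.0) + rng.normal(scale=0.2, size=80)  # fake soil attribute

def loo_rmse(model_name):
    errors = []
    for i in range(len(z)):
        mask = np.ones(len(z), dtype=bool)
        mask[i] = False                                   # hold out one observation
        ok = OrdinaryKriging(x[mask], y[mask], z[mask],
                             variogram_model=model_name,
                             verbose=False, enable_plotting=False)
        pred, _ = ok.execute("points", np.array([x[i]]), np.array([y[i]]))
        errors.append(float(pred[0]) - z[i])
    return float(np.sqrt(np.mean(np.square(errors))))

for name in ["linear", "gaussian", "exponential", "spherical"]:
    print(name, "LOO RMSE:", round(loo_rmse(name), 4))
```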
Procedia PDF Downloads 334
7620 Chemometric Analysis of Raw Milk Quality Originating from Conventional and Organic Dairy Farming in AP Vojvodina, Serbia
Authors: Sanja Podunavac-Kuzmanović, Denis Kučević, Strahinja Kovačević, Milica Karadžić, Lidija Jevrić
Abstract:
The present study describes the application of chemometric methods in the analysis of milk samples collected at a conventional dairy farm and an organic dairy farm in AP Vojvodina, Republic of Serbia. The chemometric analysis included the application of univariate regression modeling and the Analysis of Variance (ANOVA) method. ANOVA was used in order to determine the differences in fatty acid content between the milk samples from the conventional and the organic farm. The results of the ANOVA testing indicate that there is a highly statistically significant difference between the contents of fatty acids (saturated vs. unsaturated fatty acids) under the two types of dairy farming. In addition, linear univariate models have been obtained by modeling the linear relationship between milk fat content and saturated fatty acid content, and between milk fat content and unsaturated fatty acid content. The models based on the milk samples originating from the organic farm are statistically better than the models based on the milk samples from conventional farming.
Keywords: chemometrics, milk, organic farming, quality control
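A minimal sketch of the two chemometric steps described above, a one-way ANOVA between farming systems and a univariate linear regression against milk fat content, is given below; the synthetic measurements are assumptions, not the Vojvodina data.

```python
# One-way ANOVA and univariate regression on synthetic milk-quality data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sfa_conventional = rng.normal(70.0, 2.0, 40)   # saturated FA, % of total FA
sfa_organic      = rng.normal(67.5, 2.0, 40)

f_stat, p_value = stats.f_oneway(sfa_conventional, sfa_organic)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Univariate model: saturated fatty acid content vs. milk fat content.
milk_fat = rng.normal(3.9, 0.3, 40)
sfa = 55.0 + 3.5 * milk_fat + rng.normal(0, 1.0, 40)
slope, intercept, r, p, se = stats.linregress(milk_fat, sfa)
print(f"SFA = {intercept:.2f} + {slope:.2f} * milk_fat, R^2 = {r**2:.3f}")
```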
Procedia PDF Downloads 237
7619 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models
Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales
Abstract:
The primary approach for estimating bridge deterioration uses Markov-chain models and regression analysis. Traditional Markov models have problems in estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods, so large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach both provide similar estimates; however, the former method provides results that are more conservative. That is, the Small Data Method provided slightly lower-than-expected bridge condition ratings compared with the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained when using Markov chains; however, it is desirable to use more data for better results.
Keywords: concrete bridges, deterioration, Markov chains, probability matrix
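As an illustration of the Markov-chain side of the approach, a minimal sketch that propagates a condition-state distribution through an assumed transition probability matrix is given below; the matrix values and rating scale are illustrative assumptions, not estimates from inspection data.

```python
# Propagate a bridge-group condition distribution through a Markov chain.
import numpy as np

# Four condition states, ordered best (0) to worst (3); rows sum to 1 and a
# bridge can only stay in its state or deteriorate by one state per year.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0])      # all bridges start in the best state
ratings = np.array([9.0, 7.0, 5.0, 3.0])    # condition rating attached to each state

for year in range(0, 31, 5):
    print(f"year {year:2d}: expected rating = {state @ ratings:.2f}")
    state = state @ np.linalg.matrix_power(P, 5)   # advance five years
```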
Procedia PDF Downloads 336
7618 Using Textual Pre-Processing and Text Mining to Create Semantic Links
Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo
Abstract:
This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and, thus, generate new concepts and new semantic links. Even using fairly specific vocabularies within the oil domain, our approach has achieved satisfactory results, suggesting that the proposal can be applied to other domains and languages, requiring only minor adjustments.
Keywords: semantic links, data mining, linked data, SKOS
Procedia PDF Downloads 179
7617 Analogy to Continental Divisions: An Attention-Grabbing Approach to Teach Taxonomic Hierarchy to Students
Authors: Sagheer Ahmad
Abstract:
Teaching is a sacred profession whereby students' mental abilities are developed to cope with the challenges of the remote world. Thinkers have developed plenty of interesting ways to make the learning process quick and absorbing for students. However, third-world countries still lack these remote facilities in their institutions, and therefore teaching is totally dependent upon the skills of the teachers. Most of the time, their ideas are based on local grounds with which the students are already familiar. This self-explanatory characteristic is the basis of several local approaches to disseminating scientific knowledge to new generations. Biology is a subject that is largely based on hypotheses, and teaching it in an interesting way is necessary to create a friendly relationship between teacher and student and to make a stimulating learning environment. Taxonomic classification, if presented as it is, may not be attractive to secondary school students who are just starting to learn about biology at elementary levels. Presenting this hierarchy by likening Kingdom, Phylum, Class, Order, Family, Genus and Species to our division into continents, countries, cities, towns, villages, homes and finally individuals could be an attention-grabbing approach to help students absorb the concept. Similarly, many other interesting approaches have been adopted to teach students in a fascinating way, so that learning science subjects does not become boring for them. Discussing these appealing ways of teaching can be a valuable stimulus for refining science teaching methodologies, thereby promoting the concept of friendly learning.
Keywords: biology, innovative approaches, taxonomic classification, teaching
Procedia PDF Downloads 250
7616 Combining Corpus Linguistics and Critical Discourse Analysis to Study Power Relations in Hindi Newspapers
Authors: Vandana Mishra, Niladri Sekhar Dash, Jayshree Charkraborty
Abstract:
This paper focuses on the application of corpus linguistics techniques to the critical discourse analysis (CDA) of Hindi newspapers. While corpus linguistics is the study of language as expressed in corpora (samples) of 'real world' text, CDA is an interdisciplinary approach to the study of discourse that views language as a form of social practice. CDA has mainly been studied from a qualitative perspective. However, recent studies have begun combining corpus linguistics with CDA in analyzing large volumes of text for the study of existing power relations in society. The corpus under study is also of a sizable amount (1 million words of Hindi newspaper texts), and its analysis requires an alternative analytical procedure. So, we have combined the quantitative approach, i.e. the use of corpus techniques, with CDA's traditional qualitative analysis. In this context, we have focused on keyword analysis, sorting concordance lines of the selected keywords and calculating collocates of the keywords. We have made use of the WordSmith Tools software for all these analyses. The analysis starts with identifying the keywords in the political news corpus when compared with the main news corpus. The keywords are extracted from the corpus based on their keyness, calculated through statistical tests like the chi-squared test and the log-likelihood test on the frequent words of the corpus. Some of the top occurring keywords are मोदी (Modi), भाजपा (BJP), कांग्रेस (Congress), सरकार (Government) and पार्टी (Political party). This is followed by the concordance analysis of these keywords, which generates thousands of lines; we select a few lines and examine them based on our objective. We have also calculated the collocates of the keywords based on their Mutual Information (MI) score. Both concordance and collocation help to identify lexical patterns in the political texts. Finally, all these quantitative results derived from the corpus techniques are subjectively interpreted in accordance with CDA theory to examine the ways in which political news discourse produces social and political inequality, power abuse or domination.
Keywords: critical discourse analysis, corpus linguistics, Hindi newspapers, power relations
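For reference, a minimal sketch of the log-likelihood keyness statistic used for keyword extraction is given below; the toy frequency counts and corpus sizes are assumptions.

```python
# Log-likelihood (G2) keyness of a word in a study corpus vs. a reference corpus.
import math

def log_likelihood(freq_study, size_study, freq_ref, size_ref):
    """G2 keyness score for one word across two corpora (Rayson-Garside form)."""
    expected_study = size_study * (freq_study + freq_ref) / (size_study + size_ref)
    expected_ref = size_ref * (freq_study + freq_ref) / (size_study + size_ref)
    g2 = 0.0
    for observed, expected in ((freq_study, expected_study), (freq_ref, expected_ref)):
        if observed > 0:
            g2 += 2.0 * observed * math.log(observed / expected)
    return g2

# Toy counts: occurrences in a 1,000,000-word political news corpus versus a
# 5,000,000-word general reference corpus (illustrative numbers only).
for word, f_study, f_ref in [("मोदी", 4200, 3100), ("सरकार", 6100, 14200)]:
    score = log_likelihood(f_study, 1_000_000, f_ref, 5_000_000)
    print(word, round(score, 1))   # > 3.84 indicates significance at p < 0.05 (1 d.f.)
```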
Procedia PDF Downloads 224
7615 Future Design and Innovative Economic Models for Futuristic Markets in Developing Countries
Authors: Nessreen Y. Ibrahim
Abstract:
Designing the future according to a realistic analytical study of futuristic market needs can be a milestone strategy for making a major improvement in the economies of developing countries. In developing countries, access to high technology and the latest scientific approaches is very limited. The financial problems of low- and medium-income countries have negative effects on the kind and quality of new technologies imported for, and applied to, their markets. Thus, there is a strong need for a shift in paradigm thinking in the design process to improve and evolve their development strategy. This paper discusses future possibilities in developing countries and how they can design their own future according to specific Future Design Models (FDM), established to solve certain economic problems as well as political and cultural conflicts. FDM is a strategic thinking framework that provides an improvement in both content and process. The content includes beliefs, values, mission, purpose, conceptual frameworks, research, and practice, while the process includes design methodology, design systems, and design management tools. In this paper, the main objective was to build an innovative economic model to design a chosen possible futuristic scenario: by understanding future market needs, analyzing the real-world setting, solving the model questions through future-driven design, and finally interpreting the results to discuss to what extent they can be transferred to the real world. The paper discusses Egypt as a potential case study. Since Egypt has highly complex economic problems, extra-dynamic political factors, and very rich cultural aspects, we considered it a very challenging example for applying FDM. The paper's results recommend using FDM numerical modeling as a starting point for designing the future.
Keywords: developing countries, economic models, future design, possible futures
Procedia PDF Downloads 267