Search results for: profile-based features
2942 A Task Scheduling Algorithm in Cloud Computing
Authors: Ali Bagherinia
Abstract:
An efficient task-scheduling method can meet users' requirements, improve resource utilization, and thereby increase the overall performance of the cloud computing environment. Cloud computing has new features such as flexibility and virtualization. In this paper, we propose a two-level task-scheduling method based on load balancing in cloud computing. This method meets users' requirements and achieves high resource utilization, as simulation results in the CloudSim simulator confirm.
Keywords: cloud computing, task scheduling, virtualization, SLA
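The two-level idea can be sketched as a greedy least-load assignment: first pick the least-loaded host, then the least-loaded VM on that host. The host/VM structure and the greedy rule below are illustrative assumptions for exposition, not the authors' exact algorithm.

```python
# Hedged sketch of a two-level load-balancing scheduler (illustrative, not the
# paper's exact method): level 1 picks a host, level 2 picks a VM on that host.
from typing import Dict, List

def schedule(tasks: List[int], hosts: Dict[str, List[str]]) -> Dict[str, int]:
    """Assign each task (given by its length) greedily; return per-VM load."""
    load = {vm: 0 for vms in hosts.values() for vm in vms}
    host_load = {h: 0 for h in hosts}
    for t in sorted(tasks, reverse=True):        # longest task first
        h = min(host_load, key=host_load.get)    # level 1: least-loaded host
        vm = min(hosts[h], key=lambda v: load[v])  # level 2: least-loaded VM
        load[vm] += t
        host_load[h] += t
    return load

# two hosts with two VMs each; task lengths are arbitrary demo values
assignment = schedule([5, 4, 3, 2], {"h1": ["a", "b"], "h2": ["c", "d"]})
```

In a simulator such as CloudSim, the same greedy rule would operate on cloudlet lengths and VM MIPS ratings instead of these toy integers.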
Procedia PDF Downloads 399
2941 Load-Deflecting Characteristics of a Fabricated Orthodontic Wire with 50.6Ni 49.4Ti Alloy Composition
Authors: Aphinan Phukaoluan, Surachai Dechkunakorn, Niwat Anuwongnukroh, Anak Khantachawana, Pongpan Kaewtathip, Julathep Kajornchaiyakul, Peerapong Tua-Ngam
Abstract:
Aims: The objective of this study was to determine the load-deflecting characteristics of a fabricated orthodontic wire with an alloy composition of 50.6% (atomic weight) Ni and 49.4% (atomic weight) Ti, and to compare the results with Ormco, a commercially available pre-formed NiTi orthodontic archwire. Materials and Methods: Alloy ingots with an atomic weight ratio of 50.6 Ni : 49.4 Ti were used in this study. Three specimens were cut to wire dimensions of 0.016 inch x 0.022 inch. For comparison, a commercially available pre-formed NiTi archwire, Ormco, with dimensions of 0.016 inch x 0.022 inch was used. Three-point bending tests were performed at a temperature of 36 ± 1 °C using a Universal Testing Machine on the newly fabricated and commercial archwires to assess the characteristics of the load-deflection curve with loading and unloading forces. The loading and unloading features at the deflection points 0.25, 0.50, 0.75, 1.0, 1.25, and 1.5 mm were compared. Descriptive statistics were used to evaluate each variable, and an independent t-test at p < 0.05 was used to analyze the mean differences between the two groups. Results: The load-deflection curve of the 50.6Ni: 49.4Ti wires exhibited the characteristic features of superelasticity. The loading and unloading slopes of the Ormco NiTi archwire curve were more parallel than those of the newly fabricated NiTi wires. The average deflection force of the 50.6Ni: 49.4Ti wire was 304.98 g for loading and 208.08 g for unloading. The corresponding values for the Ormco NiTi archwire were 358.02 g for loading and 253.98 g for unloading. The interval difference forces between the deflection points were in the range of 20.40-121.38 g and 36.72-92.82 g for the loading and unloading curves of the 50.6Ni: 49.4Ti wire, respectively, and 4.08-157.08 g and 14.28-90.78 g for the loading and unloading curves of the commercial wire, respectively.
The average deflection force of the 50.6Ni: 49.4Ti wire was less than that of the Ormco NiTi archwire, which could have been due to variations in the wire dimensions. Although the forces required at each deflection point of loading and unloading differed between the 50.6Ni: 49.4Ti wire and the Ormco NiTi archwire, the values were still within the acceptable limits for clinical use in orthodontic treatment. Conclusion: The 50.6Ni: 49.4Ti wires presented the characteristics of a superelastic orthodontic wire. The loading and unloading forces were also suitable for orthodontic tooth movement. These results serve as a suitable foundation for further studies in the development of new orthodontic NiTi archwires.
Keywords: 50.6 Ni 49.4 Ti alloy wire, load deflection curve, loading and unloading force, orthodontic
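The mean comparison in the study uses an independent (pooled-variance) t-test at p < 0.05. A minimal stdlib-only version of that statistic is sketched below; the sample values are made-up demonstration numbers, not the study's measurements.

```python
# Pooled-variance independent-samples t statistic (the study's test at p < 0.05).
# Sample values below are hypothetical, for illustration only.
import math
from statistics import mean, variance

def independent_t(sample_a, sample_b):
    """Return (t statistic, degrees of freedom) for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    pooled = ((na - 1) * variance(sample_a) + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (mean(sample_a) - mean(sample_b)) / se, na + nb - 2

# e.g. loading forces (g) at one deflection point for the two wires (made-up numbers)
t, df = independent_t([300.1, 305.2, 309.6], [355.0, 358.9, 360.2])
```

The resulting t would then be compared against the critical value for the given degrees of freedom to decide significance at p < 0.05.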
Procedia PDF Downloads 300
2940 Assessing Circularity Potentials and Customer Education to Drive Ecologically and Economically Effective Materials Design for Circular Economy - A Case Study
Authors: Mateusz Wielopolski, Asia Guerreschi
Abstract:
Circular Economy, as the counterargument to the ‘make-take-dispose’ linear model, is an approach that includes a variety of schools of thought looking at environmental, economic, and social sustainability. This, in turn, leads to a variety of strategies and often confusion when it comes to choosing the right one to make a circular transition as effective as possible. Due to the close interplay of circular product design, business model and social responsibility, companies often struggle to develop strategies that comply with all three triple-bottom-line criteria. Hence, to transition to circularity effectively, product design approaches must become more inclusive. In a case study conducted with the University of Bayreuth and the ISPO, we correlated aspects of material choice in product design, labeling and technological innovation with customer preferences and education about specific material and technology features. The study revealed those attributes of the consumers’ environmental awareness that directly translate into an increase in purchasing power - primarily connected with individual preferences regarding sports activity and technical knowledge. Based on this outcome, we formulated a product development approach that incorporates the consumers’ individual preferences towards sustainable product features as well as their awareness about materials and technology. It allows deploying targeted customer education campaigns to raise the willingness to pay for sustainability. Next, we implemented the customer preference and education analysis into a circularity assessment tool that takes into account inherent company assets as well as subjective parameters like customer awareness. The outcome is a detailed but not cumbersome scoring system, which provides guidance for material and technology choices for circular product design while considering business model and communication strategy for attentive customers.
By including customer knowledge and complying with corresponding labels, companies develop more effective circular design strategies, while simultaneously increasing customers’ trust and loyalty.
Keywords: circularity, sustainability, product design, material choice, education, awareness, willingness to pay
Procedia PDF Downloads 198
2939 A Political-Economic Analysis of Next Generation EU Recovery Fund
Authors: Fernando Martín-Espejo, Christophe Crombez
Abstract:
This paper presents a political-economic analysis of the reforms introduced during the coronavirus crisis at the EU level with a special emphasis on the recovery fund Next Generation EU (NGEU). It also introduces a spatial model to evaluate whether the governmental features of the recovery fund can be framed inside the community method. Particularly, by evaluating the brake clause in the NGEU legislation, this paper analyses theoretically the political and legislative implications of the introduction of flexibility clauses in the EU decision-making process.
Keywords: EU, legislative procedures, spatial model, coronavirus
Procedia PDF Downloads 176
2938 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding the influence of information flow and knowledge sharing within the network on patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. On this basis, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence.
The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
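The centrality measures named above can be computed directly from a patent-coupling adjacency structure. The toy three-patent graph below is a stdlib-only sketch; a real analysis of 59,379 patents would typically use a graph library such as NetworkX.

```python
# Minimal degree and closeness centrality on a toy patent-coupling graph.
# Nodes are patents; an edge means two patents co-cite the same scientific paper.
from collections import deque

def degree_centrality(adj):
    """Fraction of other nodes each node is directly connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """(n-1) divided by the sum of shortest-path distances from each node (BFS)."""
    n = len(adj)
    scores = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        scores[src] = (n - 1) / sum(d for v, d in dist.items() if v != src)
    return scores

# patents A-B-C coupled along a chain: B bridges A and C
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
dc = degree_centrality(graph)
cc = closeness_centrality(graph)
```

These per-node scores would then serve as covariates in the negative binomial regression against citation counts.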
Procedia PDF Downloads 52
2937 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for the detection of facial expressions and emotions by automatically extracting the features. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from converging to the gold labels too quickly, which drives the model to over-fitting, because it is not able to determine adequately discriminative feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer and by specifying a desired soft margin. In effect, the margin acts as a controller for how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting the same class labels and separating different class labels in the normalized log domain. We apply a penalty to those predictions with high divergence from the ground-truth labels; that is, we shorten correct feature vectors and enlarge false prediction tensors, assigning more weight to those classes that lie close to each other (namely, “hard labels to learn”).
By doing this, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for a non-convex problem. Our optimizer works by an alternating gradient-update procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight decay method that drastically reduces the learning rate near optima to reach the dominant local minimum. We demonstrate the superiority of our proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013, a 16% improvement compared to the previous first rank after 10 years; 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset. The network is shown to provide top performance comparable to that of other networks, which require much larger training datasets.
Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
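The core margin idea can be illustrated with a plain SoftMax in which a margin is subtracted from the target-class logit, forcing the model to separate classes by at least that margin. This is only a static sketch of the mechanism; the paper's dynamic margin schedule and tensor-shape handling are not reproduced here.

```python
# Sketch of a margin-penalised SoftMax: subtracting `margin` from the target
# logit lowers the target probability, so training must push classes further
# apart to compensate. The dynamic schedule in the paper is not reproduced.
import math

def softmax_with_margin(logits, target, margin=0.0):
    """Softmax over logits after subtracting `margin` from the target-class logit."""
    adjusted = [z - margin if i == target else z for i, z in enumerate(logits)]
    m = max(adjusted)                          # subtract max for numerical stability
    exps = [math.exp(z - m) for z in adjusted]
    total = sum(exps)
    return [e / total for e in exps]

plain = softmax_with_margin([2.0, 1.0, 0.5], target=0, margin=0.0)
hard = softmax_with_margin([2.0, 1.0, 0.5], target=0, margin=0.5)
```

With the margin applied, the cross-entropy loss on the target class rises, which is exactly what pushes "hard labels" to develop more discriminative embeddings.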
Procedia PDF Downloads 74
2936 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, and simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradicting findings have also been presented. To address this issue, this study intends to adopt Shannon’s entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and pos-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and their evenness of distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. 
More specifically, word-form entropy and pos-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy compared to non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
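Word-form entropy as described above is the Shannon entropy of the token frequency distribution: it rises both with the number of distinct forms and with how evenly they are distributed, which is what traditional type-token indices miss. A minimal computation:

```python
# Shannon entropy (in bits) of a token frequency distribution, as used for
# word-form entropy; applying it to POS tags gives pos-form entropy.
import math
from collections import Counter

def shannon_entropy(tokens):
    """H = -sum(p * log2 p) over the relative frequencies of the tokens."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

diverse = shannon_entropy("the cat sat on the mat".split())      # varied word forms
repetitive = shannon_entropy("the the the the cat cat".split())  # skewed distribution
```

Under the simplification hypothesis, the constrained corpora (L2 and translated English) would pattern with the lower-entropy case.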
Procedia PDF Downloads 92
2935 Photocatalytic Eco-Active Ceramic Slabs to Abate Air Pollution under LED Light
Authors: Claudia L. Bianchi, Giuseppina Cerrato, Federico Galli, Federica Minozzi, Valentino Capucci
Abstract:
At the beginning of industrial production, porcelain gres tiles were considered just a technical material, aesthetically not very beautiful. Today, thanks to new industrial production methods, both the properties and the beauty of these materials completely fit the market requests. In particular, the possibility to prepare slabs of large sizes is the new frontier of building materials. Besides these noteworthy architectural features, new surface properties have been introduced in the last generation of these materials. In particular, deposition of TiO₂ transforms the traditional ceramic into a photocatalytic eco-active material able to reduce polluting molecules present in air and water, to eliminate bacteria, and to reduce surface dirt thanks to the self-cleaning property. The problem with photocatalytic materials is that a UV light source is necessary to activate the oxidation processes on the surface of the material; these processes are turned off inexorably when the material is illuminated by LED lights and, even more so, in darkness. First, a thorough study was necessary to modify the existing plants to deposit the photocatalyst very evenly, and this was accomplished thanks to the advent of digital printing and the development of a custom-made ink that stabilizes the powdered TiO₂ in its formulation. In addition, the commercial TiO₂ that is used for the traditional photocatalytic coating has been doped with metals in order to activate it in the visible region as well, and thus in the presence of sunlight or LED light. Thanks to this active coating, ceramic slabs are able to purify air, eliminating odors and VOCs, and can be cleaned with very mild detergents due to the self-cleaning properties given by the TiO₂ present at the ceramic surface.
Moreover, the presence of dopant metals (patent WO2016157155) also allows the material to work as an antibacterial in the dark, eliminating one of the negative features of photocatalytic building materials that has so far limited their use on a large scale. This matters because we are constantly in contact with bacteria, some of which are dangerous to health. Active tiles are 99.99% efficient against all bacteria, from the most common, such as Escherichia coli, to the most dangerous, such as methicillin-resistant Staphylococcus aureus (MRSA). DIGITALIFE project LIFE13 ENV/IT/000140 – award for best project of October 2017.
Keywords: Ag-doped microsized TiO₂, eco-active ceramic, photocatalysis, digital coating
Procedia PDF Downloads 227
2934 Recognition of Tifinagh Characters with Missing Parts Using Neural Network
Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui
Abstract:
In this paper, we present an algorithm for the reconstruction of Tifinagh characters from incomplete 2D scans. The algorithm is based on the correlation between a lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction, and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.
Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN
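The neighbor-based reconstruction idea can be sketched in its simplest form: fill each missing pixel of a binary character image from the values of its known 4-neighbors. This majority-vote toy stands in for the paper's correlation-based block reconstruction, which it does not reproduce.

```python
# Toy neighbour-based inpainting (illustrative stand-in for the paper's
# correlation method): each missing pixel (None) takes the majority value
# of its known 4-neighbours.
def fill_missing(img):
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(rows):
        for c in range(cols):
            if img[r][c] is None:
                nbrs = [img[r + dr][c + dc]
                        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= r + dr < rows and 0 <= c + dc < cols
                        and img[r + dr][c + dc] is not None]
                out[r][c] = 1 if sum(nbrs) * 2 >= len(nbrs) else 0
    return out

# a 3x3 fragment of a binary character scan with one missing pixel
scan = [[1, 1, 1],
        [1, None, 1],
        [0, 0, 0]]
restored = fill_missing(scan)
```

The restored image would then be passed to the feature-extraction and neural-network classification stages.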
Procedia PDF Downloads 333
2933 About the Number of Fundamental Physical Interactions
Authors: Andrey Angorsky
Abstract:
This article studies the question of the possible number of fundamental physical interactions. The theory of similarity, applied to a dimensionless quantity such as the damping ratio, serves as the instrument of analysis. A structure with the features of the Higgs field emerges from a non-commutative expression for this ratio. An experimentally testable supposition about the nature of dark energy is put forward.
Keywords: damping ratio, dark energy, dimensionless quantity, fundamental physical interactions, Higgs field, non-commutative expression
Procedia PDF Downloads 140
2932 Google Translate: AI Application
Authors: Shaima Almalhan, Lubna Shukri, Miriam Talal, Safaa Teskieh
Abstract:
Since artificial intelligence is a rapidly evolving topic that has had a significant impact on technical growth and innovation, this paper examines people's awareness, use, and engagement with the Google Translate application. To see how familiar users are with the app and its features, quantitative and qualitative research was conducted. The findings revealed that consumers have a high level of confidence in the application, benefit considerably from this sort of innovation, and find that it makes communication convenient.
Keywords: artificial intelligence, google translate, speech recognition, language translation, camera translation, speech to text, text to speech
Procedia PDF Downloads 153
2931 Design of Broadband Power Divider for 3G and 4G Applications
Authors: A. M. El-Akhdar, A. M. El-Tager, H. M. El-Hennawy
Abstract:
This paper presents a broadband power divider with an equal power division ratio. Two sections of transmission-line transformers based on coupled microstrip lines are applied to obtain broadband performance. In addition, a design methodology is proposed for the novel structure. A prototype is designed and simulated to operate in the band from 2.1 to 3.8 GHz to fulfill the requirements of 3G and 4G applications. The proposed structure features reduced size and fewer resistors than other conventional techniques. Simulation verifies the proposed idea and design methodology.
Keywords: power dividers, coupled lines, microstrip, 4G applications
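For orientation, the textbook starting point for an equal-split divider is the classic single-section Wilkinson design in a Z₀ system; the paper's coupled-line transformer sections are a different, broader-band structure that this sketch does not reproduce.

```python
# Classic equal-split Wilkinson divider values (for orientation only; the
# paper's coupled-line two-section design is a different, broader-band topology).
import math

def wilkinson_equal_split(z0=50.0):
    """Quarter-wave branch impedance and isolation resistor for equal split."""
    return {
        "branch_impedance": math.sqrt(2) * z0,  # sqrt(2) * Z0, about 70.7 ohm at 50 ohm
        "isolation_resistor": 2 * z0,           # 2 * Z0 between the output ports
    }

values = wilkinson_equal_split(50.0)
```

The claimed resistor reduction of the proposed structure is measured against conventional multi-section Wilkinson designs, which need one isolation resistor per section.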
Procedia PDF Downloads 475
2930 A Semantic and Concise Structure to Represent Human Actions
Authors: Tobias Strübing, Fatemeh Ziaeetabar
Abstract:
Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need to use a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which works by considering the touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information about static (e.g. top, bottom) and dynamic spatial relations (e.g. moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows and a massive set of spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used for the category of manipulation actions, which involve at most two hands. Here, we would like to extend this approach to a whole-body action descriptor and create a conjoint activity representation structure. For this purpose, we need to perform a statistical analysis to modify the current eSEC by summarizing it while preserving its features, and introduce a new version called Enhanced eSEC, or e2SEC. This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, 2) shrinking the set of possible semantic spatial relations. To achieve these, we computed the importance of each matrix row in a statistical way, to see if a particular row can be removed while all manipulations remain distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the uniqueness of the predefined manipulation actions.
By performing the above analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a tremendous impact on the recognition and prediction of complex actions, as well as on the interactions between humans and robots. It also creates a comprehensive platform to integrate with body limb descriptors and dramatically increases system performance, especially in complex real-time applications such as human-robot interaction prediction.
Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis
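The row-reduction criterion described above can be sketched as a simple check: a descriptor row is removable only if every pair of action matrices stays distinguishable without it. The toy matrices below are invented for illustration; real eSEC matrices have thirty rows of spatial relations.

```python
# Sketch of the row-importance test behind e2SEC: a row may be dropped only if
# all action descriptors (here, tuples of rows) remain pairwise distinct.
def removable_rows(matrices):
    """Return indices of rows whose removal keeps every matrix pair distinct."""
    n_rows = len(matrices[0])
    removable = []
    for r in range(n_rows):
        reduced = [tuple(row for i, row in enumerate(m) if i != r) for m in matrices]
        if len(set(reduced)) == len(matrices):  # still all unique without row r
            removable.append(r)
    return removable

# three toy "actions", each a 2-row matrix of relation symbols (T = touching,
# N = not touching); the actions differ only in row 0, so row 1 is redundant
actions = [(("T", "N"), ("N", "T")),
           (("T", "T"), ("N", "T")),
           (("N", "N"), ("N", "T"))]
redundant = removable_rows(actions)
```

The same uniqueness test, applied to merged relation symbols instead of rows, yields the reduced static and dynamic relation sets.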
Procedia PDF Downloads 124
2929 Artificial Intelligence and Development: The Missing Link
Authors: Driss Kettani
Abstract:
ICT4D actors are naturally tempted to include AI in the range of enabling technologies and tools that could support and boost the development process, and to refer to this as AI4D. But doing so assumes that AI complies with the very specific features of the ICT4D context, including, among others, affordability, relevance, openness, and ownership. Clearly, none of these is fulfilled, and the enthusiastic posture that AI4D is a natural part of ICT4D is not grounded and, to a certain extent, does not serve the purpose of technology for development at all. In the context of development, it is important to emphasize and prioritize ICT4D in national digital transformation strategies, instead of borrowing "trendy" waves of the IT industry that are motivated by business considerations, with no specific care or consideration for development.
Keywords: AI, ICT4D, technology for development, position paper
Procedia PDF Downloads 84
2928 NanoFrazor Lithography for advanced 2D and 3D Nanodevices
Authors: Zhengming Wu
Abstract:
NanoFrazor lithography systems were developed as the first true alternative or extension to standard maskless nanolithography methods like electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL). Here, a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact from the probe on a thermally responsive resist generates these high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and it supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features below 10 nm resolution was demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as for improving the performance of existing device concepts. The application range for this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning at below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features.
Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no developing or any other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the various unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.
Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits
Procedia PDF Downloads 71
2927 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication due to human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to color images from satellites. However, accurate estimation of algal blooms remains a challenge because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images, which capture the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, respectively, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure) as the database to exploit the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training of the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing algorithms and optical algorithms. The model had better performance in estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
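The regression stage of the pipeline (radiance features in, concentration out) can be reduced to its simplest form: a single linear neuron fitted by gradient descent. The feature values and targets below are synthetic, and the real model extracts its features with a CNN first; this sketch only illustrates the gradient-based fitting used in the ANN stage.

```python
# Toy stand-in for the ANN regression stage: one linear neuron fitted by
# stochastic gradient descent on synthetic radiance features.
def fit_linear(features, targets, lr=0.1, epochs=1000):
    """Fit weights and bias by per-sample squared-error gradient steps."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, targets):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# synthetic water-leaving radiances at 443/490/660 nm vs. algae concentration
X = [[0.2, 0.3, 0.1], [0.5, 0.6, 0.4], [0.8, 0.9, 0.7]]
y = [1.0, 2.0, 3.0]
w, b = fit_linear(X, y)
pred = sum(wi * xi for wi, xi in zip(w, [0.5, 0.6, 0.4])) + b
```

Backpropagation in the full CNN+ANN model applies this same error-gradient update through many layers rather than one.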
Procedia PDF Downloads 182
2926 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, and detect dependencies between the covariates and the given target, as well as assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of the same production conditions in the production of hydraulic valves within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing.
In addition, the most suitable methods are selected and accurate quality predictions are achieved.
Keywords: classification, machine learning, predictive quality, feature selection
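The boosting step named in the abstract can be illustrated with a minimal, self-contained AdaBoost over decision stumps. The abstract publishes no code, so everything below — the synthetic data, the stump learner, the round count — is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def stump_predict(X, feat, thresh, sign):
    # sign=+1: predict +1 where X[:, feat] > thresh, else -1
    return sign * np.where(X[:, feat] > thresh, 1.0, -1.0)

def adaboost_train(X, y, n_rounds=10):
    """AdaBoost with decision stumps; y must be in {-1, +1}.
    Returns a list of (alpha, feature, threshold, sign) tuples."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)           # sample weights, start uniform
    model = []
    for _ in range(n_rounds):
        best = None
        for feat in range(d):         # exhaustive stump search
            for thresh in np.unique(X[:, feat]):
                for sign in (1.0, -1.0):
                    pred = stump_predict(X, feat, thresh, sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)       # stump vote weight
        pred = stump_predict(X, feat, thresh, sign)
        w *= np.exp(-alpha * y * pred)              # upweight mistakes
        w /= w.sum()
        model.append((alpha, feat, thresh, sign))
    return model

def adaboost_predict(model, X):
    score = sum(a * stump_predict(X, f, t, s) for a, f, t, s in model)
    return np.sign(score)
```

Counting how often each feature is chosen by the stumps gives a crude feature-importance proxy, in the spirit of the feature-selection step the abstract describes.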
Procedia PDF Downloads 161
2925 Great Art for Little Children - Games in School Education as Integration of Polish-Language, Eurhythmics, Artistic and Mathematical Subject Matter
Authors: Małgorzata Anna Karczmarzyk
Abstract:
Who is the contemporary child? What are his/her distinctive features that make him/her different from earlier generations? And how should one teach in this dissimilar social reality? These questions constitute the key to my reflections on contemporary early school education, for, to my mind, games have become highly significant for the modern model of education. Publications and research have emerged that employ games to increase competence in business, tutoring, and coaching, as well as in academic education. Thanks to games, students and subordinates can be taught such abilities as problem solving, creativity, consistent fulfillment of goals, resourcefulness and communication skills.
Keywords: games, art, children, school education, integration
Procedia PDF Downloads 853
2924 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract:
In this paper, a comparison of k-Nearest Neighbor (kNN) algorithms for classifying the 3D EEG model in brain balancing is presented. The EEG signal recording was conducted on 51 healthy subjects. Development of the 3D EEG models involves pre-processing of raw EEG signals and construction of spectrogram images, from which maximum PSD values were extracted as features. There are three indexes for the balanced brain: index 3, index 4 and index 5. There are significant differences in the EEG signals across the brain balancing index (BBI). Alpha (α, 8–13 Hz) and beta (β, 13–30 Hz) bands were used as input signals for the classification model. The k-NN classification achieved 88.46% accuracy. These results show that k-NN can be used to predict the brain balancing application.
Keywords: power spectral density, 3D EEG model, brain balancing, kNN
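The classification step can be sketched with a plain Euclidean-distance k-NN voter. The feature vectors and labels below are hypothetical stand-ins for the extracted maximum-PSD alpha/beta features and the BBI indexes; the authors' actual pipeline is not published with the abstract:

```python
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Classify one feature vector by majority vote among the k
    nearest training samples under Euclidean distance."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest_labels = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest_labels, return_counts=True)
    return labels[np.argmax(counts)]
```

With two well-separated clusters of (alpha-PSD, beta-PSD) pairs labelled with two BBI indexes, a query near either cluster is assigned that cluster's index by majority vote.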
Procedia PDF Downloads 484
2923 Effects of Cellular Insulin Receptor Stimulators with Alkaline Water on Performance, Plasma Cholesterol, Glucose, Triglyceride Levels and Hatchability in Breeding Japanese Quail
Authors: Rabia Göçmen, Gülşah Kanbur, Sinan Sefa Parlat
Abstract:
The aim of this study is to determine the effects of cellular insulin receptor stimulators on performance, plasma glucose, high density lipoprotein (HDL), low density lipoprotein (LDL), total cholesterol, triglyceride, triiodothyronine (T3) and thyroxine (T4) hormone levels, and incubation features in breeding Japanese quails (Coturnix japonica). In the study, a total of 84 six-week-old breeding quails was used, 24 male and 60 female. The rations used in the experiment provided 2900 kcal/kg metabolic energy and 20% crude protein, and the water pH was calibrated to 7.45. Ration and water were administered ad libitum to the animals. Metformin-HCl was used as the metformin source and chromium picolinate as the chromium source. The trial groups were formed as a control group (basal ration), a metformin group (basal ration with metformin added at 20 mg/kg of feed), and a chromium picolinate group (basal ration with 1500 ppb Cr added to the feed). Regarding the performance results at the end of the experiment, live weight gain, feed consumption, egg weight, feed conversion ratio (feed consumption/egg weight), and egg production were affected at a significant level (p < 0.05). In terms of incubation features, hatchability and hatchability of fertile eggs were not affected by the treatments. The fertility ratio was significantly affected by the metformin and chromium picolinate treatments, rising significantly compared to the control group (p < 0.05). According to the results of the experiment, the plasma glucose level was not affected by the metformin and chromium picolinate treatments, whereas plasma total cholesterol, HDL, LDL, and triglyceride levels were significantly affected by the insulin receptor stimulators added to the ration (p < 0.05). 
Plasma T3 and T4 hormone levels were also significantly affected by the insulin receptor stimulators added to the ration (p < 0.05).
Keywords: chromium picolinate, cholesterol, hormone, metformin, quail
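The statistical comparison the abstract relies on (an independent t-test at p < 0.05) can be sketched as a pooled-variance two-sample t statistic. This is the generic textbook formula, not the authors' analysis script, and the sample values in the usage note are illustrative:

```python
import numpy as np

def independent_t(a, b):
    """Student's two-sample t statistic with pooled variance.
    Returns (t statistic, degrees of freedom)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2
```

The t statistic is then compared against the critical value of the t distribution at the chosen significance level (here p < 0.05) and the returned degrees of freedom.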
Procedia PDF Downloads 218
2922 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performances and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. Therefore, it is important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% each). The remaining 80% of models are then classified by the trained SVM, and we select the models on the low-WOPR-error side. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models that share the geological trend of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results; it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by capturing the proper channel trend. Furthermore, it gives dependable prediction of future performances with reduced uncertainties. 
We propose a novel classification scheme that integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models that share the channel trend of the reference in the lower-dimensional space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
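The first two stages of the scheme — PCA to extract dominant parameters and classical MDS to embed models in a 2-D plane from Euclidean distances — can be sketched as follows. This is a generic illustration: the SVM stage and the reservoir permeability data are omitted, and all array shapes are assumptions:

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X (one model per row) onto the leading
    principal components, computed via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)   # S sorted descending
    return Xc @ Vt[:n_components].T

def classical_mds(D, n_dims=2):
    """Classical MDS: recover a point configuration from a matrix of
    pairwise Euclidean distances by double-centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(B)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_dims]       # keep the largest ones
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

For distances that are exactly Euclidean (as in the abstract's setup), classical MDS reproduces the original pairwise distances in the embedded plane, so the 2-D map is faithful to the model-to-model dissimilarities the SVM is trained on.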
Procedia PDF Downloads 158
2921 Automatic Checkpoint System Using Face and Card Information
Authors: Kriddikorn Kaewwongsri, Nikom Suvonvorn
Abstract:
In the deep south of Thailand, checkpoints for person verification are necessary for the security management of risk zones, such as official buildings in the conflict area. In this paper, we propose an automatic checkpoint system that verifies persons using information from ID cards and facial features. Methods for extracting and verifying a person's information are introduced, based on useful information such as the ID number and name extracted from official cards, and facial images from videos. The proposed system shows promising results and has a real impact on the local society.
Keywords: face comparison, card recognition, OCR, checkpoint system, authentication
Procedia PDF Downloads 320
2920 The Role of Strategic Metals in Cr-Al-Pt-V Composition of Protective Bond Coats
Authors: A. M. Pashayev, A. S. Samedov, T. B. Usubaliyev, N. Sh. Yusifov
Abstract:
Different types of coating technologies are widely used for gas turbine blades. Thermal barrier coatings, consisting of a ceramic top coat, a thermally grown oxide and a metallic bond coat, are used for the thermal protection of hot-section components in gas turbine engines. The operational characteristics and longevity of high-temperature turbine blades substantially depend on the right choice of composition of the protective thermal barrier coating. When choosing the composition of a coating and the content of its basic elements, the following factors must be considered: minimal differences between the thermal expansion coefficients of the elements; the operating temperatures and the composition of the oxidizing environment, which define the conditions for the formation of protective layers; the intensity of diffusion processes and the rate at which the protective properties of the elements degrade; the extent of the influence on the fatigue durability of components during operation; and the use of elements with high thermal stability, satisfactory resistance to gas corrosion, and suitable density, hardness, thermal conductivity and other physical characteristics. When forecasting and choosing a thermal barrier coating composition, not all of the above factors can be considered at the same time, as some of these characteristics must be determined by experimental studies. The studies and investigations carried out show that one of the main failure modes of coatings used on gas turbine blades is related to not fully taking the physical-chemical features of the elements into consideration when determining the composition of the alloys. This leads to the formation of a more complex spatial structure whose composition also changes chaotically within certain concentration intervals, which does not promote the thermal and structural stability of the coating. 
To increase the thermal and structural resistance of gas turbine blade coatings, a new approach to forecasting the composition is offered, based on analysis of the physical-chemical characteristics of the alloys, taking into account the size factor, electron configuration, crystal lattice type, and the Darken-Gurry method. As a result of calculations and experimental investigations, a new chromium-based four-component metallic bond coat for gas turbine blades is offered.
Keywords: gas turbine blades, thermal barrier coating, metallic bond coat, strategic metals, physical-chemical features
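The Darken-Gurry method mentioned above screens candidate alloying elements by plotting electronegativity against atomic radius; a common rule of thumb accepts solutes whose radius lies within about ±15% of the solvent's and whose electronegativity differs by no more than about 0.4 units. A hedged sketch of that screen follows — the tolerance values and the radii/electronegativities in the test are illustrative textbook-style numbers, not data from this paper:

```python
def darken_gurry_compatible(r_solvent, x_solvent, r_solute, x_solute,
                            radius_tol=0.15, chi_tol=0.4):
    """Darken-Gurry screening: a solute is a plausible substitutional
    addition when its atomic radius lies within +/-radius_tol (fractional)
    of the solvent's and its electronegativity within +/-chi_tol units."""
    size_ok = abs(r_solute - r_solvent) / r_solvent <= radius_tol
    chi_ok = abs(x_solute - x_solvent) <= chi_tol
    return size_ok and chi_ok
```

On a Darken-Gurry map this corresponds to an ellipse (here simplified to a box) centered on the solvent element; candidates falling outside it are screened out before any experimental work.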
Procedia PDF Downloads 314
2919 From the Local to the Global: New Terrorism
Authors: Shamila Ahmed
Abstract:
The paper examines how fluidity between the local level and the global level is an intrinsic feature of new terrorism. Through cosmopolitanism, the narratives of the two opposing sides, ISIS and the ‘war on terrorism’ response, are explored. It is demonstrated how the fluidity between these levels facilitates the radicalisation process, by exploring how groups such as ISIS highlight perceived injustices against Muslims locally and globally and thereby exploit the globalisation process, which has reduced the space between these levels. Similarly, it is argued that the ‘war on terror’ involves the intersection of fear, security, threat, risk and social control as features of both the international ‘war on terror’ and intra-state policies.
Keywords: terrorism, war on terror, cosmopolitanism, global level terrorism
Procedia PDF Downloads 582
2918 A Resolution on Ideal University Teachers Perspective of Turkish Students
Authors: Metin Özkan
Abstract:
In the last decade, Turkish higher education has expanded dramatically. With this expansion, Turkey has come a long way in establishing an efficient system of higher education, which is moving into a ‘mass’ system with institutions spanning the whole country. This expansion as a quantitative target leads to questioning the quality of higher education services. In particular, the quality of higher education services depends mainly on the quality of the educators, which is all the more important in the Turkish higher education system due to the rapid rise in the number of universities and students. Therefore, it is important to reveal the portrait of the ideal university teacher from the point of view of students enrolled in the Turkish higher education system. The purpose of this current study is to determine the portrait of the ideal university teacher according to the views of Turkish students. The research was carried out with a descriptive survey method combining qualitative and quantitative methodologies. The qualitative data were collected at Gaziantep University with the participation of 45 students enrolled in 15 different faculties; the quantitative section was performed on 217 students. The data were obtained through semi-structured interviews and an “Ideal University Teacher Assessment” form developed by the researcher. The interview form consists of basically two parts. The first part of the interview was about personal information; the second part included questions about the characteristics of the ideal university teacher: “What is a good university teacher like?” and “What human qualities and professional skills should a university teacher have?”. An assessment form created from the qualitative interview data was used to obtain scaling values for pairwise comparison and ranking judgments. 
According to the study results, it was found that ideal university teacher characteristics include features such as being patient, tolerant and comprehensive. The ideal university teacher also employs teaching methods such as encouraging the students’ critical thinking, accepting the students’ recommendations on how to conduct the lesson, and making use of new technologies. Motivating and respecting the students, adopting a participative style and adopting a sincere manner also characterize the ideal university teacher’s relationships with students.
Keywords: faculty, higher education, ideal university teacher, teacher behavior
Procedia PDF Downloads 207
2917 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. 
The accuracy is not degraded by the AWGN: the method still produces ~93% accuracy at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
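The atomic-decomposition step can be sketched with a greedy matching pursuit over a fixed dictionary. Here an identity dictionary stands in for the learned Gabor/autoencoder atoms, which are not published with the abstract; the sparse weight vector that results corresponds to the "weight space" T-F vector described above:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy matching pursuit. Rows of `dictionary` are unit-norm atoms.
    At each step, pick the atom most correlated with the residual,
    accumulate its coefficient, and subtract its contribution.
    Returns a sparse weight vector over the dictionary."""
    residual = signal.astype(float).copy()
    weights = np.zeros(dictionary.shape[0])
    for _ in range(n_atoms):
        corr = dictionary @ residual          # correlation with each atom
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        weights[k] += corr[k]
        residual -= corr[k] * dictionary[k]   # peel off its contribution
    return weights
```

For an orthonormal dictionary the recovery is exact; with an overcomplete learned dictionary the same loop yields the denoised, compressed representation whose populated indices drive the classification.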
Procedia PDF Downloads 288
2916 New Approach in Sports Management of Great Sports Events
Authors: Taieb Kherafa Noureddine
Abstract:
The paper presents a new approach to management in sports based on the principles of reengineering. By applying this modern management system, called reengineering, to sports activity, we hope to achieve better and better results, in order to increase both the health state and the performance of trained athletes. The paper also presents the similarities between BPR (Business Process Reengineering) and sports management, as well as a proposed solution for a proper implementation of such a model of management. The five components of the basic BPR model are presented, together with their features for sports management.
Keywords: business process reengineering, great sports events, sports management, training activities
Procedia PDF Downloads 491
2915 Imaging Features of Hepatobiliary Histiocytosis
Authors: Ayda Youssef, Tarek Rafaat, Iman zaky
Abstract:
Purpose: Langerhans’ cell histiocytosis (LCH) is a not uncommon pathology that involves aberrant proliferation of a specific dendritic (Langerhans) cell. These atypical but mature cells of monoclonal origin can infiltrate many sites of the body and may occur as localized lesions or as widespread systemic disease. The liver is one of the uncommon sites of involvement. The twofold objective of this study is to illustrate the radiological presentation of this disease and to compare the results with previously reported series. Methods and Materials: Between 2007 and 2012, 150 patients with biopsy-proven LCH were treated in our hospital, a paediatric cancer tertiary care center. A retrospective review of radiographic images and reports was performed, and the 33 patients with liver involvement were stratified. All patients underwent imaging studies, mostly US and CT. A chart review was performed to obtain demographic, clinical and radiological data, which were analyzed and compared to other published series. Results: Of the 150 patients with LCH assessed retrospectively, 33 were identified who had liver involvement, and all of these developed multisystemic disease. They were 12 females and 21 males; seven of them had marked hepatomegaly. Diffuse hypodense liver parenchyma was encountered in five cases. The periportal location showed a certain predilection in cases of focal involvement: three cases had hypodense periportal soft tissue sheets, one of them associated with dilated biliary radicals, and only one case had multiple focal lesions unrelated to portal tracts. On follow-up, two cases showed abnormal liver morphology with a bossy outline. Conclusion: LCH is a not infrequent disease, and a high index of suspicion should be maintained in the context of diagnosing liver involvement. Biopsy is recommended in the presence of radiological suspicion. Chemotherapy is the preferred therapeutic modality. 
The imaging features of liver histiocytosis are not disease-specific and should be interpreted in conjunction with the clinical history and the results of biopsy. Clinical Relevance/Application: Radiologists should be aware of the different patterns of hepatobiliary histiocytosis, so that early diagnosis and proper management of the patient can be conducted.
Keywords: langerhans’ cell histiocytosis, liver, medical and health sciences, radiology
Procedia PDF Downloads 281
2914 Management and Marketing Implications of Tourism Gravity Models
Authors: Clive L. Morley
Abstract:
Gravity models and panel data modelling of tourism flows are receiving renewed attention, after decades of general neglect. Such models have quite different underpinnings from conventional demand models derived from micro-economic theory. They operate at a different level of data and with different theoretical bases. These differences have important consequences for the interpretation of the results and their policy and managerial implications. This review compares and contrasts the two model forms, clarifying the distinguishing features and the estimation requirements of each. In general, gravity models are not recommended for use to address specific management and marketing purposes.
Keywords: gravity models, micro-economics, demand models, marketing
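A tourism gravity model is typically specified as T_ij = k · O_i^a · D_j^b / dist_ij^c and estimated by ordinary least squares on logarithms. A minimal sketch of that estimation — the variable names and the synthetic data are illustrative, not drawn from the review:

```python
import numpy as np

def fit_gravity(flows, mass_origin, mass_dest, distance):
    """Log-linear OLS fit of T = k * O^a * D^b / dist^c.
    Returns the coefficient vector [log k, a, b, -c]."""
    y = np.log(flows)
    X = np.column_stack([np.ones_like(y),
                         np.log(mass_origin),
                         np.log(mass_dest),
                         np.log(distance)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

This log-linearization is what distinguishes gravity estimation from micro-economic demand models: the "masses" (e.g. population or GDP of origin and destination) and distance enter multiplicatively, so elasticities are read directly off the fitted exponents.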
Procedia PDF Downloads 436
2913 Development of a New Device for Bending Fatigue Testing
Authors: B. Mokhtarnia, M. Layeghi
Abstract:
This work presents an original bending fatigue-testing setup for the fatigue characterization of composite materials. A three-point quasi-static setup is introduced that is capable of applying stress-controlled loads in different loading waveforms, frequencies, and stress ratios. The setup is equipped with computerized measuring instruments to evaluate fatigue damage mechanisms. A detailed description of its different parts and working features is given, and a dynamic analysis is carried out to verify the functional accuracy of the device. Feasibility was validated successfully by conducting experimental fatigue tests.
Keywords: bending fatigue, quasi-static testing setup, experimental fatigue testing, composites
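For a rectangular specimen in three-point bending, the standard beam formulas give the mid-span flexural stress σ = 3FL/(2bh²) and the mid-span deflection δ = FL³/(48EI) with I = bh³/12. A small sketch of those textbook relations — the numeric inputs in the test are illustrative SI values, not measurements from the device described above:

```python
def three_point_bending_stress(force, span, width, height):
    """Maximum flexural stress at mid-span of a rectangular beam (Pa):
    sigma = 3*F*L / (2*b*h^2)."""
    return 3.0 * force * span / (2.0 * width * height ** 2)

def three_point_deflection(force, span, e_modulus, width, height):
    """Mid-span deflection (m): delta = F*L^3 / (48*E*I),
    with second moment of area I = b*h^3 / 12 for a rectangle."""
    inertia = width * height ** 3 / 12.0
    return force * span ** 3 / (48.0 * e_modulus * inertia)
```

In a stress-controlled fatigue test like the one described, the first relation converts the target stress amplitude into the force command sent to the actuator, while the second predicts the deflection range the measuring instruments should observe for an undamaged specimen.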
Procedia PDF Downloads 131