Search results for: calculation tasks
72 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia
Authors: Nino Paresashvili, Nino Abesadze
Abstract:
The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries. The current processes in the global environment have a different impact on countries with different cultures. However, alleviation of poverty and improvement of living conditions remain the basic challenges for the majority of countries, because much of the population still lives under the official poverty threshold. It is very important to stimulate youth employment. In order to prepare young people for the labour market, it is essential to provide them with the appropriate professional skills and knowledge. It is necessary to plan efficient activities for decreasing the unemployment rate and to develop sound mechanisms for regulating the labour market. Such planning requires thorough study and analysis of the existing reality, as well as development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for developing the key mechanisms of labour market regulation. The corresponding statistical methods should be used in the study process: observation, data gathering, grouping, and calculation of generalized indicators. Unemployment is one of the most severe socioeconomic problems in Georgia. According to past as well as current statistics, unemployment rates have always been the most problematic issue for policy makers to resolve. Analytical work on the above-mentioned problem will be the basis for the next sustainable steps towards solving it. The results of the study showed that the career choices of young people are often driven neither by their inclinations and interests nor by labour market demand. That is why the wrong professional orientation of young people in most cases leads to their unemployment. At the same time, it was shown that a number of professions in the labour market are in high demand because of a deficit of the appropriate specialists. To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programs that take into account regional infrastructure specifications.
Keywords: Unemployment, analysis, methods, tendencies, regulation mechanisms.
71 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case when an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject identity, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
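A minimal numpy sketch may help picture the [zP, zN] construction. Here a fixed invertible linear map stands in for the trained normalizing flow, and a toy linear predictor of x stands in for the elementary predictive model; the dimensions, the map and the sampling scheme are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_y = 2000, 4

# zP: low-dimensional latent tied to the predictor x (stand-in for the
# posterior of an elementary model); zN: independent Gaussian "noise" latent.
x = rng.normal(size=n)
z_p = 2.0 * x
z_n = rng.normal(size=(n, d_y - 1))
z = np.column_stack([z_p, z_n])

# A fixed invertible linear map stands in for the trained flow: y = f(z).
A = rng.normal(size=(d_y, d_y))
y = z @ A.T

# Conditional sampling of y given a new x*: fix zP, resample only zN.
x_star = 1.5
z_cond = np.column_stack([np.full((500, 1), 2.0 * x_star),
                          rng.normal(size=(500, d_y - 1))])
y_samples = z_cond @ A.T
print(y_samples.mean(axis=0))   # x-related variation enters only through zP
```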
70 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm
Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou
Abstract:
Quantitative research on the main control factors of lost circulation has so far considered few factors and relied on a single data source. Using the Unmanned Intervention Algorithm to find the main control factors of lost circulation makes use of all measurable parameters. The degree of lost circulation is characterized by the loss rate as the objective function. Geological, engineering and fluid data are used as layers, and 27 factors such as wellhead coordinates and Weight on Bit (WOB) are used as dimensions. Data classification is implemented to determine the independent variables of the function. The mathematical equation relating the loss rate to the 27 influencing factors is established by the multiple regression method, and the undetermined coefficient method is used to solve the undetermined coefficients of the equation. Only three factors have t-test values greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. The funnel viscosity, final shear force and drilling time were selected as the main control factors by the elimination method, the contribution rate method and the functional method. The calculated values of the two wells used for verification differ from the actual values by -3.036 m3/h and -2.374 m3/h, with errors of 7.21% and 6.35%. The influence of engineering factors on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of the three factors is less than that of geological factors. The best combination of funnel viscosity, final shear force and drilling time is obtained through quantitative calculation. The minimum loss rate of lost circulation wells in the Shunbei area is 10 m3/h. It can be seen that controllable main control factors can only slow down the leakage but cannot fundamentally eliminate it. This is more in line with the characteristics of karst caves and fractures in the Shunbei fault-solution oil and gas reservoir.
Keywords: Drilling fluid, loss rate, main controlling factors, Unmanned Intervention Algorithm.
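The core of the procedure, multiple regression with per-coefficient t-statistics used to rank 27 candidate factors, can be sketched on synthetic stand-in data (the factor values, true coefficients and sample size below are assumptions, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 27                       # samples, candidate factors

X = rng.normal(size=(n, k))
beta_true = np.zeros(k)
beta_true[:3] = [1.5, -0.8, 0.6]     # e.g. funnel viscosity, final shear, drilling time
y = X @ beta_true + rng.normal(scale=0.5, size=n)   # loss rate

# Ordinary least squares with an intercept, then per-coefficient t-statistics.
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
sigma2 = resid @ resid / (n - k - 1)
cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
t_stats = beta / np.sqrt(np.diag(cov))

# Factors with the largest |t| are the candidate main control factors.
order = np.argsort(-np.abs(t_stats[1:]))
print("top factors:", order[:3], "t-values:", t_stats[1:][order[:3]].round(2))
```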
69 Induced Affectivity and Impact on Creativity: Personal Growth and Perceived Adjustment when Narrating an Intense Emotional Experience
Authors: S. Da Costa, D. Páez, F. Sánchez
Abstract:
We examine the causal role of positive affect on creativity, the association of creativity or innovation in the ideation phase with functional emotional regulation, successful adjustment to stress and dispositional emotional creativity, as well as the predictive role of creativity for positive emotions and social adjustment. The study examines the effects of the modification of positive affect on creativity. Participants write three poems, narrate an infatuation episode, complete a personal growth scale about this episode, perform a creativity task, complete a flow scale after the creativity task, and fill in a dispositional emotional creativity scale. High and low positive affect was induced by asking subjects to write three poems about stimuli with high and low positive connotations. In a neutral condition, the tasks were performed without previous affect induction. Subjects in the high positive affect condition report more positive and fewer negative emotions and more personal growth (effect size r = .24), and their last poem was rated as more original by judges (effect size r = .33). Mediational analysis showed that positive emotions explain the influence of the manipulation on personal growth; positive affect correlates r = .33 with personal growth. The emotional creativity scale correlated with the creativity scores of the creative task (r = .14) and with the creativity of the narration of the infatuation episode (r = .21). Emotional creativity was also associated, while performing the creativity task, with flow (r = .27) and with affect balance (r = .26). The mediational analysis showed that emotional creativity predicts flow through positive affect. The results suggest that innovation in the ideation phase is associated with a positive affect balance and satisfactory performance, and that dispositional emotional creativity is adaptive.
Keywords: Affectivity, creativity, induction, innovation, psychological factors.
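For readers unfamiliar with the mediational analyses cited twice above, a minimal sketch of one such test (indirect effect a·b with a Sobel z) follows. It is deliberately simplified, the b path is not adjusted for the experimental condition as a full Baron-Kenny analysis would be, and all data are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 120

condition = rng.integers(0, 2, size=n).astype(float)   # 0 = neutral, 1 = high positive affect
emotions = 0.5 * condition + rng.normal(size=n)        # mediator: positive emotions
growth = 0.6 * emotions + rng.normal(size=n)           # outcome: personal growth

def slope_and_se(x, y):
    """Slope and standard error from a simple regression of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - 2)
    return b[1], np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

a, sa = slope_and_se(condition, emotions)   # path a: condition -> mediator
b, sb = slope_and_se(emotions, growth)      # path b: mediator -> outcome
z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
print(f"indirect effect = {a * b:.3f}, Sobel z = {z:.2f}, "
      f"p = {2 * stats.norm.sf(abs(z)):.4f}")
```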
68 Capacity Enhancement for Agricultural Workers in Mangosteen Product
Authors: Cholpassorn Sitthiwarongchai, Chutikarn Sriviboon
Abstract:
The two primary objectives of this research were (1) to examine the current knowledge and actual circumstances of agricultural workers regarding mangosteen product processing; and (2) to analyze and evaluate ways to develop the capacity for mangosteen product processing. The population of this study was 15,125 people who work in the agricultural sector, in this context mangosteen production, in the eastern part of Thailand, covering Chantaburi Province, Rayong Province, Trad Province and Pracheenburi Province. The sample size, based on Yamane's calculation with 95% reliability, was therefore 392 samples. A mixed method was employed, including a questionnaire and a focus group discussion based on the Connoisseurship Model, in order to collect quantitative and qualitative data. Key informants were used in the focus group, including agricultural business owners, academics in agro food processing, local academics, local community development staff, the OTOP subcommittee, and representatives of agro processing industry professional organizations. The study found that the majority of the respondents agreed at a high level (on a five-point rating scale) with most of the variables of knowledge management in agro food processing. Regarding the current knowledge and actual circumstances of the agricultural workforce in mangosteen product processing, the respondents mostly agreed at a high level on the seven variables examined. The guideline for developing the body of knowledge in order to enhance the capacity of the agricultural workers in mangosteen product processing was delivered in the focus group discussion. The discussion finally contributed to an idea to produce manuals for mangosteen product processing methods, with 4 products chosen: (1) mangosteen soap; (2) mangosteen juice; (3) mangosteen toffee; and (4) mangosteen preserves or jam.
Keywords: Capacity Enhancement, Agricultural Workers, Mangosteen Product Processing.
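Yamane's simplified formula, n = N / (1 + N e²), is behind the sample-size figure above. A quick check, assuming the customary 5% margin of error (the abstract states only 95% reliability):

```python
import math

def yamane_sample_size(population: int, margin_of_error: float) -> int:
    """Taro Yamane's simplified formula: n = N / (1 + N * e^2)."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# With N = 15,125 and e = 0.05 the formula gives 390; the abstract reports
# 392, so the authors presumably rounded differently or adjusted the margin.
print(yamane_sample_size(15125, 0.05))
```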
67 Simulation Data Management Approach for Developing Adaptronic Systems – The W-Model Methodology
Authors: Roland S. Nattermann, Reiner Anderl
Abstract:
Existing proceeding-models for the development of mechatronic systems prescribe largely parallel work in the detailed development, carried out largely independently within the various disciplines involved. The approach for a new proceeding-model presented here further develops existing models for use in the development of adaptronic systems. This approach is based on an intermediate integration and an abstract modeling of the adaptronic system. Based on this system model, a simulation of the global system behavior, due to external and internal factors or forces, is developed. For the intermediate integration, a special data management system is used. According to the presented approach, this data management system has a number of functions that are not part of the "normal" PDM functionality. Therefore, a concept for a new data management system for the development of adaptronic systems is presented in this paper. This concept divides the functions into six layers. In the first layer, a system model is created, which divides the adaptronic system based on its components and the various technical disciplines. Moreover, the parameters and properties of the system are modeled and linked together with the requirements and the system model. The modeled parameters and properties result in a network which is analyzed in the second layer. From this analysis, necessary adjustments to individual components for specific manipulation of the system behavior can be determined. The third layer contains an automatic abstract simulation of the system behavior. This simulation is a precursor to the network analysis and serves as a filter. Through the network analysis and simulation, changes to system components are examined and necessary adjustments to other components are calculated. The other layers of the concept cover the automatic calculation of system reliability, the "normal" PDM functionality and the integration of discipline-specific data into the system model. A prototypical implementation of an appropriate data management system, with the addition of automatic system development, is being carried out using the data management system ENOVIA SmarTeam V5 and the simulation system MATLAB.
Keywords: Adaptronic, data management, LOEWE-Centre AdRIA.
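The parameter/property network analyzed in the second layer can be pictured as a dependency graph over which changes are propagated. The sketch below uses entirely hypothetical parameter names; it only illustrates the kind of traversal such an analysis performs:

```python
from collections import deque

# Hypothetical layer-1 network: an edge means "changing this parameter
# affects that one".
dependencies = {
    "piezo_actuator.voltage": ["beam.damping"],
    "beam.damping": ["system.vibration_amplitude"],
    "sensor.gain": ["controller.loop_gain"],
    "controller.loop_gain": ["system.vibration_amplitude"],
}

def affected(start: str) -> list[str]:
    """Breadth-first search: every parameter influenced by changing `start`."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        for nxt in dependencies.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                order.append(nxt)
    return order

print(affected("piezo_actuator.voltage"))
# -> ['beam.damping', 'system.vibration_amplitude']
```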
66 Lexical Based Method for Opinion Detection on Tripadvisor Collection
Authors: Faiza Belbachir, Thibault Schienhinski
Abstract:
The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinions, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommender systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data. The difficulty of this task is to determine an approach which returns opinionated documents. Generally, two kinds of approaches are used for opinion detection, i.e., lexical based approaches and machine learning based approaches. In lexical based approaches, a dictionary of sentiment words is used, in which words are associated with weights. The opinion score of a document is derived from the occurrences of words from this dictionary. In machine learning approaches, usually a classifier is trained using a set of annotated documents containing sentiment, and features such as n-grams of words, part-of-speech tags, and logical forms. The majority of these works are based on document text to determine the opinion score, but do not take into account whether these texts are really trustworthy. Thus, it is interesting to exploit other information to improve opinion detection. In our work, we develop a new way to compute the opinion score. We introduce the notion of a trust score. We determine not only opinionated documents but also whether these opinions are really trustworthy information in relation to the topics. For that, we use the SentiWordNet lexicon to calculate opinion scores, and we compute different features about users (number of comments, number of useful comments, average useful reviews) to derive a trust score. After that, we combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TRIPADVISOR collection. Our experimental results report that the combination of opinion score and trust score improves opinion detection.
Keywords: Tripadvisor, opinion detection, SentiWordNet, trust score.
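A toy sketch of the scoring pipeline described above follows; the mini-lexicon, the trust heuristic and the combination weight alpha are illustrative assumptions, since the abstract does not specify them:

```python
# Toy lexicon standing in for SentiWordNet sentiment strengths.
LEXICON = {"great": 0.8, "terrible": -0.9, "clean": 0.5, "noisy": -0.4}

def opinion_score(text: str) -> float:
    """Share of subjective words in the text, weighted by lexicon strength."""
    words = text.lower().split()
    hits = [abs(LEXICON[w]) for w in words if w in LEXICON]
    return sum(hits) / len(words) if words else 0.0

def trust_score(n_comments: int, n_useful: int, avg_useful: float) -> float:
    """Hypothetical heuristic built from the three user features named above."""
    usefulness = n_useful / n_comments if n_comments else 0.0
    return 0.5 * usefulness + 0.5 * min(avg_useful, 1.0)

def final_score(text, n_comments, n_useful, avg_useful, alpha=0.6):
    """Linear combination of opinion and trust (alpha is an assumed weight)."""
    return (alpha * opinion_score(text)
            + (1 - alpha) * trust_score(n_comments, n_useful, avg_useful))

print(final_score("great location but noisy at night", 40, 25, 0.7))
```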
65 Lightweight and Seamless Distributed Scheme for the Smart Home
Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro
Abstract:
Security of the smart home in terms of behavior activity pattern recognition is a totally dissimilar and unique issue compared to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other by detecting the daily behavior activities of individuals to execute common tasks. Once a device (e.g., surveillance camera, smart phone, light detection sensor, etc.) is compromised, an adversary can get access to a specific device and can disrupt daily behavior activity by altering data and commands. In this scenario, a group of common instruction processes may get involved and generate a deadlock. Therefore, an effective and suitable security solution is required for the smart home architecture. This paper proposes a seamless distributed scheme which fortifies low-computation wireless devices for secure communication. The proposed scheme is based on a lightweight key-session process that upholds a cryptographic link for the trajectory by recognizing individuals' behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Analysis of the experiments reveals that the proposed scheme strengthens the devices against device seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of the LCS, and keeps the network free from deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme is efficient in terms of computation and communication.
Keywords: Authentication, key-session, security, wireless sensors.
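As a rough illustration of what a token-plus-session-key process can look like, here is a sketch built only from standard primitives (HMAC over a pre-shared secret, fresh nonces). It is not the paper's protocol, merely a minimal stand-in for the handshake between an LCS and an HCS:

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = secrets.token_bytes(16)   # provisioned at deployment time

def issue_token(device_id: str) -> bytes:
    """Authentication token bound to a device identity."""
    return hmac.new(SHARED_SECRET, device_id.encode(), hashlib.sha256).digest()

def session_key(token: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Lightweight per-session key derived from the token and fresh nonces."""
    return hashlib.sha256(token + nonce_a + nonce_b).digest()

# Handshake sketch: both parties exchange nonces and derive the same key;
# a seized device without the shared secret cannot reproduce the token.
token = issue_token("light_sensor_01")
na, nb = secrets.token_bytes(8), secrets.token_bytes(8)
key_lcs = session_key(token, na, nb)
key_hcs = session_key(token, na, nb)
assert key_lcs == key_hcs
print("session key established:", key_lcs.hex()[:16], "...")
```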
64 Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment
Authors: Amit Chhabra, Gurvinder Singh, Sandeep Singh Waraich, Bhavneet Sidhu, Gaurav Kumar
Abstract:
Decrease in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research using queuing analysis and assuming job arrivals following a Poisson pattern has shown that in a multi-host system the probability of one of the hosts being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved by either transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or distributing the load evenly/fairly among the hosts. The algorithms that achieve this goal are known as load balancing algorithms. These algorithms fall into two basic categories: static and dynamic. Whereas static load balancing algorithms (SLB) take decisions regarding the assignment of tasks to processors at compile time, based on average estimated values of process execution times and communication delays, dynamic load balancing algorithms (DLB) are adaptive to changing situations and take decisions at run time. The objective of this work is to identify qualitative parameters for the comparison of the above-said algorithms. In the future, this work can be extended to develop an experimental environment to study these load balancing algorithms quantitatively, based on comparative parameters.
Keywords: SLB, DLB, host, algorithm, load.
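The static/dynamic distinction shows up clearly in a few lines of simulation: a fixed random assignment (SLB-like, decided up front) versus join-the-shortest-queue at run time (DLB-like). The workload model is an assumption for illustration only:

```python
import random

random.seed(3)
N_HOSTS, N_JOBS = 4, 10_000

def simulate(dynamic: bool) -> float:
    """Assign jobs to hosts; return the max/mean load imbalance ratio."""
    load = [0.0] * N_HOSTS
    for _ in range(N_JOBS):
        if dynamic:   # DLB: pick the currently least-loaded host at run time
            host = min(range(N_HOSTS), key=lambda h: load[h])
        else:         # SLB: fixed random assignment, no run-time information
            host = random.randrange(N_HOSTS)
        load[host] += random.expovariate(1.0)   # job service demand
    return max(load) / (sum(load) / N_HOSTS)    # 1.0 = perfectly balanced

print(f"static  imbalance: {simulate(False):.3f}")
print(f"dynamic imbalance: {simulate(True):.3f}")
```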
63 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: S. Golmohammadi, M. Noorian Bidgoli
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various methods of rock engineering classification are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely the Rock Quality Designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and Stress Reduction Factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that are effective for the stability of the mentioned structures is one of the most important goals and most necessary actions in rock engineering. Therefore, it is necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, the system as a whole. In this research, an attempt has been made to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system, using the Rock Engineering System (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which one can determine the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using the conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is essentially a statistical analysis of the data, determining the correlation coefficients between them. In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the resulting interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, the SRF and Jr parameters have the maximum and minimum effect on the system, respectively, while the RQD and Jw parameters are the most and least affected by the system, respectively. Therefore, by developing this method, a more accurate relation for rock mass classification can be obtained by weighting the required parameters in the Q-system.
Keywords: Q-system, Rock Engineering System, statistical analysis, rock mass, tunnel.
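A compact sketch of the correlation-based coding of the RES interaction matrix described above, on synthetic stand-in data (the injected coupling and sample size are assumptions): row sums measure how strongly a parameter drives the system ("cause"), column sums how strongly it is driven ("effect"):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for logged tunnel sections: columns are the six Q-system parameters.
params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
data = rng.normal(size=(60, 6))
data[:, 5] += 0.7 * data[:, 0]          # inject some RQD-SRF coupling

# Code the interaction matrix with absolute correlation coefficients;
# the diagonal is zeroed (a parameter does not interact with itself).
M = np.abs(np.corrcoef(data, rowvar=False))
np.fill_diagonal(M, 0.0)

cause, effect = M.sum(axis=1), M.sum(axis=0)
for p, c, e in zip(params, cause, effect):
    print(f"{p:>4}: cause = {c:.2f}, effect = {e:.2f}")
```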
62 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models
Authors: Morten Brøgger, Kim Wittchen
Abstract:
Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of the potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail. Thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the buildings they are meant to represent. This is done for the buildings' energy demands as a whole, as well as for relevant sub-demands. Both are evaluated in relation to the type and the age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.
Keywords: Building stock energy modelling, energy-savings, archetype.
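The aggregation loss the study quantifies can be mimicked in a few lines: represent every building in a segment by the segment average and measure what the averaging hides. All figures below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stock: 1000 buildings in 6 type/age segments with a "true"
# heat demand (kWh/m2) scattered around the segment mean.
segments = rng.integers(0, 6, size=1000)
true_demand = rng.normal(120 - 8 * segments, 25)

# Archetype model: one average value represents every building in a segment.
archetype = np.array([true_demand[segments == s].mean() for s in range(6)])
predicted = archetype[segments]

for s in range(6):
    err = np.abs(predicted[segments == s] - true_demand[segments == s]).mean()
    print(f"segment {s}: mean abs. error {err:5.1f} kWh/m2")
# Stock totals agree almost exactly; the error lives at the building level.
print(f"total: archetype {predicted.sum():.0f} vs true {true_demand.sum():.0f}")
```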
61 Role of Membership Functions in Fuzzy Logic for Prediction of Shoot Length of Mustard Plant Based on Residual Analysis
Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri
Abstract:
The selection of a particular type of mustard plant for plantation depends on its productivity (pod yield) at the stage of maturity. The growth of the mustard plant depends on some parameters of the plant: shoot length, number of leaves, number of roots, root length, etc. As the plant grows, some leaves may fall and new leaves may appear, so the number of leaves cannot be used to develop a relationship with the seed weight at the mature stage of the plant. It is not possible to count the roots or measure the root length of a mustard plant at the growing stage, as this would be harmful to the plant, whose roots go deeper and deeper into the soil. Only the shoot length, which increases over time, can be measured at different time instances. Weather parameters such as maximum and minimum humidity, rainfall, and maximum and minimum temperature may affect the growth of the plant. Pollution, water, soil, distance and crop management may be dominant factors in the growth of the plant and its productivity. Considering all these parameters, the growth of the plant is very uncertain, so a fuzzy environment can be considered for the prediction of the shoot length at maturity of the plant. Fuzzification of the data plays a major role here and is based on certain membership functions. An effort has been made to fuzzify the original data based on the Gaussian function, triangular function, s-function, trapezoidal function and L-function. After that, all fuzzified data are defuzzified to get the normal form. Finally, error analysis (calculation of the forecasting error and average error) indicates which membership function is appropriate for fuzzification of the data and for use in predicting the shoot length at maturity. The result is also verified using residual analysis (Absolute Residual, Maximum of Absolute Residual, Mean Absolute Residual, Mean of Mean Absolute Residual, Median of Absolute Residual and Standard Deviation).
Keywords: Fuzzification, defuzzification, Gaussian function, triangular function, trapezoidal function, s-function, membership function, residual analysis.
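A short sketch of the ingredients: three of the candidate membership functions, and the residual battery applied to each model's predictions. The shoot-length values and the predictions (standing in for a full fuzzification/defuzzification round trip) are illustrative assumptions:

```python
import numpy as np

def gaussian(x, c, s):
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def triangular(x, a, b, c):
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

def s_function(x, a, b):
    return np.clip((x - a) / (b - a), 0, 1) ** 2

actual = np.array([12.0, 14.5, 16.2, 18.9, 21.3])   # shoot lengths, cm
print("gaussian  :", gaussian(actual, c=16, s=4).round(2))
print("triangular:", triangular(actual, 10, 16, 23).round(2))
print("s-function:", s_function(actual, 10, 23).round(2))

def residual_report(actual, predicted):
    """The residual battery named above, on absolute residuals."""
    r = np.abs(actual - predicted)
    return {"max_abs": r.max().round(3), "mean_abs": r.mean().round(3),
            "median_abs": np.median(r).round(3), "std": r.std().round(3)}

# Illustrative predictions after a fuzzify/defuzzify round trip per function.
for name, pred in {"gaussian": actual + [0.2, -0.1, 0.3, -0.2, 0.1],
                   "triangular": actual + [0.5, 0.4, -0.6, 0.3, -0.5]}.items():
    print(name, residual_report(actual, pred))
```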
60 Factors Affecting M-Government Deployment and Adoption
Authors: Saif Obaid Alkaabi, Nabil Ayad
Abstract:
Governments constantly seek to offer faster, more secure, efficient and effective services for their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way governments of advanced countries carry out their internal operations. Therefore, advances in e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on the utilization of the most innovative structures of data techniques, mainly in web-dependent applications, to enhance the main functions of governments. These functions, in turn, have spread to mobile and wireless techniques, generating a new advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. Practically, m-government models, techniques and methods have become the improved version of e-government. M-government offers the potential for applications which will work better, providing citizens with services utilizing mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation, due to the fact that a large percentage of their population is young and can adapt to new technology, and to the fact that mobile computing devices are more affordable. The use of models of mobile transactions encourages effective participation through the use of mobile portals by businesses, various organizations, and individual citizens. Although the application of m-government has great potential, it does have major limitations. These include: the implementation of wireless networks and related communications, the encouragement of mobile diffusion, the administration of complicated tasks concerning the protection of security (including the ability to offer privacy for information), and the management of the legal issues concerning mobile applications and the utilization of services.
Keywords: E-government, m-government, system dependability, system security, trust.
59 Sustainability Reporting and Performances of the Companies in the Istanbul Stock Exchange Sustainability Index
Authors: Zeynep Şahin, Züleyha Yılmaz, Fikret Çankaya
Abstract:
In today's business world, in which it is difficult to survive, the economic life of products, services or knowledge is considerably reduced. Competitors produce similar or extra-featured products almost instantly. In this environment, the contribution of companies to the social and economic environment is a criterion preferred by consumers alongside products or services. Therefore, consumers need to obtain more detailed information about companies. Besides, this drastic change in the market encourages companies to become sustainable. Sustainable business means that the company puts back what it consumes. Corporate sustainability corresponds to sustainability at the level of the company, and gives equal importance to company growth and profitability on the one hand and environmental and social issues on the other. The BIST Sustainability Index started to be calculated by the Istanbul Stock Exchange (BIST) in 2014 to evaluate the sustainability performance of companies in Turkey. The main objective of this study is to present the importance of sustainability reports in Turkey. To this aim, the performances of 15 companies in the BIST Sustainability Index were compared for the periods before and after entering the index. On the other hand, sustainability reporting practices should be encouraged to increase studies on this issue. In this context, keeping the issue on the agenda is a further objective of this study. To achieve these objectives, the financial data of the companies for the periods before and after entering the BIST Sustainability Index were analyzed using a t-test in the Statistical Package for the Social Sciences (SPSS). The results of the study showed that no significant difference could be found between the performances of the companies in terms of the net profit margin, the return on assets and the return on equity in these periods. Therefore, it can be said that insufficient importance is given to sustainability issues in Turkey. The reason for this situation might be a lack of awareness due to the recent introduction and calculation of the index. It is expected that the awareness of firms and investors about sustainability will increase, and that they will attach the necessary importance to this issue over time.
Keywords: BIST sustainability index, firm performance, sustainability, sustainability reporting.
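The before/after comparison described above is a paired-samples t-test. A minimal sketch with synthetic data for 15 companies, generated with no built-in effect so the test should come back non-significant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Illustrative net profit margins of 15 companies before/after index entry.
before = rng.normal(0.08, 0.03, size=15)
after = before + rng.normal(0.00, 0.01, size=15)   # no built-in effect

t_stat, p_value = stats.ttest_rel(before, after)   # paired-samples t-test
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("no significant before/after difference at the 5% level")
```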
58 Spectral Mixture Model Applied to Cannabis Parcel Determination
Authors: Levent Basayigit, Sinan Demir, Yusuf Ucar, Burhan Kara
Abstract:
Many research projects require accurate delineation of the different land cover types of an agricultural area. This is especially critical for identifying specific plants like cannabis. However, the complexity of vegetation stand structure, the abundance of vegetation species, and the smooth transitions between different secondary succession stages make vegetation classification difficult when using traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees, annual crops, or grain, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixed distribution model approach is applied to classify pure and mixed cannabis parcels using Worldview-2 imagery in the Lakes region of Turkey. Five different land use types (i.e., sunflower, maize, bare soil, and cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from the spectral signatures of seven bands (Blue-Green-Yellow-Red-RedEdge-NIR1-NIR2) were randomly split, 80% for training and 20% for test data. The Gaussian mixed distribution model approach proved to be an effective and convenient way of using very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall accuracies of the classification, the Gaussian mixed distribution model was found to be very successful at image classification tasks. The approach is sensitive enough to capture illegal cannabis planting areas in the large plain, and it can also be used for monitoring and detecting illegal cannabis planting areas through their spectral reflectance.
Keywords: Gaussian mixture discriminant analysis, spectral mixture model, World View-2, land parcels.
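A bare-bones sketch of mixture discriminant analysis as described: fit one Gaussian mixture per class and assign each pixel to the class whose mixture gives the highest log-likelihood (equal class priors assumed). The four-dimensional "band ratio" features are synthetic stand-ins for the 255 Worldview-2 ratios:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

def make_class(center, n=200):
    return np.asarray(center) + rng.normal(scale=0.3, size=(n, 4))

X = np.vstack([make_class([1.0, 2.0, 1.0, 3.0]),    # e.g. cannabis
               make_class([2.0, 1.0, 3.0, 1.0]),    # e.g. maize
               make_class([1.5, 1.5, 2.0, 2.0])])   # e.g. bare soil
y = np.repeat([0, 1, 2], 200)

# One mixture per class; classify by the highest class-conditional likelihood.
models = [GaussianMixture(n_components=2, random_state=0).fit(X[y == c])
          for c in range(3)]
log_lik = np.column_stack([m.score_samples(X) for m in models])
pred = log_lik.argmax(axis=1)
print(f"training accuracy: {(pred == y).mean():.3f}")
```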
57 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer technology and network technology, to civil engineering and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts, etc., is a type of work that has been carried out by skilled workers based on their years of experience, and it is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures that are designed to protect coasts, beaches, and so on from erosion by reducing the energy of ocean waves. Wave-dissipating blocks usually weigh more than 1 t and are installed by being suspended from a crane, so it would be time-consuming and costly for inexperienced workers to train on site. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the final shape of the ideal structure. Using the evaluation of porosity, the simulator can determine how well the user is able to install the blocks. The voxelization technique is used to calculate the porosity of the structure, simplifying the calculations. Other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization solutions and compare the user-demonstrated block installation with the appropriate installation found by the algorithm.
Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks.
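Voxelized porosity reduces to boolean array arithmetic. A small numpy sketch under the paper's definition; the grid size, ideal shape and random block placement are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

# Voxelize a 20 x 20 x 20 region (True = occupied).
ideal = np.zeros((20, 20, 20), dtype=bool)
ideal[:, :, :10] = True                     # ideal structure: a solid slab

# Hypothetical placed blocks: random 3x3x3 cubes inside the region.
placed = np.zeros_like(ideal)
for _ in range(25):
    x, y, z = rng.integers(0, 17, size=3)
    placed[x:x + 3, y:y + 3, z:z + 3] = True

# Porosity as defined above: block volume inside the ideal shape divided
# by the volume of the ideal shape.
porosity = (placed & ideal).sum() / ideal.sum()
print(f"porosity = {porosity:.3f}")
```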
56 An Ergonomic Evaluation of Three Load Carriage Systems for Reducing Muscle Activity of Trunk and Lower Extremities during Giant Puppet Performing Tasks
Authors: Cathy SW. Chow, Kristina Shin, Faming Wang, B. C. L. So
Abstract:
During some dynamic giant puppet performances, an ergonomically designed load carrier system is necessary for the puppeteers to carry a giant puppet body's heavy load with minimum muscle stress. A load carrier (i.e., the prototype) was designed with two small wheels on the foot and a hybrid spring device on the knee, in order to assist the sliding and knee-bending movements, respectively. Thus, the purpose of this study was to evaluate the effect of three load carriers: two commercially available load mounting systems, Tepex and SuitX, and the prototype. Ten male participants were recruited for the experiment. Surface electromyography (sEMG) was used to collect the participants' muscle activities during forward moving and bouncing, with and without a load of 11.1 kg carried 60 cm above the shoulder. Five bilateral muscles, including the lumbar erector spinae (LES), rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and gastrocnemius (GM), were selected for data collection. During the forward moving task, the sEMG data showed the smallest muscle activities with the Tepex harness, which was consistently the lowest; compared with it, the prototype and SuitX were significantly higher on the left LES by 68.99% and 64.99%, the right LES by 26.57% and 82.45%, the left RF by 87.71% and 47.61%, the right RF by 143.57% and 24.28%, the left BF by 80.21% and 22.23%, the right BF by 96.02% and 21.83%, the right TA by 6.32% and 4.47%, and the left GM by 5.89% and 12.35%, respectively. These results reflect that mobility was highly restricted by the tested exoskeleton devices. On the other hand, the sEMG data from the bouncing task showed the smallest muscle activities with the prototype, which was consistently the lowest; compared with it, the Tepex harness and SuitX were higher on the left LES by 6.65% and 104.93%, the right LES by 23.56% and 92.19%, the left BF by 33.21% and 93.26%, the right BF by 24.70% and 81.16%, the left TA by 46.51% and 191.02%, the right TA by 12.75% and 125.76%, the left GM by 31.54% and 68.36%, and the right GM by 95.95% and 96.43%, respectively.
Keywords: Exoskeleton, load carriage aid, giant puppet performers, electromyography.
55 Experimental Investigation of Visual Comfort Requirement in Garment Factories and Identify the Cost Saving Opportunities
Authors: M. A. Wijewardane, S. A. N. C. Sudasinghe, H. K. G. Punchihewa, W. K. D. L. Wickramasinghe, S. A. Philip, M. R. S. U. Kumara
Abstract:
Visual comfort is one of the major parameters that can be used to measure human comfort in any environment. If the illuminance level provided in a working environment does not meet the workers' visual comfort, it will lead to eye strain, fatigue, headache, stress, accidents and, finally, poor productivity. However, improving lighting does not necessarily mean that the workplace requires more light. Unnecessarily high illuminance levels will also cause poor visual comfort and health risks. In addition, more power consumption for lighting will also result in higher energy costs. During this study, therefore, the visual comfort and the illuminance requirements of workers in the textile/apparel industry were studied for different tasks (i.e., cutting, sewing and knitting) at their workplace. Experimental studies were designed to identify the optimum illuminance requirement depending on the fabric colour and type and, finally, the energy saving potentials of controlling the illuminance level according to the workforce requirement were analysed. The visual performance of workers during the sewing operation was studied using the Landolt ring experiment. It was revealed that around 36.3% of the workers prefer to work when the illuminance level is between 601 lux and 850 lux, and 45.9% of the workers are not happy to work if the illuminance level falls below 600 lux or rises above 850 lux. Moreover, more than 65% of the workers who are not satisfied with the existing illuminance levels of the production floors reported that they suffer from headache, eye diseases, or both, due to poor visual comfort. In addition, the findings of the energy analysis revealed that energy-saving potentials of 5%, 10%, 24%, 8% and 16% can be anticipated for the fabric colours red, blue, yellow, black and white, respectively, when 800 lux is the prevailing illuminance level for the sewing operation.
Keywords: Landolt ring experiment, lighting energy consumption, illuminance, textile and apparel industry, visual comfort.
54 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is quite large. Impairment reflects the reduction of future economic benefits or service potentials embodied in the asset. The assets owned by public sector entities bring economic benefits or are used for the delivery of free-of-charge services. Consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of Non-Cash-Generating Assets, and IPSAS 26, Impairment of Cash-Generating Assets, have been designed with this specificity in mind. When measuring the impairment of assets, it is important to select the relevant methods. For the measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the Depreciated Replacement Cost Approach, the Restoration Cost Approach, and the Service Units Approach. The value in use of cash-generating assets (according to IPSAS 26) is measured by the discounted value of the cash flows to be received in the future. The article classifies the assets in the public sector as non-cash-generating assets and cash-generating assets and also deals with the factors which should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and the methods of their selection. The traditional and the expected cash flow approaches for the calculation of the discounted value are reviewed. The article also discusses the issues of recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever method is used for measuring it, the presentation of realistic information regarding the value of the assets should be ensured in financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used. The research was carried out with a systemic approach. The research process uses international accounting standards, theoretical research and publications of Georgian and foreign scientists.
Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.
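The two discounting approaches reviewed, the traditional approach (a single best-estimate cash-flow stream, risk carried in the rate) and the expected cash flow approach (probability-weighted scenarios), can be contrasted in a few lines. The cash flows, probabilities and rates are illustrative assumptions only:

```python
def traditional_value_in_use(cash_flows, rate):
    """Single 'best estimate' stream; risk is captured in the discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def expected_value_in_use(scenarios, rate):
    """Expected cash flow approach: probability-weighted cash flows per year."""
    expected = [sum(p * cf for p, cf in year) for year in scenarios]
    return traditional_value_in_use(expected, rate)

print(round(traditional_value_in_use([100, 100, 100], rate=0.10), 2))
# Each year is a list of (probability, cash flow) pairs.
print(round(expected_value_in_use([[(0.6, 120), (0.4, 70)],
                                   [(0.6, 120), (0.4, 70)]], rate=0.05), 2))
```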
53 Analysis and Design of Inductive Power Transfer Systems for Automotive Battery Charging Applications
Authors: Wahab Ali Shah, Junjia He
Abstract:
Transferring electrical power without any wiring has been a dream since the late 19th century. There were some early advances in this area, notably in microwave systems. However, this subject has recently become very attractive due to the emergence of practical systems. There are low power applications, such as charging the batteries of contactless toothbrushes or implanted devices, and higher power applications, such as charging the batteries of electric automobiles or buses. In the first group of applications, the operating frequencies are in the microwave range, while the frequency is lower in the high power applications. In the latter, the concept is also called inductive power transfer. The aim of the paper is to give an overview of inductive power transfer for electric vehicles, with a special concentration on coil design and power converter simulation for static charging. Coil design is one of the most critical tasks and is very important for efficient and safe power transfer. Power converters are used on both sides of the system. The converter on the primary side is used to generate a high frequency voltage to excite the primary coil. The purpose of the converter on the secondary side is to rectify the voltage transferred from the primary in order to charge the battery. In this paper, an inductive power transfer system is studied. Inductive power transfer is a promising technology with several possible applications. The operating principles of these systems are explained and the components of the system are described. Finally, a single-phase 2 kW system was simulated and the results are presented. The work presented in this paper is just an introduction to the concept. A reformed compensation network based on the traditional inductor-capacitor-inductor (LCL) topology is proposed to achieve a robust reaction to the large coupling variations that are common in dynamic wireless charging applications. In future work, this type of compensation should be studied, and different compensation topologies should be compared at the same power level.
Keywords: Coil design, contactless charging, electrical automobiles, inductive power transfer, operating frequency.
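One attractive property of LCL compensation is easy to show numerically: at the resonant operating frequency, with the series inductor matched to the primary coil inductance, the coil current becomes independent of the load coupling (a current-source behavior). The component values below are illustrative assumptions, not the paper's design:

```python
import math

f0 = 85e3                  # operating frequency, Hz (typical for EV charging)
L1 = 120e-6                # series inductor, H
Lp = 120e-6                # primary coil self-inductance, H (matched to L1)
V_in = 300.0               # inverter output voltage amplitude, V

omega0 = 2 * math.pi * f0
C = 1 / (omega0 ** 2 * L1)           # resonance: omega0 = 1 / sqrt(L1 * C)
print(f"compensation capacitor: {C * 1e9:.1f} nF")

# At resonance the LCL network drives the coil with I = V_in / (omega0 * L1),
# independent of the coupling to the secondary.
print(f"primary coil current: {V_in / (omega0 * L1):.1f} A")
```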
52 Integrating Dependent Material Planning Cycle into Building Information Management: A Building Information Management-Based Material Management Automation Framework
Authors: Faris Elghaish, Sepehr Abrishami, Mark Gaterell, Richard Wise
Abstract:
Collaboration and integration between all building information management (BIM) processes and tasks are necessary to ensure that all project objectives can be delivered. A literature review has been used to explore state-of-the-art BIM technologies for managing construction materials, as well as the challenges which the construction process has faced when using traditional methods. Thus, this paper aims to articulate a framework that integrates traditional material planning methods, such as ABC analysis (the Pareto principle), to analyse and categorise the project materials, as well as independent material planning methods, such as Economic Order Quantity (EOQ) and Fixed Order Point (FOP), into the BIM 4D and 5D capabilities, in order to articulate a dependent material planning cycle within BIM which relies on the constructability method. Moreover, we build a model that connects the material planning outputs with the BIM 4D and 5D data, to ensure that all project information is accurately presented through integrated and complementary BIM reporting formats. Furthermore, this paper presents a method to integrate the risk management output with the material management process, to ensure that all critical materials are monitored and managed throughout all project stages. The paper includes browsers which are proposed to be embedded in any 4D BIM platform in order to predict the EOQ as well as the FOP and alert the user during the construction stage. This enables the planner to check the status of the materials on site, as well as to receive an alert when a new order should be requested. Therefore, this will make it possible to manage all the project information in a single context and to avoid missing any information at the early design stage. Subsequently, the planner will be capable of building a more reliable 4D schedule by allocating the categorised materials with the required EOQ, and of checking the optimum locations for inventory and the temporary construction facilities.
Keywords: Building information management, BIM, economic order quantity, fixed order point, BIM 4D, BIM 5D.
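The two independent planning methods named above have standard closed forms, EOQ = sqrt(2DS/H) and a reorder point equal to lead-time demand plus safety stock. A quick sketch with illustrative figures (demand, costs and lead time are assumptions):

```python
import math

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Economic Order Quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def fop(daily_demand: float, lead_time_days: float, safety_stock: float = 0.0) -> float:
    """Fixed Order Point: reorder when stock falls to lead-time demand + safety."""
    return daily_demand * lead_time_days + safety_stock

# Illustrative figures for one categorised material (say, a class-A item).
D, S, H = 12_000, 150.0, 2.5   # units/year, cost per order, holding cost/unit/year
print(f"EOQ: order {eoq(D, S, H):.0f} units at a time")
print(f"FOP: reorder at {fop(D / 365, lead_time_days=14, safety_stock=200):.0f} units")
```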
51 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine
Authors: Hira Lal Gope, Hidekazu Fukai
Abstract:
The aim of this study is to develop a system which can identify and sort peaberries automatically, at low cost, for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. A peaberry is not a defective bean, yet it is not a normal bean either: it grows as a single, relatively round seed in a coffee cherry, instead of the usual flat-sided pair of beans. It has a different value and flavor. To make the taste of the coffee better, it is necessary to separate the peaberries from the normal beans before roasting the green coffee beans. Otherwise, the tastes of the beans will be mixed, and the result will be poor. During roasting, all the beans should be uniform in shape, size, and weight; otherwise, the larger beans will take more time to roast through. A peaberry has a different size and a different shape even though it has the same weight as a normal bean, and it roasts more slowly than normal beans; therefore, neither shape nor weight alone provides a good option for selecting the peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult, even for trained specialists, because the shape and color of a peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal beans and peaberries as part of the sorting system. As the first step, we applied deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) as machine learning techniques to discriminate between peaberries and normal beans. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of peaberries. The artificial neural network trained with a high-performance CPU and GPU in this work will simply be installed on an inexpensive, computationally modest Raspberry Pi system. We assume that this system will be used in underdeveloped countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.
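As a rough sketch of the SVM side of the comparison (the CNN side needs a deep-learning stack), here is an RBF-kernel classifier on flattened 16x16 "bean images". The data generator is a toy assumption standing in for real photographs, tuned only so the example is learnable:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(9)

def fake_beans(n: int, brightness: float) -> np.ndarray:
    """Flattened 16x16 grayscale 'images' around a mean brightness."""
    return rng.normal(brightness, 0.1, size=(n, 16 * 16))

X = np.vstack([fake_beans(300, 0.45), fake_beans(300, 0.55)])
y = np.repeat([0, 1], 300)            # 0 = normal bean, 1 = peaberry

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```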
50 Named Entity Recognition using Support Vector Machine: A Language Independent Approach
Authors: Asif Ekbal, Sivaji Bandyopadhyay
Abstract:
Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks, such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of a NER system for Bengali and Hindi using a Support Vector Machine (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words, along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi, tagged with the twelve different NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm to generate lexical context patterns from a part of the unlabeled Bengali news corpus. The lexical patterns have been used as features of the SVM in order to improve the system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation has demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. The results show an improvement in the f-score of 5.13% with the use of context patterns. A statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of the existing HMM-based system for both languages.
Keywords: Named Entity (NE), Named Entity Recognition (NER), Support Vector Machine (SVM), Bengali, Hindi.
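The feature design is the heart of such a system: each token is described by its context window plus simple orthographic cues and fed to a linear SVM. A much-reduced sketch follows; the two-sentence corpus and the feature set are toy assumptions, the real system used far richer features over hundreds of thousands of tokens:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny stand-in corpus of (token, NE tag) pairs.
sents = [[("Rabindranath", "PER"), ("visited", "O"), ("Kolkata", "LOC")],
         [("Infosys", "ORG"), ("opened", "O"), ("in", "O"), ("Delhi", "LOC")]]

def features(sent, i):
    """Contextual features: the word, its neighbours, one orthographic cue."""
    w = sent[i][0]
    return {"w": w.lower(), "is_title": w.istitle(),
            "prev": sent[i - 1][0].lower() if i > 0 else "<s>",
            "next": sent[i + 1][0].lower() if i < len(sent) - 1 else "</s>"}

X = [features(s, i) for s in sents for i in range(len(s))]
y = [tag for s in sents for _, tag in s]

clf = make_pipeline(DictVectorizer(), LinearSVC()).fit(X, y)
print(clf.predict([{"w": "mumbai", "is_title": True,
                    "prev": "in", "next": "</s>"}]))
```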
49 A Game-Based Product Modelling Environment for Non-Engineer
Authors: Guolong Zhong, Venkatesh Chennam Vijay, Ilias Oraifige
Abstract:
In the last 20 years, Knowledge Based Engineering (KBE) has shown its advantages in product development in different engineering areas, such as automation, mechanical, civil and aerospace engineering, in terms of digital design automation and cost reduction, achieved by automating repetitive design tasks through capturing, integrating, utilising and reusing the existing knowledge required in various aspects of product design. However, in the primary design stages, the descriptive information of a product is discrete and unorganized, while knowledge exists in various forms rather than as pure data. Thus, it is crucial to have an integrated product model which can represent the entire product information and its associated knowledge at the beginning of the product design. One of the shortcomings of existing product models is the lack of the required knowledge representation in various aspects of product design and of its mapping to an interoperable schema. To overcome the limitations of existing product models and methodologies, two key factors are considered. First, the product model must have well-defined classes that can represent the entire product information and its associated knowledge. Second, the product model needs to be represented in an interoperable schema to ensure a steady data exchange between different product modelling platforms and CAD software. This paper introduces a method that provides a general product model as a generative representation of a product, consisting of geometry information and non-geometry information, through a product modelling framework. The proposed method for capturing knowledge from designers through a knowledge file provides a simple and efficient way of collecting and transferring knowledge. Further, the knowledge schema provides a clear view of, and format for, the data that need to be gathered in order to achieve a unified knowledge exchange between different platforms. This study used a game-based platform to make the product modelling environment accessible to non-engineers. The paper then goes on to test a use case based on the proposed game-based product modelling environment, to validate its effectiveness among non-engineers.
Keywords: Game-based learning, knowledge based engineering, product modelling, design automation.
Exploring Management of the Fuzzy Front End of Innovation in a Product Driven Startup Company
Authors: Dmitry K. Shaytan, Georgy D. Laptev
Abstract:
In our research, we aimed to test a managerial approach to the fuzzy front end (FFE) of innovation by creating a controlled experiment/business case in breakthrough innovation development. The experiment was in the sports industry and covered all aspects of the customer discovery stage, from ideation to prototyping, followed by a patent application. In the paper, we describe and analyze milestones, tasks, management challenges, and decisions made to create the breakthrough innovation, and evaluate the overall managerial efficiency at the FFE stage considered. We define the managerial outcome of the FFE stage as a valid product concept in hand. We introduce the hypothetical construct "Q-factor", which helps us distinguish the quality of FFE outcomes in the experiment. The experiment simulated the FFE of innovation for the entrepreneur and placed on his shoulders the responsibility for delivering a valid product concept. While developing the managerial approach to reach this outcome, we decided to view the product concept from the standpoint of cognitive psychology and cognitive science. This view helped us develop the profile of a person whose projection (mental representation) of a new product could optimize FFE activities for a manager or entrepreneur. In the experiment, this profile was tested to develop a breakthrough innovation for swimmers. Following the managerial approach, a product concept was created to help swimmers feel/sense the water. A working prototype was developed to estimate the product concept's validity and value-added effect for customers. Feedback from coaches and swimmers indicated a strong positive effect with high value for customers and, for the experiment, a valid product concept developed by the proposed managerial approach for the FFE. In the conclusions, we suggest a managerial approach derived from the experiment.
Keywords: Concept development, concept testing, customer discovery, entrepreneurship, entrepreneurial management, idea generation, idea screening, startup management.
Implementing an Intuitive Reasoner with a Large Weather Database
Authors: Yung-Chien Sun, O. Grant Clark
Abstract:
In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation included two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source. Twelve weather variables from those data were chosen as the "target variables" whose values were predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule induction module. As a result, the rules induced were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees and multi-polynomial regression, for the discrete and the continuous target variables, respectively. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The intuitive reasoner implemented two types of reasoning, fast and broad, where, by analogy to human thought, the former corresponds to quick decision making and the latter to deeper contemplation. For reference, a weather data analysis approach that had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables. For the discrete target variables, the intuitive reasoner predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrated potential advantages in dealing with sparse data sets as compared with conventional methods.
Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.
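A minimal sketch of the rule-induction side follows: a decision tree is trained on a roughly 10% subset of the data and each prediction carries a belief score. The toy data, features, and belief formula are illustrative assumptions, not the paper's exact "Strength of Belief" definition.

# Sketch: induce rules from a small data subset, attach a belief score.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical records: [temperature_C, humidity_pct] -> rain? (0/1)
X_full = rng.uniform([0, 20], [35, 100], size=(1000, 2))
y_full = (X_full[:, 1] > 70).astype(int)   # toy ground truth

# Simulate the "complex" situation: only ~10% of the data is visible.
idx = rng.choice(len(X_full), size=100, replace=False)
tree = DecisionTreeClassifier(max_depth=3).fit(X_full[idx], y_full[idx])

def predict_with_belief(x):
    """Class plus a belief score: here, the fraction of training
    samples in the reached leaf that share the predicted class
    (classes are 0/1, so the argmax index equals the label)."""
    leaf_probs = tree.predict_proba([x])[0]
    cls = int(np.argmax(leaf_probs))
    return cls, float(leaf_probs[cls])      # belief in [0, 1]

print(predict_with_belief([25.0, 80.0]))    # e.g. (1, 0.97)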
Performance Analysis of Organic Rankine Cycle Technology to Exploit Low-Grade Waste Heat to Power Generation in Indian Industry
Authors: Bipul Krishna Saha, Basab Chakraborty, Ashish Alex Sam, Parthasarathi Ghosh
Abstract:
The demand for energy is increasing steadily with time. Since conventional energy resources are gradually being depleted, significant interest is being placed on the search for alternative energy resources and on minimizing the wastage of energy in various fields. In this context, low-grade waste heat from several industrial sources can be reused to generate electricity. The present work aims to further the adoption of Organic Rankine Cycle (ORC) technology in the Indian industrial sector. This paper focuses on extending a previously reported idea to the next level through a comparative review of three different working fluids, using practical data from an Indian industrial plant. For a comprehensive study in the simulation platform Aspen HYSYS® v8.6, waste heat data were collected from an operating coke oven gas plant in India. A parametric analysis of non-regenerative and regenerative ORC is executed using the working fluids R-123, R-11 and R-21 for a subcritical ORC system. The primary goal is to determine the optimal working fluid considering various system parameters such as turbine work output, system efficiency, irreversibility rate and second-law efficiency under multiple applied heat source temperatures (160 °C–180 °C). Selection of the turbo-expander is one of the most crucial tasks for low-temperature applications in ORC systems, and the present work attempts to make a suitable recommendation for the appropriate configuration of the turbine. In a nutshell, this study justifies the feasibility of integrating ORC technology in the Indian context and identifies appropriate parameters for all components of an ORC system for building an ORC prototype.
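For reference, the first- and second-law metrics such a parametric study typically evaluates can be written as follows; the notation here is assumed and the paper's exact definitions may differ.

% Standard ORC performance metrics (notation assumed):
% W_t turbine work, W_p pump work, \dot{Q}_{in} heat input,
% T_0 ambient (dead-state) temperature, T_{source} heat source temperature.
W_{t} = \dot{m}\,(h_1 - h_2), \qquad W_{net} = W_{t} - W_{p}
\eta_I = \frac{W_{net}}{\dot{Q}_{in}}, \qquad
\eta_{II} = \frac{\eta_I}{1 - T_0 / T_{source}}, \qquad
\dot{I} = T_0\,\dot{S}_{gen}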
Keywords: Organic Rankine cycle, regenerative organic Rankine cycle, waste heat recovery, Indian industry.
Spatial Clustering Model of Vessel Trajectory to Extract Sailing Routes Based on AIS Data
Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin
Abstract:
The automatic extraction of shipping routes is advantageous for intelligent traffic management systems to identify events and support decision-making in maritime surveillance. At present, there is a high demand for the extraction of maritime traffic networks that accurately resemble the real traffic of vessels, which is valuable for further analytical processing tasks on vessel trajectories (e.g., naval routing and voyage planning, anomaly detection, destination prediction, time of arrival estimation). By processing huge amounts of vessel trajectory data, it is possible to learn these shipping routes from the navigation history of similar ships that travelled in a given area. In this paper, we propose a spatial clustering model of vessel trajectories (SPTCLUST) to extract spatial representations of sailing routes from historical Automatic Identification System (AIS) data. The model consists of three main parts: data preprocessing, path finding, and route extraction, the last comprising clustering and representative trajectory extraction. The proposed clustering method provides techniques to overcome the problems of: (i) optimal input parameter selection; (ii) the high complexity of processing a huge volume of multidimensional data; and (iii) the spatial representation of complete representative trajectory detection in the context of trajectory clustering algorithms. The experimental evaluation showed the effectiveness of the proposed model using a real-world AIS dataset from the Port of Halifax. The results contribute to a further understanding of shipping route patterns and could aid surveillance authorities in stable and sustainable vessel traffic management.
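As a schematic of the clustering stage only, the sketch below groups synthetic AIS fixes with DBSCAN and takes a per-cluster centroid as a crude route representative. This illustrates the general idea, not the paper's SPTCLUST algorithm or its parameter-selection techniques.

# Schematic: cluster AIS positions, summarize each cluster.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical AIS fixes: (latitude, longitude) along two lanes
# roughly located near the Port of Halifax.
rng = np.random.default_rng(1)
lane_a = np.column_stack([np.linspace(44.6, 44.8, 50),
                          np.linspace(-63.6, -63.2, 50)])
lane_b = np.column_stack([np.linspace(44.6, 44.5, 50),
                          np.linspace(-63.6, -63.9, 50)])
points = np.vstack([lane_a, lane_b]) + rng.normal(0, 0.005, (100, 2))

labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(points)

# Representative point per cluster (noise points get label -1).
for k in sorted(set(labels) - {-1}):
    members = points[labels == k]
    print(f"cluster {k}: {len(members)} fixes, "
          f"centroid ~ {members.mean(axis=0).round(3)}")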
Keywords: Vessel trajectory clustering, trajectory mining, spatial clustering, marine intelligent navigation, maritime traffic network extraction, sailing routes extraction.
A Paradigm Shift towards Personalized and Scalable Product Development and Lifecycle Management Systems in the Aerospace Industry
Authors: David E. Culler, Noah D. Anderson
Abstract:
Integrated systems for product design, manufacturing, and lifecycle management are difficult to implement and customize. Commercial software vendors, including CAD/CAM and third-party PDM/PLM developers, create user interfaces and functionality that allow their products to be applied across many industries. The result is that systems become overloaded with functionality, difficult to navigate, and use terminology that is unfamiliar to engineers and production personnel. For example, manufacturers of automotive, aeronautical, electronics, and household products use similar but distinct methods and processes. Furthermore, each company tends to have its own preferred tools and programs for controlling work and information flow and for connecting design, planning, and manufacturing processes to business applications. This paper presents a methodology and a case study that address these issues and suggests that in the future more companies will develop personalized applications that fit the natural way their business operates. A functioning system has been implemented at a highly competitive U.S. aerospace tooling and component supplier that works with many prominent aircraft manufacturers around the world, including The Boeing Company, Airbus, Embraer, and Bombardier Aerospace. During the last three years, the program has produced significant benefits such as the automatic creation and management of component and assembly designs (parametric models and drawings), the extensive use of lightweight 3D data, and changes to the way projects are executed from beginning to end. CATIA (CAD/CAE/CAM) and a variety of programs developed in C#, VB.Net, HTML, and SQL make up the current system. The web-based platform facilitates collaborative work across multiple sites around the world and improves communications with customers and suppliers. This work demonstrates that the creative use of Application Programming Interface (API) utilities, libraries, and methods is key to automating many time-consuming tasks and linking applications together.
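To give a flavour of this kind of API-driven automation, here is a minimal Python/COM sketch that creates a parametric part in CATIA V5. It requires Windows, a local CATIA install, and the pywin32 package; the object names follow CATIA's public automation interface, but this is an illustrative sketch, not the authors' C#/VB.Net production code.

# Minimal sketch of driving CATIA through its automation API.
import win32com.client

catia = win32com.client.Dispatch("CATIA.Application")
catia.Visible = True

part_doc = catia.Documents.Add("Part")   # new CATPart document
part = part_doc.Part

# Expose a driving dimension as a named parameter so downstream
# automation (drawings, BOM, PLM sync) can update it programmatically.
params = part.Parameters
length = params.CreateDimension("PlateLength", "LENGTH", 120.0)
length.Value = 150.0                     # change propagates on update
part.Update()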
Keywords: CAD/CAM, CAPP, PDM, PLM, Scalable Systems.
Some Studies on Temperature Distribution Modeling of Laser Butt Welding of AISI 304 Stainless Steel Sheets
Authors: N. Siva Shanmugam, G. Buvanashekaran, K. Sankaranarayanasamy
Abstract:
In this research work, investigations are carried out on a Continuous Wave (CW) Nd:YAG laser welding system after preliminary experimentation to understand the influencing parameters associated with laser welding of AISI 304. The experimental procedure involves a series of laser welding trials on AISI 304 stainless steel sheets with various combinations of process parameters such as beam power, welding speed, and beam incident angle. An industrial 2 kW CW Nd:YAG laser system, available at the Welding Research Institute (WRI), BHEL Tiruchirappalli, is used for conducting the welding trials. After proper tuning of the laser beam, laser welding experiments are conducted on AISI 304 grade sheets to evaluate the influence of the various input parameters on weld bead geometry, i.e., bead width (BW) and depth of penetration (DOP). From the laser welding results, it is noticed that beam power and welding speed are the two most influential parameters on the depth and width of the bead. A three-dimensional finite element simulation of the high-density heat source has been performed for the laser welding technique using the finite element code ANSYS to predict the temperature profile of the laser beam heat source on AISI 304 stainless steel sheets. Temperature-dependent material properties for AISI 304 stainless steel are taken into account in the simulation, as they have a great influence on the computed temperature profiles. The latent heat of fusion is accounted for through the thermal enthalpy of the material to handle the phase transition. A Gaussian distribution of heat flux with a conical-shaped moving heat source is used for analyzing the temperature profiles. Experimental and simulated weld bead profiles are analyzed for different beam powers, welding speeds, and beam incident angles. The results obtained from the simulation are compared with the experimental data, and it is observed that the numerical (FEM) results are in good agreement with the experimental results, with an overall percentage of error estimated to be within ±6%.
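For reference, a commonly used form of the Gaussian surface heat flux for a moving laser source is given below; the notation and the effective-radius factor of 3 are assumptions following common practice, and the paper's conical volumetric variant would add a decay of the flux with depth.

% Gaussian moving heat source (notation assumed): P laser power,
% \eta absorptivity, r_0 effective beam radius, v travel speed along x.
q(x, y, t) = \frac{3\,\eta P}{\pi r_0^{2}}
  \exp\!\left(-\,\frac{3\left[(x - v t)^{2} + y^{2}\right]}{r_0^{2}}\right)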
Keywords: Laser welding, Butt weld, 304 SS, FEM.