Search results for: supplier segmentation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 575


155 Evaluation and Selection of Contractors in Construction Projects with a Supply Chain Management View and Utilization of PROMETHEE

Authors: Sara Najiazarpour, Mahsa Najiazarpour

Abstract:

There are many problems in contracting projects and their performance; at each project stage and for different reasons, these problems affect cost, time and overall project quality. Hence, in order to increase efficiency and performance at all levels of the chain, a supply chain management approach calls for coordination from the beginning of a project (contractor selection) to its end (project handover). Contractor selection is the foremost part of construction projects: it is a multi-criteria decision-making problem in which the best contractor is determined from expert judgment, different variables and their priorities. In this paper, numerous criteria for selecting the best contractor were first collected from experienced experts, and the 16 criteria with the highest frequency were included in a questionnaire distributed among experts; Cronbach's alpha coefficient was obtained as 72%. Then, based on Borda's function, 12 important criteria were selected and categorized into four main criteria with related sub-criteria: environmental factors and physical equipment (procurement and materials (supplier), the company's machines, the contractor's proposed cost estimate); financial capacity (bank turnover and company assets, the income of the tax declaration in the last year, ability to compensate for losses or delays); past performance, records and technical expertise (experts and key personnel, past technical background and experience, employer satisfaction with previous contracts, the number of similar projects completed); and standards (the rank and field of expertise for which the company is qualified and its validity, availability and number of permitted projects completed). Then, with the PROMETHEE method, the criteria were normalized and processed, and finally the best alternative was selected. In this research, the qualitative criteria of each company were converted into quantitative criteria. Finally, information on several companies was evaluated and the best contractor was selected based on all criteria and their priorities.
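
As a hedged illustration of the ranking step, the sketch below implements a minimal PROMETHEE II with the "usual" preference function; the contractor scores, weights and criterion directions are hypothetical placeholders, not data from the study.

```python
import numpy as np

def promethee_ii(scores, weights, maximize):
    """Minimal PROMETHEE II: rank alternatives by net outranking flow.

    scores   : (n_alternatives, n_criteria) matrix of criterion values
    weights  : criterion weights summing to 1
    maximize : True where larger values are better (False for e.g. cost)
    """
    n, m = scores.shape
    pref = np.zeros((n, n))
    for k in range(m):
        diff = scores[:, k][:, None] - scores[:, k][None, :]
        if not maximize[k]:
            diff = -diff
        pref += weights[k] * (diff > 0)          # "usual" preference: 1 if a beats b on criterion k
    phi_plus = pref.sum(axis=1) / (n - 1)        # positive (leaving) flow
    phi_minus = pref.sum(axis=0) / (n - 1)       # negative (entering) flow
    return phi_plus - phi_minus                  # net flow: higher = better ranked

# Hypothetical example: 3 contractors scored on 4 normalized criteria
scores = np.array([[0.8, 0.6, 0.7, 0.5],
                   [0.6, 0.9, 0.5, 0.7],
                   [0.7, 0.7, 0.9, 0.6]])
weights = np.array([0.4, 0.3, 0.2, 0.1])
net_flow = promethee_ii(scores, weights, maximize=[True, True, True, True])
print("Best contractor:", int(np.argmax(net_flow)))
```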

Keywords: contractor evaluation and selection, project development, supply chain management, PROMETHEE method

Procedia PDF Downloads 42
154 The Trajectory of the Ball in a Football Game

Authors: Mahdi Motahari, Mojtaba Farzaneh, Ebrahim Sepidbar

Abstract:

Tracking moving and flying targets is one of the most important issues in image processing, and estimating the trajectory of a desired object on short-term and long-term scales is even more important than tracking itself. In this paper, a new way of identifying a moving ball and estimating its future trajectory on a long-term scale is presented, using the synthesis and interaction of image processing algorithms, including noise removal and image segmentation, a Kalman filter algorithm for estimating the trajectory of the ball in a football game on a short-term scale, and an intelligent adaptive neuro-fuzzy algorithm based on the time series of traversed distance. Relying on these algorithms and a video database, the proposed system attains more than 96% identification accuracy. Although the present method has high precision, it is time consuming; by comparing this method with others, we demonstrate its accuracy and efficiency.
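
As a hedged illustration of the short-term estimation stage only, the sketch below runs a constant-velocity Kalman filter over noisy 2D ball detections; the state model, noise covariances and sample measurements are assumptions, not values from the paper.

```python
import numpy as np

dt = 1.0                                   # assumed frame interval
F = np.array([[1, 0, dt, 0],               # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],                # we only observe (x, y)
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-2                       # assumed process noise
R = np.eye(2) * 4.0                        # assumed measurement noise
x = np.zeros(4)                            # state: [x, y, vx, vy]
P = np.eye(4) * 100.0

def kalman_step(z):
    """One predict/update cycle given a noisy ball detection z = (x, y)."""
    global x, P
    x_pred = F @ x                         # predict
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                     # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x_pred + K @ y                     # update
    P = (np.eye(4) - K @ H) @ P_pred
    return x[:2]                           # filtered ball position

for z in [np.array([10.0, 5.0]), np.array([12.1, 5.9]), np.array([14.2, 7.1])]:
    print(kalman_step(z))
```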

Keywords: tracking, signal processing, moving and flying targets, artificial intelligence systems, trajectory estimation, Kalman filter

Procedia PDF Downloads 437
153 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists in intensifying the use of an asset, while the latter is based on the use of waste, waste ("fatal") energy and utilities as alternatives to standard inputs. Both modes in fact rely on a shift from business-as-usual functioning towards an alternative production system structure, so that from a business point of view the distinction is not clear. In order to investigate how these cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure with regard to their resources and requirements. For infrastructure sharing, following the economic engineering literature, the cost function of capacity induces economies of scale, so that demand pooling reduces overall expenses. Grassroots investment sizing decisions and ex-post pricing depend strongly on the design optimization phase for capacity sizing, whereas ex-post operational cost-sharing rules that minimize budgets are less dependent upon production rates; value is then mainly design driven. For resource substitution, the value of synergies stems from availability and is at risk with respect to both supplier and user load profiles and the market price of the standard input. The reduction in baseline input purchasing cost is thus driven more by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow the production of non-rival goods. Transaction costs appear higher in resource substitution synergies due to the high level of customization, which induces asset specificity and non-homogeneity, following transaction cost economics arguments.

Keywords: business model, capacity, sourcing, synergies

Procedia PDF Downloads 150
152 Video Text Information Detection and Localization in Lecture Videos Using Moments

Authors: Belkacem Soundes, Guezouli Larbi

Abstract:

This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified as text or background based on visual feature analysis. Lecture videos, however, show significant degradation related mainly to acquisition conditions, camera motion and environmental changes, resulting in low-quality footage that affects the efficiency of feature extraction and description; traditional text detection methods therefore cannot be applied directly to lecture videos, and robust feature extraction methods dedicated to this specific video genre are required for accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation, feature extraction, and non-text filtering. Moment functions are used for robust and effective feature extraction. Two distinct types of moments are used, orthogonal and non-orthogonal: for the orthogonal case, both Zernike and pseudo-Zernike moments are used, whereas Hu moments are used for the non-orthogonal case. Their expressiveness and description efficiency are reported and discussed. The proposed approach shows that, in general, orthogonal moments achieve higher accuracy than non-orthogonal ones, and pseudo-Zernike moments are more effective than Zernike moments with better computation time.
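
As a hedged illustration of the non-orthogonal descriptor mentioned above, the snippet below computes the seven Hu moments of a candidate slide region with OpenCV; the Otsu binarization step and the file name are assumptions, and the orthogonal Zernike/pseudo-Zernike moments would require a separate implementation not shown here.

```python
import cv2
import numpy as np

def hu_descriptor(region_gray):
    """Seven Hu invariant moments of a grayscale region, log-scaled for stability."""
    _, binary = cv2.threshold(region_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # assumed binarization
    moments = cv2.moments(binary)
    hu = cv2.HuMoments(moments).flatten()
    # Log transform keeps the widely different magnitudes comparable
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

region = cv2.imread("slide_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame region
if region is not None:
    print(hu_descriptor(region))
```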

Keywords: text detection, text localization, lecture videos, pseudo zernike moments

Procedia PDF Downloads 125
151 Decision Making Approach through Generalized Fuzzy Entropy Measure

Authors: H. D. Arora, Anjali Dhiman

Abstract:

Uncertainty is found everywhere, and understanding it is central to decision making. Uncertainty emerges when one has less information than the total information required to describe a system and its environment. Uncertainty and information are so closely associated that the information provided by an experiment, for example, is equal to the amount of uncertainty removed. Uncertainty manifests itself in several forms, and various kinds of uncertainty may arise from random fluctuations, incomplete information, imprecise perception, vagueness, etc. For instance, one encounters uncertainty due to vagueness in communication through natural language; uncertainty in this sense is represented by fuzziness resulting from the imprecision of the meaning of a concept expressed by linguistic terms. The fuzzy set concept provides an appropriate mathematical framework for dealing with such vagueness. Both information theory, proposed by Shannon (1948), and fuzzy set theory, given by Zadeh (1965), play an important role in human intelligence and in various practical problems such as image segmentation and medical diagnosis. Numerous approaches and theories dealing with inaccuracy and uncertainty have been proposed by different researchers. In the present communication, we generalize the fuzzy entropy proposed by De Luca and Termini (1972), which corresponds to Shannon entropy (1948). Further, some basic properties of the proposed measure are examined, and the measure is applied to a real-life decision-making problem.
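
For reference, the classical De Luca and Termini (1972) fuzzy entropy that the paper generalizes can be sketched directly; the membership values below are an arbitrary example, and the generalized measure itself is not reproduced here.

```python
import numpy as np

def de_luca_termini_entropy(mu):
    """H(A) = -K * sum[ mu*ln(mu) + (1-mu)*ln(1-mu) ], normalized so H lies in [0, 1]."""
    mu = np.clip(np.asarray(mu, float), 1e-12, 1 - 1e-12)   # avoid log(0)
    k = 1.0 / (len(mu) * np.log(2))                          # normalizing constant
    return -k * np.sum(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))

print(de_luca_termini_entropy([0.5, 0.5]))   # maximal fuzziness -> 1.0
print(de_luca_termini_entropy([0.0, 1.0]))   # crisp set -> ~0.0
```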

Keywords: entropy, fuzzy sets, fuzzy entropy, generalized fuzzy entropy, decision making

Procedia PDF Downloads 417
150 Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models

Authors: Aditya Karade, Sharada Falane, Dhananjay Deshmukh, Vijaykumar Mantri

Abstract:

Brain tumours pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging has shown promise for accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models (ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet) on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorized into different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of the accuracy and effectiveness of these models in classifying brain tumour images. Data preprocessing, augmentation, and fine-tuning techniques are employed to optimize model performance. Among the evaluated deep learning models for brain tumour detection, ResNet50 emerges as the top performer with an accuracy of 98.86%, followed closely by Xception with a strong accuracy of 97.33%; these models showcase robust capabilities in accurately classifying brain tumour images. At the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%.
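
A minimal transfer-learning sketch in Keras, assuming four classes and 224x224 inputs; the classification head, optimizer, epochs and stand-in data are placeholders rather than the exact configuration used in the study.

```python
import tensorflow as tf

NUM_CLASSES = 4  # meningioma, pituitary, glioma, no tumour

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                        # freeze the ImageNet features first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),             # assumed regularization
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random tensors stand in for preprocessed MRI slices and their labels
x = tf.random.uniform((8, 224, 224, 3))
y = tf.random.uniform((8,), maxval=NUM_CLASSES, dtype=tf.int32)
model.fit(x, y, epochs=1)                     # in practice: the Figshare MRI dataset
```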

Keywords: brain tumour, MRI image, detecting and classifying tumour, pre-trained models, transfer learning, image segmentation, data augmentation

Procedia PDF Downloads 41
149 Corporate Social Responsibility for Multinational Enterprises to Gain Incomparable Advantage in the Long Run without Competition

Authors: Fatima Homor

Abstract:

A new era in business has started: according to the findings of this research paper, corporate social responsibility leads organizations to a phase of incomparable advantage, where competition is secondary and financial growth is a result. Those who join later lose this active advantage and create a passive disadvantage for their organizations. The main purpose of this presentation is to state the obvious and shed light on the advantages that multinational enterprises gain from doing good while doing well: extremely low staff fluctuation (preventing one of the highest costs), a significantly lower marketing budget, an enhanced reputation leading to customer and supplier loyalty, and employee commitment that results in higher motivation and better quality at each stage. Corporate Social Responsibility brings a Unique Selling Proposition incomparable to others. The paper is based on a large research project conducted for the University of Liverpool Master of Business Administration program under the title 'Corporate Social Responsibility for Multinational Enterprises to Gain Incomparable Advantage'. The research is based on recent secondary data but, most importantly, on 25 interviews with chief executive officers and/or human resources or corporate communications directors at multinational enterprises. The direct gains of Corporate Social Responsibility are analyzed when it is embedded into the core of the business. It is evident that project-based Corporate Social Responsibility is effective neither from the point of view of the supported cause and non-governmental organizations nor from the point of view of the organization's long-term sustainability. Surveys were conducted, data compared and conclusions drawn. Corporate Social Responsibility must start inside the business in order to strengthen it: first, commit employees; it must come from the Chief Executive Officer; it must be related to the business profile; and it has to be long term. Committed employees will in turn commit customers. B-corps are coming (e.g. Unilever), and the phenomenon of social enterprises has become a leading one.

Keywords: B-corps, embedded into core business, first inside, unique advantage

Procedia PDF Downloads 178
148 An Empirical Study of the Moderating Effects of Commitment, Trust, and Relationship Value on the Relationship between Goods- and Services-Related Business-to-Business Brand Images and Customer Loyalty

Authors: Jorge Luis Morales Romero, Enrique Murillo Othón

Abstract:

Business-to-business (B2B) relationships generally go beyond a purely profit-based exchange, with firms seeking to maintain a relationship for many years because a breakup or switching to a new supplier can be very costly. Identifying the factors that determine a successful long-term relationship is therefore of great interest to companies; a firm's reputation and the brand image that customers hold of it are among the main factors that can sustain such a relationship, through the positive effect they exert on client loyalty. Additionally, the perception a customer has of a brand differs depending on whether it relates to goods or to services, as customers form their own brand image from the past experiences they have had. A positive relationship is thus established between goods-related brand image, service-related brand image, and customer loyalty. The present investigation examines the boundary conditions of this relationship by testing the moderating effects of trust, commitment, and relationship value in a B2B environment. Each variable was tested independently as a moderator of the service-related brand image/loyalty and of the goods-related brand image/loyalty relations, as they are assumed to be separate variables. Survey data were collected through interviews with customers that have both a product-buying relationship and a service relationship with a global B2B brand of healthcare equipment operating in the Mexican healthcare market. Interviewed respondents were the user, the purchasing manager and/or the person responsible for equipment maintenance in the customer organization; hence, they were appropriate informants regarding the B2B relationship with this healthcare brand. The moderation models were estimated using the PROCESS macro for the Statistical Package for the Social Sciences (SPSS). Results show statistical evidence that both relationship value and trust are significant moderators of the service-related brand image/loyalty relation but not of the goods-related brand image/loyalty relation, whereas commitment is a significant moderator of the goods-related brand image/loyalty relation but not of the service-related brand image/loyalty relation.
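
The study estimates its moderation models with the PROCESS macro in SPSS; as a rough, non-equivalent sketch of the same idea, a simple moderation test can be run as an OLS regression with an interaction term, where the synthetic data and column names below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey (standardized scale scores); column names are hypothetical
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "service_brand_image": rng.normal(size=n),
    "trust": rng.normal(size=n),
})
df["loyalty"] = (0.5 * df.service_brand_image + 0.3 * df.trust
                 + 0.4 * df.service_brand_image * df.trust + rng.normal(size=n))

# Moderation of the service-brand-image -> loyalty link by trust:
# a significant interaction coefficient indicates a moderating effect.
model = smf.ols("loyalty ~ service_brand_image * trust", data=df).fit()
print(model.params)
print("Interaction p-value:", model.pvalues["service_brand_image:trust"])
```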

Keywords: commitment, trust, relationship value, loyalty, B2B, moderator

Procedia PDF Downloads 64
147 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications

Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert

Abstract:

Due to the high complexity of magnetization in electrical machines and the influence of manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses have remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on the material supplier's data from standard magnetic measurements. This type of data includes neither the additional loss caused by non-sinusoidal, multi-harmonic motor excitation nor the detrimental effects of residual stress remaining in the motor laminations after manufacturing processes such as punching, housing shrink fitting and winding. Moreover, in production, considerable attention is given to measuring the mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of assembled motors. Therefore, to enable a comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of various factors, such as elevated temperature, applied and residual stress, and arbitrary magnetization, on the magnetic properties of different grades of non-oriented steel. Measured power loss for the tested samples and stator cores varied significantly, by more than 100%, compared to standard measurement conditions. The quantitative effects of each applied measurement condition were analyzed. This research and the applied Brockhaus measurement methodologies emphasize the requirement for advanced characterization of magnetic materials used in electric drives and automotive applications.

Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores

Procedia PDF Downloads 123
146 Autonomous Vehicle Detection and Classification in High Resolution Satellite Imagery

Authors: Ali J. Ghandour, Houssam A. Krayem, Abedelkarim A. Jezzini

Abstract:

High-resolution satellite images and remote sensing can provide global information quickly compared to traditional methods of data collection. At such high resolution, a road is no longer a thin line, and objects such as cars and trees are easily identifiable. Automatic vehicle enumeration can be considered one of the most important applications in traffic management. In this paper, an automatic vehicle detection and classification approach for highway environments is proposed. The approach consists of three main stages: (i) first, a set of preprocessing operations is applied, including soil, vegetation and water suppression; (ii) then, road network detection and delineation is implemented using a built-up area index followed by several morphological operations (this step plays an important role in increasing the overall detection accuracy, since vehicle candidates are objects contained within the road network only); and (iii) multi-level Otsu segmentation is implemented in the last stage, resulting in vehicle detection and classification, where detected vehicles are classified into cars and trucks. Accuracy assessment analysis is conducted over different study areas to show the efficiency of the proposed method, especially in highway environments.
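
A hedged sketch of the final stage only: multi-level Otsu thresholding with scikit-image splits a road-masked grayscale band into three intensity classes. The synthetic band, the class count and the class interpretation are assumptions.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

# Synthetic stand-in for a road-masked grayscale band (bright blobs = vehicle candidates)
rng = np.random.default_rng(0)
road_region = rng.normal(0.2, 0.05, (256, 256))      # dark asphalt background
road_region[100:108, 50:62] += 0.4                    # car-sized bright patch
road_region[150:162, 120:150] += 0.7                  # truck-sized brighter patch
road_region = np.clip(road_region, 0, 1)

# Two thresholds -> three classes (background / car-like / truck-like intensities)
thresholds = threshold_multiotsu(road_region, classes=3)
labels = np.digitize(road_region, bins=thresholds)

print("Otsu thresholds:", thresholds)
print("Pixels per class:", np.bincount(labels.ravel(), minlength=3))
```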

Keywords: remote sensing, object identification, vehicle and road extraction, vehicle and road features-based classification

Procedia PDF Downloads 206
145 Exploring Well-Being: Lived Experiences and Assertions From a Marginalized Perspective

Authors: Ritwik Saha, Anindita Chaudhuri

Abstract:

The psychological dimension of contemporary work-based mobility, set in the context of an ever-changing socio-economic process, is raising interest in the consequential issues of quality of life and well-being among the migrant section of society. Negotiation with the fluidity of the job market and with changing psychosocial dimensions, within and between psychosocial relations, may disentangle both the resilience and the mechanisms of diligence that sustain migrant (marginal) life. Work-based mobility and its associated phenomena strongly affect migrants' quality of life, especially for the marginalized (socio-economically weak), as well as the family members who stay behind. The subjective experiences of the migrant journey and the reconstruction of psychosocial being, in terms of existence and well-being at the host place, are among the least addressed issues in the migration literature. This gap motivates the present study, which explores the lived experiences, resilience and sense-making of well-being of marginalized migrant people working in the unorganized sector. A qualitative research method was followed, with semi-structured interviews used for data collection from four selected migrant groups (Fuchkawala, Bhunjawala, Bhari (drinking water supplier), and construction worker) who migrated to Kolkata and its metropolitan area from different states of India. Five participants from each group (20 participants in total), aged between 20 and 45, were interviewed in person; participant observation notes were taken to capture their lived experiences, and audio recordings were transcribed and analyzed systematically following Charmaz's three-stage coding of grounded theory. Four themes emerged from the analysis: being truthful to daily industry, the strong desire to build children's futures, the mastering mechanism of dual existence, and the use of traditional social networks. Incorporating fate as a usual way of life and making sense of well-being through their own assertions emerged as further aspects of migrant life.

Keywords: lived experiences, marginal living, resilience, sense-making process, well-being

Procedia PDF Downloads 35
144 An Event-Related Potential Study of Individual Differences in Word Recognition: The Evidence from Morphological Knowledge of Sino-Korean Prefixes

Authors: Jinwon Kang, Seonghak Jo, Joohee Ahn, Junghye Choi, Sun-Young Lee

Abstract:

Morphological priming has demonstrated its importance by showing that segmentation into morphemes occurs when visual words are recognized within a very short time. Focusing on Sino-Korean prefixes, this study conducted a visual masked priming experiment with a 57 ms stimulus-onset asynchrony (SOA) to examine how individual differences in the amount of morphological knowledge affect morphological priming. The relationships between prime and target words were classified into morphological (e.g., 미개척 migaecheog [unexplored] – 미해결 mihaegyel [unresolved]), semantic (e.g., 친환경 chinhwangyeong [eco-friendly] – 무공해 mugonghae [no-pollution]), and orthographic (e.g., 미용실 miyongsil [beauty shop] – 미확보 mihwagbo [uncertainty]) conditions. We then compared the priming effects by configuring unrelated paired stimuli as the control group for each condition. In the behavioral data, we observed facilitatory priming in the group with high morphological knowledge only under the morphological condition, whereas the group with low morphological knowledge showed priming only under the orthographic condition. In the event-related potential (ERP) data, the group with high morphological knowledge showed the N250 effect only under the morphological condition. These findings imply that individual differences in morphological knowledge of Korean may have a significant influence on the segmental processing of Korean word recognition.

Keywords: ERP, individual differences, morphological priming, sino-Korean prefixes

Procedia PDF Downloads 185
143 Technology Transfer of Indigenous Technologies: Emerging Aid to Indian Health Sector

Authors: Tripta Dixit, Smita Sahu, William Selvamurthy, Sadhana Srivastava

Abstract:

India is battling the issues of accessibility, affordability and availability of quality health care for the masses. The Indian medical heritage, which dates back to 3000 BC, unveils a rich knowledge pool that has undergone perceptible change over the years, such as the eradication of many communicable diseases, increasing individual awareness of quality health, and an import-driven medical device market. Despite a slew of initiatives, the holistic slogan of 'health for all' remains elusive and a concern for the nation. The 21st century presents a myriad of challenges, such as cultural diversity, a large population, the demographic dividend and geographical segmentation, leading to varied needs of people according to their regional conditions of climate, disease prevalence, nutrition and sanitation. These challenges are, however, also opportunities for the development of indigenous, low-cost and accessible technologies to tackle them. This requires reinforcing the potential of indigenous technologies in coordination with the prevailing health issues in various regions of the country. This paper emphasizes a strategy for exploring indigenous technologies with entrusted up-scaling to meet the diverse needs of the people. The review proposes adopting technology transfer as a strategy to establish a vibrant ecosystem for identifying and up-scaling indigenous medical technologies, with diligent hand-holding for public health.

Keywords: health, indigenous, medical technology, technology transfer

Procedia PDF Downloads 229
142 Random Forest Classification for Population Segmentation

Authors: Regina Chua

Abstract:

To reduce the costs of re-fielding a large survey, a Random Forest classifier was applied to measure the accuracy of classifying individuals into their assigned segments with the fewest possible questions. Given a long survey, the task was to determine the ten or fewer most predictive questions that would accurately assign new individuals to custom segments. Furthermore, the solution needed to be quick in its classification and usable in non-Python environments. In this paper, a supervised Random Forest classifier was modeled on a dataset with 7,000 individuals, 60 questions, and 254 features. The Random Forest consisted of an ensemble of individual decision trees whose combined prediction of segment membership achieved robust precision and recall scores compared to a single tree. A random 70-30 stratified split was used for training the algorithm, and accuracy trade-offs at different depths for each segment were identified. Ultimately, the Random Forest classifier performed at 87% accuracy at a depth of 10, with 20 instead of 254 features and 10 instead of 60 questions. With acceptable accuracy achieved by prioritizing feature selection, new tools were developed for non-Python environments: a worksheet with a formulaic version of the algorithm and an embedded function to predict the segment of an individual in real time. Random Forest was determined to be an optimal classification model for its feature selection, performance, processing speed, and flexible application in other environments.
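
A minimal reconstruction of the described workflow with scikit-learn; synthetic data stands in for the proprietary survey, while the 70-30 stratified split, depth of 10 and 20 selected features mirror the figures quoted in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the survey: 7,000 respondents, 254 features, custom segments
X, y = make_classification(n_samples=7000, n_features=254, n_informative=30,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Fit once to rank features, then keep only the 20 most important ones
rf_full = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=0)
rf_full.fit(X_tr, y_tr)
top20 = np.argsort(rf_full.feature_importances_)[::-1][:20]

rf_small = RandomForestClassifier(n_estimators=200, max_depth=10, random_state=0)
rf_small.fit(X_tr[:, top20], y_tr)
print("Accuracy with 20 features:",
      accuracy_score(y_te, rf_small.predict(X_te[:, top20])))
```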

Keywords: machine learning, supervised learning, data science, random forest, classification, prediction, predictive modeling

Procedia PDF Downloads 72
141 Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms

Authors: Shaik Ayesha Fathima, Shaik Noor Jahan, Duvvada Rajeswara Rao

Abstract:

Earth's environment and its evolution can be observed through satellite images in near real time. Through satellite imagery, remote sensing data provide crucial information that can be used for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and monitoring climate change. The objective of this project is to propose a method for classifying satellite images according to multiple predefined land cover classes. The proposed approach involves collecting data in image format, pre-processing the data, feeding the processed data into the proposed algorithm, and analyzing the obtained result. Algorithms commonly used in satellite imagery classification include U-Net, Random Forest, DeepLabv3, CNN, ANN, ResNet, etc. In this project, the DeepLabv3 (atrous convolution) algorithm is used for land cover classification, with the DeepGlobe land cover classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates in cascade or in parallel to determine the scale of segments.
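
A minimal sketch of running a DeepLabv3 model from torchvision on one satellite tile; the seven-class assumption follows the DeepGlobe label set, while the weights, input size and preprocessing are placeholders.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 7  # DeepGlobe land cover classes (assumed label count)

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

# Hypothetical pre-processed satellite tile: batch of one 3-channel 512x512 image
tile = torch.rand(1, 3, 512, 512)

with torch.no_grad():
    logits = model(tile)["out"]            # shape (1, NUM_CLASSES, 512, 512)
    prediction = logits.argmax(dim=1)      # per-pixel land cover class map

print(prediction.shape, prediction.unique())
```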

Keywords: area calculation, atrous convolution, deep globe land cover classification, deepLabv3, land cover classification, resnet 50

Procedia PDF Downloads 119
140 Intelligent Fishers Harness Aquatic Organisms and Climate Change

Authors: Shih-Fang Lo, Tzu-Wei Guo, Chih-Hsuan Lee

Abstract:

Tropical fisheries are vulnerable to the physical and biogeochemical oceanic changes associated with climate change. Warmer temperatures and extreme weather have been damaging the abundance and growth patterns of aquatic organisms. In recent years, shrinking fish stocks and labor shortages have increased the threat to global aquacultural production, so building a climate-resilient and sustainable mechanism has become an urgent, important task for global citizens. To tackle the problem, Taiwanese fishermen apply artificial intelligence (AI) technology. In brief, the AI system (1) measures real-time water quality and chemical parameters in fish ponds; (2) monitors fish stock through segmentation, detection, and classification; and (3) incorporates fishermen's previous experiences, perceptions, and real-life practices. Applying this system can stabilize aquacultural production and potentially increase the labor force. Furthermore, this AI technology can build a more resilient and sustainable system for fishermen, so that they can mitigate the influence of extreme weather while maintaining or even increasing their aquacultural production. In the future, as the AI system collects and analyzes more and more data, it can be applied to different regions of the world and adapt to future technological or societal changes, continuously providing the most relevant and useful information for fishermen worldwide.

Keywords: aquaculture, artificial intelligence (AI), real-time system, sustainable fishery

Procedia PDF Downloads 91
139 [Keynote Talk]: Green Supply Chain Management Concepts Applied on Brazilian Animal Nutrition Industries

Authors: Laura G. Caixeta, Maico R. Severino

Abstract:

One of the biggest challenges that industries face nowadays is incorporating sustainability practices into their operations, and the Green Supply Chain Management (GSCM) concept assists industries in such incorporation. For the full application of this concept, it is important that the enterprises of a supply chain coordinate their GSCM practices among themselves. Note that this type of analysis has mostly been carried out in the context of developed countries and of sectors considered major impactors (automotive and mining, among others). The purpose of this paper is to analyze how GSCM concepts are applied in Brazilian animal nutrition industries. The method used was the case study. A supply chain relationship was selected, composed of an animal nutrition products manufacturer (Enterprise A) and its supplier of animal waste, such as blood and viscera (Enterprise B). First, a literature review was carried out to identify the main GSCM practices. Second, an individual analysis of the application of the GSCM concept was performed for each selected enterprise. For the observed practices, the coordination of each practice in this supply chain was studied, and proposals for applying GSCM were developed for the practices not observed. The findings of this research were: a) the systematization of the main GSCM practices, namely Internal Environment Management, Green Consumption, Green Design, Green Manufacturing, Green Marketing, Green Packaging, Green Procurement, Green Recycling, Life Cycle Analysis, Consultation Selection Method, Environmental Risk Sharing, Investment Recovery, and Reduced Transportation Time; b) the identification of GSCM practices in Enterprise A (7 fully applied, 3 partially applied and 3 not applied); c) the identification of GSCM practices in Enterprise B (2 fully applied, 2 partially applied and 9 not applied); d) the identification of how Enterprise A incentivizes and coordinates the GSCM practices in this relationship; e) proposals for the application and coordination of the other GSCM practices in this supply chain relationship. Based on the study, it can be concluded that it is possible to apply GSCM in animal nutrition industries, and that when one supply chain echelon is motivated to apply GSCM concepts, these concepts are deployed to the other supply chain echelons through the coordination (orchestration) of the first echelon.

Keywords: animal nutrition industries, coordination, green supply chain management, supply chain management, sustainability

Procedia PDF Downloads 108
138 Forest Harvesting Policies and Practices in Tropical Forest of Terengganu, Malaysia: Industry Experiences

Authors: Mohd Zaki Hamzah, Roslan Rani, Ahmad Bazli Razali, Satiful Bahri Mamat, Abdul Hadi Ripin, Mohd Harun Esa

Abstract:

Ever since 1901, forest management and silviculture practices in Malaysia have been frequently reviewed and updated to take into account changes in forest conditions, markets, timber demand and supply, and the technical advances achievable in industrial processes, logging and forest harvesting. Currently, the forest management system practiced in Peninsular Malaysia is the Selective Management System (SMS), introduced in 1978. This system requires the selection of a management (felling) regime based on pre-felling forest inventory (Pre-F) data to ensure economical harvesting and adequate standing stands for subsequent rounds of felling, while maintaining ecological balance and environmental quality. SMS regulates forest harvesting through area and volume controls, with a cutting cycle of 30 years. Most of the forest management units (FMUs) in Peninsular Malaysia implementing SMS have been certified by the Forest Stewardship Council (FSC) and/or the Programme for the Endorsement of Forest Certification (PEFC); one such FMU belongs to Kumpulan Pengurusan Kayu Kayan Terengganu (KPKKT). KPKKT, a timber management subsidiary of Golden Pharos Berhad (GPB), adopts the SMS to manage its 108,900 ha of timber concession areas in its role as log supplier to three subsidiaries of GPB. KPKKT is also responsible for the sustainable development and management of its concession in accordance with Sustainable Forest Management (SFM) standards, addressing the loss of forest cover and forest degradation, securing forest-based economic, social and environmental benefits, and ecologically protecting forests while mobilising financial resources for the implementation of sustainable forest management planning, harvesting, monitoring and the marketing of products. This paper details the management and harvesting guidelines imposed by the controlling government agency and the harvesting processes undertaken by KPKKT to comply with these guidelines while supplying timber to the relevant subsidiaries (downstream mills under GPB).

Keywords: sustainable forest management, silviculture, reduce impact logging, forest certification

Procedia PDF Downloads 61
137 Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelet Transform

Authors: Bamidele Samson Alobalorun, Ifedotun Roseline Idowu

Abstract:

Biometric technologies use human body parts for unique and reliable identification based on physiological traits. The iris recognition system is a biometric-based identification method; the human iris has discriminating characteristics that make the method efficient, provided that its distinct features are extracted to generate accurate authentication of persons. In this study, an approach to iris recognition using a 2D Gabor filter for feature extraction is applied to iris templates. The 2D Gabor filter formulates the patterns used for training, which are then passed to a Hamming distance matching technique for recognition. A comparison of results is presented for two iris image subjects with filter matching indices of 1, 2, 3, 4 and 5, based on the CASIA iris image database. By comparing the results of the two subjects, the computational time of the developed models, measured in terms of training time and average testing time of the Hamming distance classifier, was determined, with a best recognition accuracy of 96.11%. Iris localization (segmentation) was performed using Daugman's integro-differential operator, and normalization was carried out with Daugman's rubber sheet model.
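
A simplified, hedged sketch of the matching idea: a small 2D Gabor filter bank encodes a normalized (rubber-sheet) iris strip into a binary code, and two codes are compared with the Hamming distance. The filter parameters, strip size and decision threshold are illustrative assumptions, not the values used in the study.

```python
import cv2
import numpy as np

def iris_code(normalized_strip, n_orientations=4):
    """Binary iris code from the signs of Gabor filter responses."""
    bits = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel(ksize=(9, 9), sigma=2.0, theta=theta,
                                    lambd=8.0, gamma=0.5, psi=0)   # assumed parameters
        response = cv2.filter2D(normalized_strip.astype(np.float32), cv2.CV_32F, kernel)
        bits.append(response > 0)          # quantize the filter response sign to one bit per pixel
    return np.concatenate([b.ravel() for b in bits])

def hamming_distance(code_a, code_b):
    return np.mean(code_a != code_b)       # fraction of disagreeing bits

# Hypothetical normalized iris strips (e.g. 64 x 512 rubber-sheet images)
enrolled = np.random.rand(64, 512)
probe = enrolled + 0.05 * np.random.rand(64, 512)
dist = hamming_distance(iris_code(enrolled), iris_code(probe))
print("Hamming distance:", dist, "-> match" if dist < 0.32 else "-> no match")
```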

Keywords: Daugman rubber sheet, feature extraction, Hamming distance, iris recognition system, 2D Gabor wavelet transform

Procedia PDF Downloads 39
136 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams

Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina

Abstract:

It is estimated that, worldwide, one death every 10 seconds (about 2 million deaths per year) is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae such as pulmonary fibrosis, compromising patients' quality of life. Evaluation of such sequelae is usually performed subjectively by radiology specialists, and subjective evaluation may show inter- and intra-observer variation. Chest X-ray examination is the diagnostic imaging method most commonly used for monitoring patients diagnosed with TB and the least costly for the institution. The application of computational algorithms is therefore of utmost importance for a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is to use computer algorithms to quantify pulmonary impairment before and after treatment of patients with pulmonary TB. The X-ray images of 10 patients with a TB diagnosis confirmed by sputum smear examination were studied. Initially, segmentation of the total lung area was performed (posteroanterior and lateral views); the region compromised by pulmonary sequelae was then targeted. Through morphological operators and the application of a noise-removal tool, it was possible to determine the compromised lung volume. The largest difference found between pre- and post-treatment was 85.85% and the smallest was 54.08%.
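
As a hedged sketch of the quantification step only (not the segmentation itself), once binary masks of the total lung and of the compromised region are available, the impairment percentage and its pre-/post-treatment change reduce to pixel counting; the masks below are synthetic placeholders.

```python
import numpy as np

def impairment_percentage(lung_mask, lesion_mask):
    """Fraction of the segmented lung area occupied by the fibrotic region, in percent."""
    lung_pixels = np.count_nonzero(lung_mask)
    lesion_pixels = np.count_nonzero(lesion_mask & lung_mask)
    return 100.0 * lesion_pixels / max(lung_pixels, 1)

# Hypothetical pre- and post-treatment masks (boolean arrays from the segmentation step)
lung = np.ones((512, 512), bool)
pre_lesion = np.zeros_like(lung)
pre_lesion[100:300, 100:300] = True
post_lesion = np.zeros_like(lung)
post_lesion[150:250, 150:250] = True

pre = impairment_percentage(lung, pre_lesion)
post = impairment_percentage(lung, post_lesion)
print(f"pre={pre:.2f}%  post={post:.2f}%  relative change={100 * (pre - post) / pre:.2f}%")
```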

Keywords: algorithm, radiology, tuberculosis, x-rays exam

Procedia PDF Downloads 388
135 Podcasting: A Tool for an Enhanced Learning Experience of Introductory Courses to Science and Engineering Students

Authors: Yaser E. Greish, Emad F. Hindawy, Maryam S. Al Nehayan

Abstract:

Introductory courses such as General Chemistry I, General Physics I and General Biology need special attention, as students taking these courses are usually in their first year at the university. In addition to the language barrier most of them face, they also encounter other difficulties when these elementary courses are taught in the traditional way; changing the routine teaching method for these courses is therefore mandated. In this regard, podcasting of chemistry lectures was used as an add-on to the traditional and non-traditional methods of teaching chemistry to science and non-science students. Podcasts refer to video files that are distributed in a digital format through the Internet using personal computers or mobile devices; pedagogical strategy is another way of characterizing podcasts, and three distinct teaching approaches are evident in the current literature: receptive viewing, problem-solving, and created video podcasts. While the digital format and dispensing of video podcasts have stabilized over the past eight years, the types of podcasts vary considerably according to their purpose, degree of segmentation, pedagogical strategy, and academic focus. In this regard, the whole syllabus of the 'General Chemistry I' course was developed as podcasts and delivered to students throughout the semester. Students used the podcasted files extensively during their studies, especially in preparing for exams. Student feedback strongly supported the idea of using podcasting, reflecting its effect on the overall understanding of the subject and a consequent improvement of their grades.

Keywords: podcasting, introductory course, interactivity, flipped classroom

Procedia PDF Downloads 243
134 Automatic Registration of Rail Profile Based on Local Maximum Curvature Entropy

Authors: Hao Wang, Shengchun Wang, Weidong Wang

Abstract:

To address the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for the automatic extraction of the circular arc on the inner or outer side of the rail waist, achieving high-precision registration of the rail profile. First, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on profile curve fitting. Then, based on the curvature distribution characteristics of the fitted curve, an interval search algorithm based on the maximum curvature entropy of a dynamic window is proposed to realize the automatic segmentation of the small circular arc. Finally, we fit the two circle centers as matching reference points based on the small circular arcs on both sides and align the measured profile to the standard design profile. Static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, indicating small measurement errors and high repeatability. A dynamic test also verified the repeatability of the method in the train-running environment, with a dynamic measurement deviation of rail wear within 0.2 mm and high repeatability.
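
A hedged sketch of two of the steps described above: fitting a polynomial to a measured profile and evaluating its curvature, from whose distribution a maximum-curvature window can then be searched. The polynomial degree and the synthetic profile are assumptions; the truncated-residual-histogram refinement is not reproduced.

```python
import numpy as np

# Synthetic noisy rail-waist profile points (stand-in for structured-light measurements)
x = np.linspace(0.0, 40.0, 400)
y = 0.002 * (x - 20.0) ** 2 + np.random.normal(scale=0.02, size=x.size)

# Step 1: polynomial fit (the paper additionally trims outliers via a residual histogram)
coeffs = np.polyfit(x, y, deg=6)            # assumed degree
p = np.poly1d(coeffs)

# Step 2: curvature kappa = |y''| / (1 + y'^2)^(3/2) along the fitted curve
dy = np.polyder(p, 1)(x)
d2y = np.polyder(p, 2)(x)
kappa = np.abs(d2y) / (1.0 + dy ** 2) ** 1.5

peak = np.argmax(kappa)                     # location of maximum curvature
print("max curvature", kappa[peak], "at x =", x[peak])
```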

Keywords: curvature entropy, profile registration, rail wear, structured light, train-running

Procedia PDF Downloads 232
133 Information Management Approach in the Prediction of Acute Appendicitis

Authors: Ahmad Shahin, Walid Moudani, Ali Bekraki

Abstract:

This research presents a predictive data mining model for the accurate diagnosis of acute appendicitis, with the purpose of maximizing health service quality, minimizing morbidity/mortality, and reducing cost. Acute appendicitis is a very common condition that requires timely, accurate diagnosis and surgical intervention. Although its treatment is simple and straightforward, its diagnosis is still difficult because no single sign, symptom, laboratory test or imaging examination accurately confirms acute appendicitis in all cases, which contributes to increased morbidity and negative appendectomies. In this study, the authors propose to generate an accurate model for predicting acute appendicitis in patients based, firstly, on a segmentation technique associated with the ABC algorithm to segment the patients; secondly, on applying fuzzy logic to process the massive volume of heterogeneous and noisy data (age, sex, fever, white blood cell count, neutrophilia, CRP, urine, ultrasound, CT, appendectomy, etc.) in order to express knowledge and analyze the relationships among the data in a comprehensive manner; and thirdly, on applying a dynamic programming technique to reduce the number of data attributes. The proposed model is evaluated against a set of benchmark techniques and on a set of benchmark classification problems (osteoporosis, diabetes and heart disease) obtained from the UCI repository and other data sources.

Keywords: healthcare management, acute appendicitis, data mining, classification, decision tree

Procedia PDF Downloads 325
132 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers

Authors: C. V. Aravinda, H. N. Prakash

Abstract:

In this paper, we convey a fusion approach and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized so that text identification can be performed correctly. The second step involves extracting relevant and informative features, and the third step implements the classification decision. The three stages involved are thus data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques for obtaining features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image: whenever the central pixel of the window is on, the two edge fragments (i.e. connected sequences of pixels) emerging from this central pixel are considered, and their directions are measured and stored as pairs. A joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection, to simplify the feature extraction task, reduce classification system complexity, reduce running time, and improve classification accuracy.
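
A simplified, hedged sketch of the edge-hinge idea: for every "on" pixel of an edge image, the directions towards neighbouring edge pixels within a small window are quantized and accumulated pairwise into a joint histogram. The window radius and number of direction bins are assumptions; a full implementation would trace the two emerging edge fragments explicitly.

```python
import numpy as np

def edge_hinge_histogram(edges, radius=3, n_bins=12):
    """Joint distribution of direction pairs around edge pixels (simplified)."""
    hist = np.zeros((n_bins, n_bins))
    ys, xs = np.nonzero(edges)
    h, w = edges.shape
    for y, x in zip(ys, xs):
        # Directions towards neighbouring edge pixels within the window
        dirs = []
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w and edges[ny, nx]:
                    angle = np.arctan2(dy, dx) % (2 * np.pi)
                    dirs.append(int(angle / (2 * np.pi) * n_bins) % n_bins)
        # Accumulate every pair of directions observed around this pixel
        for i in range(len(dirs)):
            for j in range(i + 1, len(dirs)):
                hist[dirs[i], dirs[j]] += 1
    total = hist.sum()
    return hist / total if total else hist   # joint probability distribution

# Hypothetical tiny edge image (True = edge pixel)
edges = np.zeros((32, 32), dtype=bool)
edges[10, 5:25] = True
edges[10:25, 24] = True
print(edge_hinge_histogram(edges).round(3))
```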

Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages

Procedia PDF Downloads 473
131 Characteristic Sentence Stems in Academic English Texts: Definition, Identification, and Extraction

Authors: Jingjie Li, Wenjie Hu

Abstract:

Phraseological units in academic English texts have been a central focus of recent corpus linguistic research. A wide variety of phraseological units has been explored, including collocations, chunks, lexical bundles, patterns, semantic sequences, etc. This paper describes a special category of clause-level phraseological units, namely Characteristic Sentence Stems (CSSs), with a view to describing their defining criteria and extraction method. CSSs are contiguous lexico-grammatical sequences which contain a subject-predicate structure and which are frame expressions characteristic of academic writing. The extraction of CSSs consists of six steps: part-of-speech tagging, n-gram segmentation, structure identification, significance-of-occurrence calculation, text range calculation, and overlapping sequence reduction. The significance-of-occurrence calculation is the crux of this study: it computes both the internal association and the boundary independence of a CSS, testing the significance of its occurrence from both inside and outside perspectives. A new normalization algorithm is also introduced into the calculation of LocalMaxs for reducing overlapping sequences. It is argued that many sentence stems are so recurrent in academic texts that the most typical of them have become habitual ways of making meaning in academic writing. Therefore, studies of CSSs could have potential implications and reference value for academic discourse analysis, English for Academic Purposes (EAP) teaching and writing.
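
The paper's exact significance measure is not reproduced here, but the internal-association step can be illustrated with a common "glue" score, the symmetric conditional probability used in LocalMaxs-style extraction; the toy corpus and the restriction to bigrams are assumptions.

```python
from collections import Counter

def bigram_scp(tokens):
    """Symmetric conditional probability, a simple internal-association (glue) score."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scores = {}
    for (w1, w2), count in bigrams.items():
        p_xy = count / (n - 1)
        scores[(w1, w2)] = p_xy ** 2 / ((unigrams[w1] / n) * (unigrams[w2] / n))
    return scores

# Toy corpus standing in for a POS-tagged academic text
tokens = ("the results suggest that the results indicate that "
          "these results suggest that").split()
for pair, glue in sorted(bigram_scp(tokens).items(), key=lambda kv: -kv[1])[:3]:
    print(pair, round(glue, 3))
```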

Keywords: characteristic sentence stem, extraction method, phraseological unit, the statistical measure

Procedia PDF Downloads 140
130 Understanding Indonesian Smallholder Dairy Farmers’ Decision to Adopt Multiple Farm-Level Innovations

Authors: Rida Akzar, Risti Permani, Wahida, Wendy Umberger

Abstract:

Adoption of farm innovations may increase farm productivity, and therefore improve market access and farm incomes. However, most studies that look at the level and drivers of innovation adoption only focus on a specific type of innovation. Farmers may consider multiple innovation options, and constraints such as budget, environment, scarcity of labour supply, and the cost of learning. There have been some studies proposing different methods to combine a broad variety of innovations into a single measurable index. However, little has been done to compare these methods and assess whether they provide similar information about farmer segmentation by their ‘innovativeness’. Using data from a recent survey of 220 dairy farm households in West Java, Indonesia, this study compares and considers different methods of deriving an innovation index, including expert-weighted innovation index; an index derived from the total number of adopted technologies; and an index of the extent of adoption of innovation taking into account both adoption and disadoption of multiple innovations. Second, it examines the distribution of different farming systems taking into account their innovativeness and farm characteristics. Results from this study will inform policy makers and stakeholders in the dairy industry on how to better design, target and deliver programs to improve and encourage farm innovation, and therefore improve farm productivity and the performance of the dairy industry in Indonesia.

Keywords: adoption, dairy, household survey, innovation index, Indonesia, multiple innovations, West Java

Procedia PDF Downloads 312
129 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: biometric data are acquired from an individual, feature sets are extracted and compared against the set stored in the vault, and a comparison result is returned. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication, and fusion of biometric features is critical for achieving a higher level of security and for preventing possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, including convolutional neural network and support vector machine (SVM) methods for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality; the extracted feature sets are fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
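
A hedged sketch of feature-level fusion only: feature vectors extracted from two instances of the same finger vein modality are concatenated and fed to an SVM classifier. The random descriptors, subject counts and kernel choice are assumptions standing in for real vein features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects, n_samples, dim = 20, 10, 64
labels = np.repeat(np.arange(n_subjects), n_samples)
offset = labels[:, None] * 0.3                      # subject-specific shift in feature space

# Hypothetical descriptors from two captures (instances) of each subject's finger vein
instance_a = rng.normal(size=(labels.size, dim)) + offset
instance_b = rng.normal(size=(labels.size, dim)) + offset

# Feature-level fusion: simple concatenation of the two feature sets
fused = np.hstack([instance_a, instance_b])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.3,
                                          stratify=labels, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("Identification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```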

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method.

Procedia PDF Downloads 121
128 The Impact of Neighborhood Effects on the Economic Mobility of the Inhabitants of Three Segregated Communities in Salvador (Brazil)

Authors: Stephan Treuke

Abstract:

The paper analyses neighbourhood effects on the economic mobility of the inhabitants of three segregated communities of Salvador (Brazil), in other words, the socio-economic advantages and disadvantages affecting the lives of poor people due to their embeddedness in specific socio-residential contexts. Recent studies performed in Brazilian metropolises have concentrated on the structural dimensions of negative externalities in order to explain neighbourhood-level variation in a range of phenomena (delinquency, violence, access to the labour market and education) in spatially isolated and socially homogeneous slum areas (favelas). However, major disagreement remains as to whether the contiguity between residents of poor neighbourhoods and higher-class condominium dwellers provides structures of opportunity or fosters socio-spatial stigmatization. Based on a set of interviews investigating the variability of interpersonal networks and their activation in the struggle for economic inclusion, the study confirms that the proximity of Nordeste de Amaralina to middle- and upper-class communities positively affects access to labour opportunities. Nevertheless, residential stigmatization, as well as structures of social segmentation, annihilates these potentials. The lack of exposure to individuals and groups from outside the favela's social, educational and cultural context restricts the structures of opportunity to the local level; residents' interpersonal networks therefore reveal a high degree of redundancy and localism, based on bonding ties connecting family and neighbourhood members. The resilience of segregational structures in Plataforma contributes to the naturalization of social distance patterns. Its embeddedness in a socially homogeneous residential area (Subúrbio Ferroviário), growing informally and beyond official urban policies, encourages the construction of isotopic patterns of sociability, sharing the same values, social preferences, perspectives and behaviour models. Whereas its spatial isolation correlates with the scarcity of economic opportunities, the social heterogeneity of the Fazenda Grande II interviewees and the socializing effects of public institutions mitigate the negative repercussions of segregation. The composition of their networks admits a higher degree of heterophily and a greater proportion of bridging ties, accounting for access to broader information assets and facilitating economic mobility. The variability observed across the three scenarios urges reflection on the responsibility of urban policy when it comes to preventing or consolidating the social segregation process in Salvador. Instead of promoting the local development of the Plataforma favela, public housing programs prioritize technocratic housing solutions without providing for the residents' socio-economic integration. The impact of the negative externalities related to a homogeneously poor neighbourhood is amplified in peripheral areas, rendering their inhabitants socially invisible and isolated from other social groups.
The example of Nordeste de Amaralina portrays the failing interest of urban policy in bridging the social distances that structure Brazilian society's rigid stratification model, founded on mechanisms of segmentation (unequal access to the labour market and education system, public transport, social security and legal protection) and generating permanent conflicts between two socioeconomically distant groups living in geographic contiguity. Finally, in the case of Fazenda Grande II, public investment in both housing projects and complementary infrastructure (e.g. schools, hospitals, a community center, police stations, recreation areas) contributes to the residents' socio-economic inclusion.

Keywords: economic mobility, neighborhood effects, Salvador, segregation

Procedia PDF Downloads 254
127 Russian Pipeline Natural Gas Export Strategy under Uncertainty

Authors: Koryukaeva Ksenia, Jinfeng Sun

Abstract:

Europe has been a traditional importer of Russian natural gas for more than 50 years; in 2021, the Russian state-owned company Gazprom supplied about a third of all gas consumed in Europe. The mutual dependence between Russia and Europe in terms of natural gas supplies has long raised concerns about the energy security of both sides, and the issue has become more urgent than ever in view of the Russian invasion of Ukraine and the ensuing large-scale geopolitical conflict, which make the future of Russian natural gas supplies, and of global gas markets as well, highly uncertain. Hence, the main purpose of this study is to gain insight into the possible futures of Russian pipeline natural gas exports through a scenario planning method based on Monte Carlo simulation within the LUSS model framework, and to propose Russian pipeline natural gas export strategies based on the obtained scenario planning results. The scenario analysis revealed that recent geopolitical disputes disturbed the traditional, longstanding model of Russian pipeline gas exports and that, as a result, the prospects and pathways for Russian pipeline gas on world markets will differ significantly from those before 2022. Specifically, our main findings show that (i) the events of 2022 generated many uncertainties for the long-term future of Russian pipeline gas exports in both western and eastern supply directions, including geopolitical, regulatory, economic, infrastructure and other uncertainties; (ii) according to the scenario modelling results, Russian pipeline exports will face many challenges in the future in both western and eastern directions, and a decrease in pipeline gas exports will inevitably affect the country's natural gas production and significantly reduce fossil fuel export revenues, jeopardizing the energy security of the country; and (iii) according to the proposed strategies, in order to ensure stable long-term export supplies in the changing environment, Russia may need to adjust its traditional export strategy by diversifying export flows and products, entering new markets, adapting its contracting mechanisms, increasing competitiveness, and building a reputation as a reliable gas supplier.

Keywords: Russian natural gas, pipeline natural gas, uncertainty, scenario simulation, export strategy

Procedia PDF Downloads 33
126 Image Processing of Scanning Electron Microscope Micrograph of Ferrite and Pearlite Steel for Recognition of Micro-Constituents

Authors: Subir Gupta, Subhas Ganguly

Abstract:

In this paper, we demonstrate a new area of application of image processing to metallurgical images, creating more opportunities for structure-property-correlation-based approaches to alloy design. The present exercise focuses on the development of image processing tools suitable for phase segmentation, grain boundary detection and recognition of micro-constituents in SEM micrographs of ferrite and pearlite steels. A comprehensive set of micrographs was developed experimentally, encompassing variation in ferrite and pearlite volume fractions, with images taken at different magnifications (500X, 1000X, 15000X, 2000X, 3000X and 5000X) under a scanning electron microscope. The variation in volume fraction was achieved using four different plain carbon steels containing 0.1, 0.22, 0.35 and 0.48 wt% C, heat treated under annealing and normalizing treatments. The obtained pool of micrographs was arbitrarily divided into two parts to develop training and testing sets. Statistical recognition features for the ferrite and pearlite constituents were developed by learning from the training set of micrographs, and the obtained features for microstructure pattern recognition were applied to the test set. The analysis of the results shows that the developed strategy can successfully detect the micro-constituents across the wide range of magnifications and volume fractions of the constituents in the structure, with an accuracy of about +/- 5%.

Keywords: SEM micrograph, metallurgical image processing, ferrite pearlite steel, microstructure

Procedia PDF Downloads 173