Search results for: semantic processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4073

2483 Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features

Authors: Vesna Kirandziska, Nevena Ackovska, Ana Madevska Bogdanova

Abstract:

Emotion recognition is a challenging and still open problem, both for intelligent systems and for psychology. In this paper, both voice features and facial features are used to build an emotion recognition system. Support Vector Machine classifiers are built using raw data from video recordings. The results obtained for emotion recognition are given, and the validity and expressiveness of different emotions are discussed. A comparison is made between classifiers built from facial data only, from voice data only, and from the combination of both. The need for a better combination of the information from facial expressions and voice data is argued.
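
The fusion idea in the abstract above can be illustrated with a toy sketch. The paper trains Support Vector Machines on measured recordings; as a stand-in, this sketch uses a nearest-centroid classifier on concatenated voice and facial feature vectors, and all feature names and values here are invented for illustration.

```python
import math

def nearest_centroid_fit(X, y):
    """Compute one centroid (mean feature vector) per class label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest (Euclidean)."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], x))

# Hypothetical fused features: [voice_pitch, voice_energy] + [mouth_open, brow_raise]
train_X = [[0.9, 0.8, 0.7, 0.9],   # "surprise" sample
           [0.8, 0.9, 0.8, 0.8],   # "surprise" sample
           [0.2, 0.1, 0.1, 0.2],   # "neutral" sample
           [0.1, 0.2, 0.2, 0.1]]   # "neutral" sample
train_y = ["surprise", "surprise", "neutral", "neutral"]

model = nearest_centroid_fit(train_X, train_y)
pred = nearest_centroid_predict(model, [0.85, 0.75, 0.75, 0.85])
```

Training separate models on the first two columns, the last two columns, and all four corresponds to the voice-only, facial-only, and combined comparison the abstract describes.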

Keywords: emotion recognition, facial recognition, signal processing, machine learning

Procedia PDF Downloads 313
2482 Thoughts on the Informatization Technology Innovation of Cores and Samples in China

Authors: Honggang Qu, Rongmei Liu, Bin Wang, Yong Xu, Zhenji Gao

Abstract:

China lags considerably behind developed countries in the informatization technology of cores and samples. Against the current background of promoting technology innovation, how to strengthen this informatization for the National Cores and Samples Archives, a national innovation research center, is an important research topic. This paper summarizes the development status of cores and samples informatization technology, identifies the gaps and deficiencies, and proposes innovation research directions and content, including data extraction, recognition, processing, integration, and application, so as to provide reference and guidance for the archives' future innovation research and to better support geological technology innovation in China.

Keywords: cores and samples, informatization technology, innovation, suggestion

Procedia PDF Downloads 119
2481 A Comparative Study of Positive and Negative Electronic Word-of-Mouth on the SERVQUAL Scale: The Case of an Armed Forces General Hospital in Taiwan

Authors: Po-Chun Lee, Li-Lin Liang, Ching-Yuan Huang

Abstract:

Purpose: Research on electronic word-of-mouth (eWOM) and online reviews has been widely used in service industry management research in recent years, and the SERVQUAL scale is the most commonly used method to measure service quality. The purpose of this research is therefore to combine eWOM and online reviews with the SERVQUAL scale in a comparative study of positive and negative reviews of an armed forces general hospital in Taiwan. Data sources: This research obtained online word-of-mouth comment data posted on Google Maps for the hospital over the past ten years through Internet data mining. Research methods: This study uses semantic content analysis to classify the reviews according to the revised PZB SERVQUAL scale and then performs statistical analysis. Results of data synthesis: The results disclosed that negative reviews of this military hospital have been increasing year by year, while positive word-of-mouth has trended downward under the COVID-19 epidemic. Among the five SERVQUAL dimensions of PZB, positive reviews performed best on “Assurance,” with a positive review rate of 58.89%, followed by “Responsiveness” at 43.33%. In negative reviews, “Assurance” performed worst, with a negative review rate of 70.99%, followed by “Responsiveness” at 29.01%. Conclusions: The total number of eWOM reviews of the military hospital has grown in recent years, but positive word-of-mouth has declined since the COVID-19 epidemic, while negative word-of-mouth has grown substantially. Regardless of polarity, what patients care most about is “Assurance,” that is, the professional attitude and skills of the medical staff, which most urgently needs strengthening.
In addition, good “Reliability” helps build positive word-of-mouth, whereas poor “Responsiveness” can easily lead to the spread of negative word-of-mouth. This study suggests that the hospital focus its service-quality management and audits on these dimensions.

Keywords: quality of medical service, electronic word-of-mouth, armed forces general hospital

Procedia PDF Downloads 174
2480 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once a document has been digitized through the scanning system and binarized, skew correction is required before further image analysis. Research efforts have been devoted to this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on several performance criteria, the most important being accuracy of skew angle detection, range of detectable skew angles, processing speed, computational complexity, and consequently memory usage. The standard Hough Transform has successfully been applied to text document skew angle estimation. However, its accuracy depends largely on how fine the angular step size is, so it consumes more time and memory as accuracy increases, especially when the number of pixels is considerably large. Whenever the Hough transform is used, there is a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified algorithm resolves the contradiction between memory space, running time, and accuracy. Our algorithm starts by estimating the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy.
Then, to increase accuracy, if the angle estimated by the basic Hough algorithm is x degrees, the basic algorithm is run again over a narrow range around x degrees at an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented in MATLAB. Memory usage and processing time are also tabulated, assuming skew angles between 0° and 45°. The simulation results, obtained in MATLAB, show the high performance of our algorithm, with less computational time and memory used in detecting document skew for a variety of documents with different levels of complexity.
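
The coarse-to-fine refinement described above can be sketched independently of the Hough accumulator itself. Assuming a scoring function that stands in for the accumulator peak at a given angle, each pass narrows the search to the neighbourhood of the previous estimate while shrinking the step size tenfold:

```python
def coarse_to_fine_angle(score, lo=0.0, hi=45.0, decimals=2):
    """Find the angle maximising `score`, adding one decimal place per pass."""
    step = 1.0
    candidates = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    best = max(candidates, key=score)          # integer-degree estimate
    for _ in range(decimals):
        step /= 10.0                           # refine around the last estimate
        candidates = [best + (i - 10) * step for i in range(21)]
        best = max(candidates, key=score)
    return round(best, decimals)

# Stand-in for the Hough peak response: sharply peaked at 12.34 degrees
skew = coarse_to_fine_angle(lambda a: -(a - 12.34) ** 2)
```

Each refinement pass evaluates only 21 candidate angles, so two extra decimal places cost 42 extra evaluations instead of the roughly 4500 a uniform 0.01-degree sweep over 0 to 45 degrees would need, which is the time/space saving the abstract argues for.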

Keywords: Hough transform, skew detection, skew angle, skew correction, text document

Procedia PDF Downloads 153
2479 Challenges in Video Based Object Detection in Maritime Scenario Using Computer Vision

Authors: Dilip K. Prasad, C. Krishna Prasath, Deepu Rajan, Lily Rachmawati, Eshan Rajabally, Chai Quek

Abstract:

This paper discusses the technical challenges in maritime image processing and machine vision for video streams generated by cameras. Even well-documented problems such as horizon detection and registration of frames in a video are very challenging in maritime scenarios, and more advanced problems such as background subtraction and object detection in video streams are harder still. Challenges arising from the dynamic nature of the background, the unavailability of static cues, the presence of small objects against distant backgrounds, and illumination effects all contribute, as discussed here.

Keywords: autonomous maritime vehicle, object detection, situation awareness, tracking

Procedia PDF Downloads 453
2478 Practical Guide to the Design of Dynamic Block-Type Shallow Foundations Supporting Vibrating Machines

Authors: Dodi Ikhsanshaleh

Abstract:

When subjected to dynamic load, a foundation oscillates in a way that depends on the soil behaviour, the geometry and inertia of the foundation, and the dynamic excitation. A practical guideline for analysing block-type foundations excited by the dynamic load of vibrating machines is presented. The analysis uses the lumped mass parameter method to express dynamic soil properties such as stiffness and damping. Numerical examples are given for the design of a block-type foundation supporting a gas turbine compressor, an important equipment package in a gas processing plant.

Keywords: block foundation, dynamic load, lumped mass parameter

Procedia PDF Downloads 487
2477 An Eigen-Approach for Estimating the Direction-of-Arrival of an Unknown Number of Signals

Authors: Dia I. Abu-Al-Nadi, M. J. Mismar, T. H. Ismail

Abstract:

A technique for estimating the direction-of-arrival (DOA) of an unknown number of source signals is presented using the eigen-approach. The eigenvector corresponding to the minimum eigenvalue of the autocorrelation matrix yields the minimum output power of the array, and the array polynomial built from this eigenvector possesses roots on the unit circle. The pseudo-spectrum is therefore found by perturbing the phases of the roots one by one and calculating the corresponding array output power. The results indicate that the DOAs and the number of source signals are estimated accurately in the presence of a wide range of input noise levels.
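
The link between the array polynomial's unit-circle roots and the DOAs can be illustrated with a two-element toy example (half-wavelength element spacing assumed, not taken from the paper). Placing one root of the weight polynomial at the phase corresponding to a 30-degree source nulls the array output power exactly in that direction, so scanning the output power recovers the DOA:

```python
import cmath
import math

def array_output_power(w, theta_deg, d_over_lambda=0.5):
    """|W(z)|^2 on the unit circle, z = exp(j*2*pi*(d/lambda)*sin(theta))."""
    z = cmath.exp(2j * math.pi * d_over_lambda * math.sin(math.radians(theta_deg)))
    return abs(sum(c * z ** n for n, c in enumerate(w))) ** 2

# Weight vector = coefficients of (z - z0), whose single root sits on the
# unit circle at the phase of a source arriving from 30 degrees.
z0 = cmath.exp(2j * math.pi * 0.5 * math.sin(math.radians(30.0)))
w = [-z0, 1.0]

grid = [t / 10.0 for t in range(-900, 901)]    # -90 to 90 deg, 0.1 deg steps
null_angle = min(grid, key=lambda t: array_output_power(w, t))
```

In the eigen-approach, the weight vector is instead the minimum-eigenvalue eigenvector of the autocorrelation matrix and the pseudo-spectrum is obtained by perturbing the root phases, but the null-at-DOA mechanism is the same.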

Keywords: array signal processing, direction-of-arrival, antenna arrays, eigenvalues, eigenvectors, Lagrange multiplier

Procedia PDF Downloads 331
2476 A Case Study of Limited Dynamic Voltage Frequency Scaling in Low-Power Processors

Authors: Hwan Su Jung, Ahn Jun Gil, Jong Tae Kim

Abstract:

Power management techniques are necessary to save power in microprocessors. By changing the frequency and/or operating voltage of the processor, dynamic voltage frequency scaling (DVFS) can control power consumption. In this paper, we perform a case study to find optimal power state transitions for DVFS. We propose an equation to find the optimal ratio between executions of states while taking into account the processing-time deadline and the power state transition delay overhead. The experiment is performed on the Cortex-M4 processor, and an average power saving of 6.5% is observed when DVFS is applied under the deadline condition.
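
A simplified version of such a state-ratio equation can be written down for two power states. Assuming a workload of known cycle count, a single state transition whose delay is deducted from the deadline, and lower power in the slow state, the cheapest feasible schedule spends just enough time in the fast state to meet the deadline. All numbers below are hypothetical, not the paper's Cortex-M4 measurements:

```python
def dvfs_split(work_cycles, deadline_s, f_hi, f_lo, p_hi, p_lo, t_switch_s=0.0):
    """Fraction of the available time to spend in the high-power state so the
    work just meets the deadline, plus the resulting energy in joules."""
    t_avail = deadline_s - t_switch_s              # time left after the transition
    x = (work_cycles / t_avail - f_lo) / (f_hi - f_lo)
    x = min(max(x, 0.0), 1.0)                      # clamp to a feasible fraction
    energy = t_avail * (x * p_hi + (1.0 - x) * p_lo)
    return x, energy

# 80 Mcycles due in 1 s; 100 MHz / 200 mW fast state, 50 MHz / 60 mW slow state
x, energy = dvfs_split(80e6, 1.0, 100e6, 50e6, 0.200, 0.060)
```

Here 60% of the second must run at 100 MHz; any larger fraction still meets the deadline but only increases energy.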

Keywords: deadline, dynamic voltage frequency scaling, power state transition

Procedia PDF Downloads 453
2475 FloodNet: Classification of Post-Flood Scenes with a High-Resolution Aerial Imagery Dataset

Authors: Molakala Mourya Vardhan Reddy, Kandimala Revanth, Koduru Sumanth, Beena B. M.

Abstract:

Emergency response and recovery operations are severely hampered by natural catastrophes, especially floods. Understanding post-flood scenarios is essential to disaster management because it facilitates quick evaluation and decision-making. To this end, we introduce FloodNet, a brand-new high-resolution aerial picture collection created especially for comprehending post-flood scenes. A varied collection of excellent aerial photos taken during and after flood occurrences makes up FloodNet, which offers comprehensive representations of flooded landscapes, damaged infrastructure, and changed topographies. The dataset provides a thorough resource for training and assessing computer vision models designed to handle the complexity of post-flood scenarios, including a variety of environmental conditions and geographic regions. Pixel-level semantic segmentation masks are used to label the pictures in FloodNet, allowing for a more detailed examination of flood-related characteristics, including debris, water bodies, and damaged structures. Furthermore, temporal and positional metadata improve the dataset's usefulness for longitudinal research and spatiotemporal analysis. For activities like flood extent mapping, damage assessment, and infrastructure recovery projection, we provide baseline standards and evaluation metrics to promote research and development in the field of post-flood scene comprehension. By integrating FloodNet into machine learning pipelines, it will be easier to create reliable algorithms that will help policymakers, urban planners, and first responders make choices both before and after floods. The goal of the FloodNet dataset is to support advances in computer vision, remote sensing, and disaster response technologies by providing a useful resource for researchers. FloodNet helps to create creative solutions for boosting communities' resilience in the face of natural catastrophes by tackling the particular problems presented by post-flood situations.

Keywords: image classification, segmentation, computer vision, natural disaster, unmanned aerial vehicle (UAV), machine learning

Procedia PDF Downloads 74
2474 Fabrication of ZnO Nanorods Based Biosensor via Hydrothermal Method

Authors: Muhammad Tariq, Jafar Khan Kasi, Samiullah, Ajab Khan Kasi

Abstract:

Biosensors play a vital role in industrial, clinical, and chemical analysis applications. Among other techniques, ZnO-based biosensing is an attractive approach due to the exceptional chemical and electrical properties of ZnO. ZnO nanorods have a positively charged isoelectric point, which helps immobilize the negatively charged glucose oxidase (GOx). Here, we report ZnO nanorod based biosensors for the immobilization of GOx. The ZnO nanorods were grown by the hydrothermal method on an indium tin oxide (ITO) substrate. The biosensors were fabricated through batch processing using conventional photolithography. The GOx buffer solutions were prepared in phosphate at a pH value of around 7.3. The biosensors effectively immobilized the GOx, and the result was analyzed by measuring voltage and current on the nanostructures.

Keywords: hydrothermal growth, sol-gel, zinc oxide, biosensors

Procedia PDF Downloads 297
2473 Application of an Optical Method Based on a Laser Device as Non-Destructive Testing for the Calculation of Mechanical Deformation

Authors: R. Daïra, V. Chalvidan

Abstract:

We present the speckle interferometry method for determining the deformation of a piece. This holographic imaging method uses a CCD camera for simultaneous digital recording of two states, object and reference, and the reconstruction is obtained numerically. The method has the advantage of being simpler than the methods currently available, and it does not suffer from the faults of in-line holographic configurations. Furthermore, it is entirely digital and avoids heavy analysis after recording the hologram. This work was carried out in the HOLO 3 laboratory (optical metrology laboratory in Saint-Louis, France) and consists in controlling, qualitatively and quantitatively, the deformation of an object by using a CCD camera connected to a computer equipped with fringe analysis software.

Keywords: speckle, nondestructive testing, interferometry, image processing

Procedia PDF Downloads 493
2472 Joint Simulation and Estimation for Geometallurgical Modeling of Crushing Energy Consumption in Mineral Processing Plants

Authors: Farzaneh Khorram, Xavier Emery

Abstract:

This paper aims to create a crushing consumption energy (CCE) block model and determine the blocks with the potential for maximum grinding energy consumption in the study area. For this purpose, joint estimation (co-kriging) and joint simulation (turning bands and plurigaussian methods) are used to predict the CCE from its correlation with the SAG power index (SPI), A×B, and the ball mill Bond work index (BWI). The analysis shows that TBCOSIM (turning bands co-simulation) and the plurigaussian method give more realistic results than co-kriging, which seems logical given the nature of geometallurgical data, the linearity of the kriging method, and the smoothing effect of kriging.

Keywords: plurigaussian, turning band, cokriging, geometallurgy

Procedia PDF Downloads 63
2471 Interactive, Topic-Oriented Search Support by a Centroid-Based Text Categorisation

Authors: Mario Kubek, Herwig Unger

Abstract:

Centroid terms are single words that semantically and topically characterise text documents and so may serve as a very compact representation in automatic text processing. In the present paper, centroids are used to measure the relevance of text documents with respect to a given search query. Thus, a new graph-based paradigm for searching texts in large corpora is proposed and evaluated against keyword-based methods. The first, promising experimental results demonstrate the usefulness of the centroid-based search procedure. It is shown that especially the routing of search queries in interactive and decentralised search systems can be greatly improved by applying this approach. A detailed discussion of further fields of application completes this contribution.
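
The notion of a centroid term can be made concrete on a toy corpus. One simple reading, assumed here purely for illustration: build a co-occurrence graph linking words that share a sentence, and take as centroid the term with the smallest mean shortest-path distance to all other terms.

```python
from collections import deque

def cooccurrence_graph(sentences):
    """Undirected graph linking words that co-occur in a sentence."""
    graph = {}
    for sent in sentences:
        words = sent.split()
        for w in words:
            graph.setdefault(w, set()).update(x for x in words if x != w)
    return graph

def avg_distance(graph, source):
    """Mean BFS hop count from `source` to every other term."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    others = [d for term, d in dist.items() if term != source]
    return sum(others) / len(others)

def centroid_term(sentences):
    graph = cooccurrence_graph(sentences)
    return min(graph, key=lambda term: avg_distance(graph, term))

term = centroid_term(["cat chases mouse", "mouse eats cheese"])
```

Document relevance to a query can then be scored by the graph distance between the query's terms and each document's centroid, which is the idea behind routing queries in a decentralised search system.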

Keywords: search algorithm, centroid, query, keyword, co-occurrence, categorisation

Procedia PDF Downloads 278
2470 Influence of Chemical Treatment on the Elastic Properties of a 100% Cotton Crepe Band

Authors: Bachir Chemani, Rachid Halfaoui, Madani Maalem

Abstract:

The manufacturing technology of cotton bands is very delicate and depends on the choice of certain parameters, such as the twist of the warp yarn. The fabric's elasticity is achieved without the use of any elastic, chemically expanded, artificial, or synthetic material, and it is capable of creating pressures useful for therapeutic treatments. Before use, the band is subjected to specific preparation treatments to obtain a certain elasticity; however, during this treatment some parameters regress. The dependence of the manufacturing parameters on the quality of the chemical treatment was confirmed. The aim of this work is to improve the properties of the fabric through appropriate development of the manufacturing technology. Finally, a treatment method is recommended for the 100% cotton band.

Keywords: elastic, cotton, processing, torsion

Procedia PDF Downloads 383
2469 Effect of Carbon Nanotubes on Nanocomposite from Nanofibrillated Cellulose

Authors: M. Z. Shazana, R. Rosazley, M. A. Izzati, A. W. Fareezal, I. Rushdan, A. B. Suriani, S. Zakaria

Abstract:

There is increasing interest in the development of flexible energy storage applications of carbon nanotubes (CNT) and nanofibrillated cellulose (NFC). In this study, a nanocomposite consisting of carbon nanotubes mixed with a suspension of nanofibrillated cellulose from oil palm empty fruit bunch (OPEFB) is prepared. The use of carbon nanotubes as an additive improved the conductivity and mechanical properties of the NFC nanocomposite. The nanocomposites were characterized for electrical conductivity and for mechanical properties in uniaxial tension, with tensile tests measuring the bonding of fibers in the nanocomposite. The processing route is environmentally friendly and leads to well-mixed structures and good results.

Keywords: carbon nanotube (CNT), nanofibrillated cellulose (NFC), mechanical properties, electrical conductivity

Procedia PDF Downloads 331
2468 Slow Pyrolysis of Bio-Wastes: Environmental, Exergetic, and Energetic (3E) Assessment

Authors: Daniela Zalazar-Garcia, Erick Torres, German Mazza

Abstract:

Slow pyrolysis of a pellet of pistachio waste was studied using a lab-scale stainless-steel reactor. Experiments were conducted at different heating rates (5, 10, and 15 K/min), and a 3-E (environmental, exergetic, and energetic) analysis for the processing of 20 kg/h of bio-waste was carried out. Experimental results showed that biochar and gas yields decreased with an increase in the heating rate (43 to 36% and 28 to 24%, respectively), while the bio-oil yield increased (29 to 40%). Finally, from the 3-E analysis and the experimental results, it can be suggested that an increase in the heating rate results in a higher pyrolysis exergetic efficiency (70%), due to the increased yield of bio-oil with high energy content.

Keywords: 3E assessment, bio-waste pellet, life cycle assessment, slow pyrolysis

Procedia PDF Downloads 216
2467 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries

Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman

Abstract:

There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but for researchers looking to find comparisons across multiple datasets or specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English language and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although there are some unique challenges posed by working with Earth data. One is the sheer size of the databases – it would be infeasible to replicate or analyze all the data stored by an organization like The National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from metadata available for datasets in NASA’s Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of elastic search applied to the metadata. 
This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. Perhaps more importantly, it establishes the foundation for a platform through which common English can access knowledge that previously required considerable effort and experience. By making these public data accessible to everyone, this work has the potential to transform environmental understanding, engagement, and action.

Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems

Procedia PDF Downloads 145
2466 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology

Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James

Abstract:

Landslides are the major geo-environmental problem of the Himalaya because of its high ridges, steep slopes, deep valleys, and complex system of streams. They are mainly triggered by rainfall and earthquakes, and they cause severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, situated in the lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8 imagery, and Cartosat DEM data. This paper presents the use of a weighted overlay method in LHZ using fourteen causative factors. The data layers generated and co-registered were slope, aspect, relative relief, soil cover, rainfall intensity, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer, and reservoir buffer. Seismic analysis is performed using peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI, and supervised classification, were widely used for terrain factor extraction. Lithological features, LULC, drainage patterns, lineaments, and structural features are extracted using digital image processing, with colour, tone, topography, and stream drainage pattern from the imagery used to analyse geological features. Slope, aspect, and relative relief maps are created from the Cartosat DEM data, which is also used for the detailed drainage analysis, including TWI, SPI, drainage buffer, and reservoir buffer. In the weighted overlay method, the comparative importance of the causative factors is obtained from experience.
In this method, each factor's influence weight is multiplied by the rating of the corresponding class, the summed result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.
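
The weighted overlay step itself reduces to a cell-wise weighted sum of factor ratings followed by reclassification. A minimal sketch with three of the fourteen factors and invented weights, ratings, and class thresholds:

```python
def landslide_hazard_index(layers, weights):
    """Cell-wise weighted sum of factor ratings (all layers equally long)."""
    n_cells = len(next(iter(layers.values())))
    return [sum(weights[name] * layers[name][i] for name in layers)
            for i in range(n_cells)]

def reclassify(lhi, low=3.0, high=6.0):
    """Map the continuous hazard index into discrete hazard classes."""
    return ["low" if v < low else "moderate" if v < high else "high" for v in lhi]

# Hypothetical ratings (1-9 scale) for three cells; weights sum to 1
layers = {"slope":     [2, 5, 9],
          "rainfall":  [1, 6, 8],
          "lithology": [3, 4, 7]}
weights = {"slope": 0.5, "rainfall": 0.3, "lithology": 0.2}

lhi = landslide_hazard_index(layers, weights)
classes = reclassify(lhi)
```

In a GIS, the same operation runs over raster layers rather than short lists, but the weight-times-rating logic is identical.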

Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing

Procedia PDF Downloads 129
2465 Model Development for Real-Time Human Sitting Posture Detection Using a Camera

Authors: Jheanel E. Estrada, Larry A. Vea

Abstract:

This study developed models to detect proper/improper sitting posture using the built-in web camera, which detects the locations of and distances between upper body points (chin, manubrium, and acromion process). It also established relationships between human body frames and proper sitting posture. The models were developed by training well-known classifiers such as KNN, SVM, MLP, and Decision Tree on data collected from 60 students with different body frames. The Decision Tree classifier demonstrated the most promising performance, with an accuracy of 95.35% and a kappa of 0.907 for head and shoulder posture. Results also showed relationships between body frame and posture through Body Mass Index.
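
A decision tree on such low-dimensional distance features is, at its simplest, a cascade of threshold tests. The sketch below trains a single-split stump (a depth-1 tree) on invented chin-manubrium and acromion-height features; the actual study trained full Decision Tree, KNN, SVM, and MLP classifiers on measured data.

```python
def train_stump(X, y):
    """Pick the single feature/threshold rule with the best training accuracy.
    Assumes exactly two class labels."""
    labels = sorted(set(y))
    best = None
    for f in range(len(X[0])):
        values = sorted({x[f] for x in X})
        for thr in [(a + b) / 2 for a, b in zip(values, values[1:])]:
            for above in labels:
                below = [l for l in labels if l != above][0]
                preds = [above if x[f] > thr else below for x in X]
                acc = sum(p == t for p, t in zip(preds, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, thr, above, below)
    return best[1:]

def predict(stump, x):
    feature, thr, above, below = stump
    return above if x[feature] > thr else below

# Invented features per subject: [chin-manubrium distance, acromion height] in cm
X = [[12.0, 3.0], [11.5, 2.8], [8.0, 3.1], [7.5, 2.9]]
y = ["proper", "proper", "improper", "improper"]

stump = train_stump(X, y)
```

A real decision tree recursively applies the same split search to the subsets on each side of the chosen threshold.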

Keywords: posture, spinal points, gyroscope, image processing, ergonomics

Procedia PDF Downloads 327
2464 A Modest Proposal for Deep-Sixing Propositions in the Philosophy of Language

Authors: Patrick Duffley

Abstract:

Hanks (2021) identifies three Frege-inspired commitments concerning propositions that are widely shared across the philosophy of language: (1) propositions are the primary, inherent bearers of representational properties and truth-conditions; (2) propositions are neutral representations possessing a ‘content’ that is devoid of ‘force’; (3) propositions can be entertained or expressed without being asserted. Hanks then argues that the postulate of neutral content must be abandoned, and the primary bearers of truth-evaluable representation must be identified as the token acts of assertoric predication that people perform when they are thinking or speaking about the world. Propositions are ‘types of acts of predication, which derive their representational features from their tokens.’ Their role is that of ‘classificatory devices that we use for the purposes of identifying and individuating mental states and speech acts,’ so that ‘to say that Russell believes that Mont Blanc is over 4000 meters high is to classify Russell’s mental state under a certain type, and thereby distinguish that mental state from others that Russell might possess.’ It is argued in this paper that there is no need to classify an utterance of 'Russell believes that Mont Blanc is over 4000 meters high' as a token of some higher-order utterance-type in order to identify what Russell believes; the meanings of the words themselves and the syntactico-semantic relations between them are sufficient. In our view, what Hanks has accomplished in effect is to build a convincing argument for dispensing with propositions completely in the philosophy of language.
By divesting propositions of the role of being the primary bearers of representational properties and truth-conditions and fittingly transferring this role to the token acts of predication that people perform when they are thinking or speaking about the world, he has situated truth in its proper place and obviated any need for abstractions like propositions to explain how language can express things that are true. This leaves propositions with the extremely modest role of classifying mental states and speech acts for the purposes of identifying and individuating them. It is demonstrated here however that there is no need whatsoever to posit such abstract entities to explain how people identify and individuate such states/acts. We therefore make the modest proposal that the term ‘proposition’ be stricken from the vocabulary of philosophers of language.

Keywords: propositions, truth-conditions, predication, Frege, truth-bearers

Procedia PDF Downloads 62
2463 Researching Apache Hama: A Pure BSP Computing Framework

Authors: Kamran Siddique, Yangwoo Kim, Zahid Akhtar

Abstract:

In recent years, technological advancements have led to a deluge of data from distinctive domains, and the development of solutions based on parallel and distributed computing still has a long way to go. That is why research and development on massive computing frameworks is continuously growing. At this stage, highlighting a potential research area along with key insights could be an asset for researchers in the field. Therefore, this paper explores one of the emerging distributed computing frameworks, Apache Hama, a Top-Level Project under the Apache Software Foundation based on the Bulk Synchronous Parallel (BSP) model. We present an unbiased and critical examination of Apache Hama and conclude with research directions in order to assist interested researchers.

Keywords: apache hama, bulk synchronous parallel, BSP, distributed computing

Procedia PDF Downloads 247
2462 Evaluation of Main Factors Affecting the Choice of a Freight Forwarder: A Sri Lankan Exporter’s Perspective

Authors: Ishani Maheshika

Abstract:

The intermediary role performed by freight forwarders in exportation has become significant in fulfilling businesses’ supply chain needs in this dynamic world. Since the success of exporter’s business is at present, highly reliant on supply chain optimization, cost efficiency, profitability, consistent service and responsiveness, the decision of selecting the most beneficial freight forwarder has become crucial for exporters. Although there are similar foreign researches, prior researches covering Sri Lankan setting are not in existence. Moreover, results vary with time, nature of industry and business environment factors. Therefore, a study from the perspective of Sri Lankan exporters was identified as a requisite to be researched. In order to identify and prioritize key factors which have affected the exporter’s decision in selecting freight forwarders in Sri Lankan context, Sri Lankan export industry was stratified into 22 sectors based on commodity using stratified sampling technique. One exporter from each sector was then selected using judgmental sampling to have a sample of 22. Factors which were identified through a pilot survey, was organized under 6 main criteria. A questionnaire was basically developed as pairwise comparisons using 9-point semantic differential scale and comparisons were done within main criteria and subcriteria. After a pre-testing, interviews and e-mail questionnaire survey were conducted. Data were analyzed using Analytic Hierarchy Process to determine priority vectors of criteria. Customer service was found to be the most important main criterion for Sri Lankan exporters. It was followed by reliability and operational efficiency respectively. The criterion of the least importance is company background and reputation. Whereas small sized exporters pay more attention to rate, reliability is the major concern among medium and large scale exporters. Irrespective of seniority of the exporter, reliability is given the prominence. 
Responsiveness is the most important sub-criterion among Sri Lankan exporters. The consistency of judgments with respect to the main criteria was verified through the consistency ratio, which was less than 10%. To remain competitive, freight forwarders should develop customized marketing strategies based on each target group’s requirements and expectations when offering services, so as to retain existing exporters and attract new ones.
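The AHP computation described above (a priority vector from a pairwise comparison matrix, checked with a consistency ratio) can be sketched as follows. The 3x3 matrix and its judgment values are purely illustrative, not the study's actual data; only the method (principal eigenvector, CI/RI-based consistency ratio) follows standard AHP practice.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three main criteria
# (customer service, reliability, operational efficiency). The values
# are illustrative judgments, not the survey's measured comparisons.
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # index of the principal eigenvalue
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
priorities = w / w.sum()              # normalized priority vector

CI = (lam_max - n) / (n - 1)          # consistency index
RI = 0.58                             # Saaty's random index for n = 3
CR = CI / RI                          # consistency ratio; accept if < 0.10

print(priorities, CR)
```

For this illustrative matrix the first criterion dominates and the consistency ratio is well under the 10% threshold reported in the study.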

Keywords: analytic hierarchy process, freight forwarders, main criteria, Sri Lankan exporters, subcriteria

Procedia PDF Downloads 403
2461 Obstacle Detection and Path Tracking Application for Disables

Authors: Aliya Ashraf, Mehreen Sirshar, Fatima Akhtar, Farwa Kazmi, Jawaria Wazir

Abstract:

Vision, the basis for performing navigational tasks, is absent or greatly reduced in visually impaired people, who consequently face many hurdles. To increase the navigational capabilities of visually impaired people, a desktop application, ODAPTA, is presented in this paper. The application uses a camera to capture video of the surroundings, applies various image processing algorithms to obtain information about the path and obstacles, tracks the obstacles, and delivers that information to the user through voice commands. Experimental results show that the application works effectively for straight paths in daylight.
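A minimal sketch of the region-of-interest idea from the abstract: inspect the lower-central part of a grayscale frame (where the walking path appears) and flag an obstacle when strong intensity edges appear there. The ROI bounds, gradient threshold, and decision rule are all assumptions for illustration; they are not ODAPTA's actual algorithm.

```python
import numpy as np

def obstacle_in_roi(frame, threshold=0.01):
    """Toy ROI check: flag an obstacle when the fraction of strong
    vertical-gradient pixels in the lower-central region exceeds a
    threshold. All constants here are illustrative assumptions."""
    h, w = frame.shape
    roi = frame[h // 2:, w // 4: 3 * w // 4]           # lower-central path region
    grad = np.abs(np.diff(roi.astype(float), axis=0))  # vertical gradient
    edge_fraction = (grad > 30).mean()                 # strong-edge density
    return edge_fraction > threshold

# Synthetic frames: a uniform path vs. a path with a dark box (obstacle)
clear = np.full((120, 160), 200, dtype=np.uint8)
blocked = clear.copy()
blocked[80:110, 60:100] = 40
print(obstacle_in_roi(clear), obstacle_in_roi(blocked))
```

A real system would add tracking across frames and a text-to-speech layer for the voice commands, as the abstract describes.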

Keywords: visually impaired, ODAPTA, Region of Interest (ROI), driver fatigue, face detection, expression recognition, CCD camera, artificial intelligence

Procedia PDF Downloads 546
2460 Bioactivity Evaluation of Cucurbitin Derived Enzymatic Hydrolysates

Authors: Ž. Vaštag, Lj. Popović, S. Popović

Abstract:

After cold pressing of pumpkin oil, the defatted pumpkin oil cake (PUOC) was used as raw material for the production of bio-functional hydrolysates. In this study, the in vitro bioactivities of an alcalase hydrolysate (AH) and a pepsin hydrolysate (PH) prepared from the major pumpkin 12S globulin (cucurbitin) are compared. The hydrolysates were produced at the optimum reaction conditions (temperature, pH) for each enzyme over 60 min. Bioactivity testing included antioxidant and angiotensin I-converting enzyme inhibitory activity assays. The hydrolysates showed high potential as natural antioxidants and possibly as antihypertensive agents in functional food or nutraceuticals. Additionally, preliminary studies showed that both hydrolysates exhibit modest α-amylase inhibitory activity, which indicates their hypoglycemic potential.

Keywords: cucurbitin, alcalase, pepsin, protein hydrolysates, in vitro bioactivity

Procedia PDF Downloads 308
2459 Synthesis Characterisation and Evaluation of Co-Processed Wax Matrix Excipient for Controlled Release Tablets Formulation

Authors: M. Kalyan Raj, Vinay Umesh Rao, M. Sudhakar

Abstract:

The work focuses on the development of a directly compressible controlled-release co-processed excipient using a melt granulation technique. Erodible wax matrix systems are fabricated in which three different types of waxes are co-processed separately with maize starch in different ratios by melt granulation. The resultant free-flowing powder is characterized by FTIR, NMR, mass spectrometry, and gel permeation chromatography. Controlled-release tablets of Aripiprazole were also formulated, and their dissolution profile was compared with the target product profile given in the Zysis patent (Patent no. 20100004262) for a once-a-week Aripiprazole formulation.

Keywords: co-processing, hot melt extrusion, direct compression, maize starch, stearic acid, aripiprazole

Procedia PDF Downloads 405
2458 Evaluation of Cognitive Benefits among Differently Abled Subjects with Video Game as Intervention

Authors: H. Nagendra, Vinod Kumar, S. Mukherjee

Abstract:

In this study, the potential benefits of playing an action video game among congenitally deaf and dumb subjects are reported in terms of EEG ratio indices. The frontal and occipital lobes are associated with the development of motor skills, cognition, visual information processing, and color recognition. Sixteen hours of first-person shooter action video game play resulted in an increase of the ratios β/(α+θ) and β/θ in the frontal and occipital lobes. This can be attributed to the enhancement of certain aspects of cognition among deaf and dumb subjects.
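The ratio indices β/(α+θ) and β/θ are computed from band powers of the EEG spectrum. The sketch below shows the computation on a synthetic signal; the sampling rate, band edges (θ: 4-8 Hz, α: 8-13 Hz, β: 13-30 Hz), and simple FFT-based power estimate are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)   # 4 s of synthetic "EEG"
rng = np.random.default_rng(0)
# Synthetic mixture: theta (6 Hz), alpha (10 Hz), beta (20 Hz) + noise
x = (np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 10 * t)
     + 1.5 * np.sin(2 * np.pi * 20 * t)
     + 0.1 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(x.size, 1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2

def band_power(lo, hi):
    """Total spectral power in [lo, hi) Hz."""
    return psd[(freqs >= lo) & (freqs < hi)].sum()

theta = band_power(4, 8)
alpha = band_power(8, 13)
beta = band_power(13, 30)

ratio1 = beta / (alpha + theta)   # β/(α+θ)
ratio2 = beta / theta             # β/θ
print(ratio1, ratio2)
```

In the study, an increase in these ratios after game play is interpreted as cognitive enhancement; here the stronger synthetic beta component simply makes both ratios exceed one.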

Keywords: cognitive enhancement, video games, EEG band powers, deaf and dumb subjects

Procedia PDF Downloads 434
2457 Powdered Beet Red Roots Using as Adsorbent to Removal of Methylene Blue Dye from Aqueous Solutions

Authors: Abdulali Bashir Ben Saleh

Abstract:

Powdered beet red roots (PBRR) were used as an adsorbent to remove methylene blue dye (a typical cationic, or basic, dye) from aqueous solutions. The present study shows that the beet red root powder exhibits an adsorption affinity for the dye. The adsorption processes were carried out at various concentrations and contact times and over a wide pH range of 2.5-11. Adsorption isotherm equations, such as the Freundlich and Langmuir models, were applied to calculate the values of the respective constants. The study found that the introduced adsorbent can be used to remove cationic dyes such as methylene blue from aqueous solutions.
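Fitting the Langmuir and Freundlich constants typically uses the linearized forms Ce/qe = Ce/qmax + 1/(KL·qmax) and ln qe = ln KF + (1/n)·ln Ce. The sketch below fits both on synthetic equilibrium data; the concentrations and model parameters are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical equilibrium data: Ce = equilibrium dye concentration (mg/L),
# qe = amount adsorbed (mg/g), generated from a Langmuir model
# qe = qmax*KL*Ce / (1 + KL*Ce) with invented parameters.
qmax_true, KL_true = 50.0, 0.1
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
qe = qmax_true * KL_true * Ce / (1 + KL_true * Ce)

# Linearized Langmuir fit: Ce/qe = Ce/qmax + 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit = 1 / slope
KL_fit = slope / intercept

# Linearized Freundlich fit: ln qe = ln KF + (1/n) * ln Ce
f_slope, f_intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
KF, n_inv = np.exp(f_intercept), f_slope

print(qmax_fit, KL_fit, KF, n_inv)
```

On real data, one would compare the correlation coefficients of the two linearized fits to decide which isotherm describes the adsorption better.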

Keywords: beet red root, removal of dyes, methylene blue, adsorption

Procedia PDF Downloads 330
2456 Image Enhancement Algorithm of Photoacoustic Tomography Using Active Contour Filtering

Authors: Prasannakumar Palaniappan, Dong Ho Shin, Chul Gyu Song

Abstract:

The photoacoustic images are obtained from a custom-developed linear array photoacoustic tomography system. Biological specimens are imitated in phantom tests in order to retrieve a fully functional photoacoustic image. The acquired image undergoes active region-based contour filtering to remove noise and accurately segment the object area for further processing. The universal back projection method is used as the image reconstruction algorithm. The active contour filtering is evaluated by computing the signal-to-noise ratio and comparing it with that of other filtering methods.
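The SNR-based evaluation can be illustrated on synthetic data. The sketch below uses a plain 3x3 mean filter as a stand-in for the active contour filtering (which is considerably more involved) and measures SNR against a noise-free reference; the image, noise level, and filter are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "phantom" image: a bright square target on a dark background
clean = np.zeros((64, 64))
clean[24:40, 24:40] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

def mean_filter3(img):
    """3x3 mean filter via shifted sums (a simple stand-in for the
    active contour filtering described in the abstract)."""
    p = np.pad(img, 1, mode="edge")
    n = img.shape[0]
    return sum(p[i:i + n, j:j + n] for i in range(3) for j in range(3)) / 9.0

def snr_db(img, ref):
    """SNR in dB relative to the noise-free reference image."""
    noise = img - ref
    return 10 * np.log10((ref ** 2).sum() / (noise ** 2).sum())

filtered = mean_filter3(noisy)
print(snr_db(noisy, clean), snr_db(filtered, clean))
```

Comparing filters by their post-filtering SNR, as done here, mirrors the evaluation strategy in the abstract; in real photoacoustic data there is no noise-free reference, so SNR is instead estimated from signal and background regions.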

Keywords: contour filtering, linear array, photoacoustic tomography, universal back projection

Procedia PDF Downloads 397
2455 Application of the Discrete-Event Simulation When Optimizing of Business Processes in Trading Companies

Authors: Maxat Bokambayev, Bella Tussupova, Aisha Mamyrova, Erlan Izbasarov

Abstract:

The optimization of business processes in trading companies is reviewed in this report. A “Wholesale Customer Order Handling Process” business process model applicable to small and medium businesses is presented. It is proposed to apply an algorithm for automating customer order processing, which will significantly reduce labor costs and time expenditure and increase the profitability of companies. The optimized business process is an element of the information system for accounting for the activity of a spare parts trading network. The considered algorithm may also find wider application in the trading industry.
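A discrete-event simulation of an order-handling process, in its simplest form, steps from one order event to the next rather than advancing time continuously. The sketch below models a single clerk processing orders FIFO with exponential interarrival and service times; all rates and the single-server structure are illustrative assumptions, not the model from the report.

```python
import random

def simulate_orders(n_orders=200, mean_interarrival=5.0,
                    mean_service=4.0, seed=42):
    """Minimal discrete-event simulation of a one-clerk wholesale
    order-handling process. Orders arrive at random, queue up, and are
    served FIFO; returns the average time an order spends waiting."""
    rng = random.Random(seed)

    # Generate arrival times (exponential interarrival gaps)
    t = 0.0
    arrivals = []
    for _ in range(n_orders):
        t += rng.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)

    clerk_free_at = 0.0
    total_wait = 0.0
    for arrival in arrivals:
        start = max(arrival, clerk_free_at)   # wait if the clerk is busy
        total_wait += start - arrival
        clerk_free_at = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_orders

print(simulate_orders())
```

Running the simulation with different service rates is how such a model supports optimization decisions: automating order processing corresponds to a shorter mean service time and hence a shorter average wait.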

Keywords: business processes, discrete-event simulation, management, trading industry

Procedia PDF Downloads 339
2454 AI Applications in Accounting: Transforming Finance with Technology

Authors: Alireza Karimi

Abstract:

Artificial Intelligence (AI) is reshaping various industries, and accounting is no exception. With the ability to process vast amounts of data quickly and accurately, AI is revolutionizing how financial professionals manage, analyze, and report financial information. In this article, we explore the diverse applications of AI in accounting and its profound impact on the field.

Automation of Repetitive Tasks: One of the most significant contributions of AI in accounting is automating repetitive tasks. AI-powered software can handle data entry, invoice processing, and reconciliation with minimal human intervention. This not only saves time but also reduces the risk of errors, leading to more accurate financial records.

Pattern Recognition and Anomaly Detection: AI algorithms excel at pattern recognition. In accounting, this capability is leveraged to identify unusual patterns in financial data that might indicate fraud or errors. AI can swiftly detect discrepancies, enabling auditors and accountants to focus on resolving issues rather than hunting for them.

Real-Time Financial Insights: AI-driven tools, using natural language processing and computer vision, can process documents faster than ever. This enables organizations to have real-time insight into their financial status, empowering decision-makers with up-to-date information for strategic planning.

Fraud Detection and Prevention: AI is a powerful tool in the fight against financial fraud. It can analyze vast transaction datasets, flagging suspicious activities and reducing the likelihood of financial misconduct going unnoticed. This proactive approach safeguards a company's financial integrity.

Enhanced Data Analysis and Forecasting: Machine learning, a subset of AI, is used for data analysis and forecasting. By examining historical financial data, AI models can provide forecasts and insights, aiding businesses in making informed financial decisions and optimizing their financial strategies.
Artificial Intelligence is fundamentally transforming the accounting profession. From automating mundane tasks to enhancing data analysis and fraud detection, AI is making financial processes more efficient, accurate, and insightful. As AI continues to evolve, its role in accounting will only become more significant, offering accountants and finance professionals powerful tools to navigate the complexities of modern finance. Embracing AI in accounting is not just a trend; it's a necessity for staying competitive in the evolving financial landscape.
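The anomaly detection idea discussed above can be illustrated with a deliberately simple statistical baseline: flag transactions whose amounts lie far from the mean. The z-score rule, threshold, and ledger values below are toy assumptions standing in for the machine-learning models the article describes.

```python
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Flag amounts more than z_threshold standard deviations from the
    mean. A toy statistical stand-in for ML-based anomaly detection."""
    mu = statistics.fmean(amounts)
    sigma = statistics.pstdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

# Hypothetical ledger: twenty routine payments plus one outlier
ledger = [95.0 + (i % 7) for i in range(20)] + [9500.0]
print(flag_anomalies(ledger))
```

Production systems replace this global z-score with models that learn per-account and per-vendor baselines, which is what lets them surface subtle discrepancies rather than only extreme outliers.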

Keywords: artificial intelligence, accounting automation, financial analysis, fraud detection, machine learning in finance

Procedia PDF Downloads 59