Search results for: content based image retrieval (CBIR)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 33046


31066 Violence Detection and Tracking on Moving Surveillance Video Using Machine Learning Approach

Authors: Abe Degale D., Cheng Jian

Abstract:

Violent action recognition is crucial when creating automated video surveillance systems. In recent years, hand-crafted feature detectors have been the primary method for violence detection, such as the recognition of fighting activity. Researchers have also looked into learning-based representational models. These methods produced good accuracy results on benchmark datasets created especially for the detection of violent sequences in sports and movies. However, the Hockey dataset's videos with surveillance camera motion make it difficult for these algorithms to learn discriminating features. Deep representation-based methods have shown success in image recognition and human activity detection. For detecting violent images and identifying aggressive human behaviours, this research proposes a deep representation-based model using the transfer learning idea. The results show that the suggested approach outperforms state-of-the-art accuracy levels by learning the most discriminating features, attaining 99.34% and 99.98% accuracy on the Hockey and Movies datasets, respectively.

Keywords: violence detection, faster RCNN, transfer learning, surveillance video

Procedia PDF Downloads 77
31065 Safety Date Fruits for Human Being as Affected by Nitrogen Fertilization Applications in Egypt

Authors: A. M. Attalla, A. F. Ibrahim, Laila Y. Mostaffa

Abstract:

This study was conducted over three seasons (2010, 2011, and 2012) on the Zaghloul date palm cultivar grown in calcareous soil in Alexandria governorate, Egypt. The palms received the recommended dose of mineral N alone or combined with different rates of organic N, with or without biofertilizer, to study the effect of such treatments on date palm yield and on fruit nitrate and nitrite content, given their negative influence on humans, animals, and the environment. The results clarified that all treatments of organic and biofertilizers were effective in improving date palm yield and decreased the fruit content of NO2 and NO3 in comparison with 100% mineral N. It was also noticed that the combined treatment of 50% mineral N + 50% organic manure with biofertilizer was superior in increasing yield and decreasing fruit NO2 and NO3 content. Hence, it could be concluded that minimizing the use of chemical nitrogen fertilizer to half the recommended dose, through the addition of 50% mineral N + 50% organic manure with biofertilizer, and the utilization of organic and biofertilizers in general, is a promising alternative to chemical fertilizers to avoid pollution and reduce the cost of mineral fertilizers.

Keywords: organic and bio fertilizers, mineral fertilizer, nitrate, nitrite, zaghloul date palm cv

Procedia PDF Downloads 424
31064 Model Canvas and Process for Educational Game Design in Outcome-Based Education

Authors: Ratima Damkham, Natasha Dejdumrong, Priyakorn Pusawiro

Abstract:

This paper explored a solution to help designers of educational games using the digital educational game model canvas (DEGMC) and digital educational game form (DEGF), based on an Outcome-Based Education program. DEGMC and DEGF help designers develop an overview of the game while designing and planning it. These tools make it possible to clearly assess players' abilities against learning outcomes and to support the design of game-based learning. Designers can balance educational content and entertainment by applying the strategies of the Business Model Canvas, and can design the gameplay and the assessment of players' abilities from the required learning outcomes by referring to Constructive Alignment. Furthermore, they can use the design plan from this research to write their Game Design Document (GDD). The success of the research was evaluated from the perspectives of four experts in the education and computer fields. In the experiments, the canvas and form helped game designers model their games according to the learning outcomes and analyze their own game elements. This method can serve as a path for future research on educational game design.

Keywords: constructive alignment, constructivist theory, educational game, outcome-based education

Procedia PDF Downloads 330
31063 Network Conditioning and Transfer Learning for Peripheral Nerve Segmentation in Ultrasound Images

Authors: Harold Mauricio Díaz-Vargas, Cristian Alfonso Jimenez-Castaño, David Augusto Cárdenas-Peña, Guillermo Alberto Ortiz-Gómez, Alvaro Angel Orozco-Gutierrez

Abstract:

Precise identification of nerves is a crucial task performed by anesthesiologists for effective Peripheral Nerve Blocking (PNB). Anesthesiologists currently use ultrasound imaging equipment to guide the PNB and detect nervous structures. However, visual identification of nerves from ultrasound images is difficult, even for trained specialists, due to artifacts and low contrast. Recent advances in deep learning make neural networks a potential tool for accurate nerve segmentation systems, addressing the above issues from raw data. The widely used U-Net yields pixel-by-pixel segmentation by encoding the input image and decoding the attained feature vector into a semantic image. This work proposes a conditioning approach and encoder pre-training to enhance the nerve segmentation of traditional U-Nets. Conditioning is achieved by one-hot encoding the kind of target nerve at the network input, while the pre-training considers five well-known deep networks for image classification. The proposed approach is tested on a collection of 619 ultrasound images, where the best C-UNet architecture yields an 81% Dice coefficient, outperforming the 74% of the best traditional U-Net. The results prove that pre-trained models with the conditional approach outperform their equivalent baselines by supporting the learning of new features and enriching the discriminant capability of the tested networks.
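As an illustration of the two ingredients above, here is a minimal sketch (not the authors' code) of the Dice coefficient used as the evaluation metric and of the one-hot nerve-type code fed to the conditioned network input:

```python
def dice_coefficient(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * intersection / total

def one_hot(nerve_index, num_nerve_types):
    """One-hot code for the target nerve type, concatenated at the network input."""
    return [1.0 if i == nerve_index else 0.0 for i in range(num_nerve_types)]
```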

Keywords: nerve segmentation, U-Net, deep learning, ultrasound imaging, peripheral nerve blocking

Procedia PDF Downloads 84
31062 The Study of Spray Drying Process for Skimmed Coconut Milk

Authors: Jaruwan Duangchuen, Siwalak Pathaveerat

Abstract:

Coconut (Cocos nucifera) belongs to the family Arecaceae. Coconut juice and meat are consumed as food and dessert in several regions of the world. Coconut juice contains little protein, with arginine as the main amino acid. Coconut meat, the endosperm of the coconut, has nutritional value; it is composed of carbohydrate, protein, and fat. The objective of this study is the utilization of by-products from the virgin coconut oil extraction process by converting the skimmed coconut milk into a powder. The skimmed coconut milk, separated from the coconut milk in the virgin coconut oil extraction process, consists of approximately 6.4% protein, 7.2% carbohydrate, 0.27% dietary fiber, 6.27% sugar, 3.6% fat, and 86.93% moisture. This skimmed coconut milk can be converted into a powder as a value-added product by spray drying. The factors affecting the yield and properties of dry skimmed coconut milk in the spraying process are the inlet and outlet air temperatures and the maltodextrin concentration. Maltodextrin contents of 15% and 20%, outlet air temperatures of 80 ºC, 85 ºC, and 90 ºC, and inlet air temperatures of 190 ºC, 200 ºC, and 210 ºC were tested in the skimmed coconut milk spray drying process. The spray dryer air flow rate was kept at 0.2698 m³/s. Moisture content (2.22-3.23%), bulk density (0.4-0.67 g/mL), solubility in water, wettability (4.04-19.25 min), color, and particle size were analyzed for the powder samples. The maximum yield (18.00%) of spray-dried coconut milk powder was obtained at an inlet temperature of 210 °C, an outlet temperature of 80 °C, and 20% maltodextrin, with a drying time of 27.27 seconds. Amino acid analysis by HPLC (UV detector) showed that the most abundant amino acids were glutamine (16.28%), arginine (10.32%), and glycine (9.59%).

Keywords: skimmed coconut milk, spray drying, virgin coconut oil process (VCO), maltodextrin

Procedia PDF Downloads 308
31061 Comparative Assessment of ISSR and RAPD Markers among Egyptian Jojoba Shrubs

Authors: Abdelsabour G. A. Khaled, Galal A.R. El-Sherbeny, Ahmed M. Hassanein, Gameel M. G. Aly

Abstract:

Classical identification methods based on agronomical characterization are not always the most accurate, due to the instability of these characteristics under the influence of different environments. Molecular markers provide excellent tools for estimating genetic diversity. In this study, the genetic variation of nine Egyptian jojoba shrubs was tested using ISSR (inter simple sequence repeat) and RAPD (random amplified polymorphic DNA) markers, together with morphological characterization. The average percentage of polymorphism (%P) was 58.17% for ISSR and 74.07% for RAPD markers. Genetic similarity among shrubs ranged from 82.9% to 97.9% based on ISSR markers and from 85.5% to 97.8% based on RAPD markers. The average PIC (polymorphism information content) values were 0.19 (ISSR) and 0.24 (RAPD). In the present study, RAPD markers were more efficient than ISSR markers, as the RAPD technique exhibited a higher average marker index (MI) (1.26) than ISSR (1.11). There was no significant correlation between the ISSR and RAPD data (0.076, P > 0.05). The dendrogram constructed from the combined RAPD and ISSR data gave a relatively different clustering pattern.
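The PIC and MI values quoted above can be reproduced from standard formulas; the sketch below assumes the common dominant-marker formulation PIC = 2f(1-f) and MI = average PIC × polymorphic bands per assay, which the abstract itself does not spell out:

```python
def pic_dominant(freq):
    # PIC for a dominant marker band with frequency f: 2f(1 - f)
    return 2 * freq * (1 - freq)

def marker_index(avg_pic, polymorphic_bands_per_assay):
    # MI = average PIC x effective multiplex ratio
    # (average number of polymorphic bands per assay)
    return avg_pic * polymorphic_bands_per_assay
```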

Keywords: correlation, molecular markers, polymorphism, marker index

Procedia PDF Downloads 460
31060 Opportunity Integrated Assessment Facilitating Critical Thinking and Science Process Skills Measurement on Acid Base Matter

Authors: Anggi Ristiyana Puspita Sari, Suyanta

Abstract:

To recognize the importance of developing critical thinking and science process skills, the instrument should give attention to the characteristics of chemistry. Therefore, constructing an accurate instrument for measuring those skills is important; however, integrated assessment instruments are limited in number. The purpose of this study is to validate an integrated assessment instrument for measuring students' critical thinking and science process skills on acid-base matter. The development of the test instrument adapted the McIntire model. The sample consisted of 392 second-grade high school students in the 2015/2016 academic year in Yogyakarta. Exploratory factor analysis (EFA) was conducted to explore construct validity, whereas content validity was substantiated by Aiken's formula. The results show a KMO test value of 0.714, which indicates sufficient items for each factor, and a significant Bartlett test (significance value less than 0.05). Furthermore, the content validity coefficient, based on eight expert judgments, is 0.85. The findings support the integrated assessment instrument as a measure of critical thinking and science process skills on acid-base matter.
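Aiken's formula, used here for content validity, can be sketched generically as follows; the 1-5 rating scale is an assumption, as the abstract does not state the scale used:

```python
def aiken_v(ratings, lo=1, hi=5):
    """Aiken's V = sum(r - lo) / (n * (hi - lo)) over n expert ratings r,
    where hi - lo equals c - 1 for a c-category rating scale."""
    s = sum(r - lo for r in ratings)
    return s / (len(ratings) * (hi - lo))
```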

Keywords: acid base matter, critical thinking skills, integrated assessment instrument, science process skills, validity

Procedia PDF Downloads 304
31059 RGB Color Based Real Time Traffic Sign Detection and Feature Extraction System

Authors: Kay Thinzar Phu, Lwin Lwin Oo

Abstract:

In intelligent transport systems and advanced driver assistance systems, the development of a real-time traffic sign detection and recognition (TSDR) system plays an important part in recent research. There are many challenges in developing a real-time TSDR system due to motion artifacts, variable lighting and weather conditions, and the situations of traffic signs. Researchers have already proposed various methods to address these challenges. The aim of the proposed research is to develop an efficient and effective TSDR system that works in real time. This system proposes an adaptive thresholding method based on RGB color for traffic sign detection and new features for traffic sign recognition. In this system, RGB color thresholding is used to detect blue and yellow traffic sign regions. The system then performs shape identification to decide whether an output candidate region is a traffic sign or not. Lastly, new features such as termination points, bifurcation points, and 90° angles are extracted from the validated image. This system uses the Myanmar Traffic Sign dataset.
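The RGB thresholding idea can be sketched as below; the fixed margin is an illustrative placeholder, not the paper's adaptive threshold values:

```python
def is_blue(r, g, b, margin=40):
    # candidate blue traffic-sign pixel: blue channel dominates red and green
    return b > r + margin and b > g + margin

def is_yellow(r, g, b, margin=40):
    # candidate yellow pixel: red and green high and similar, blue low
    return r > b + margin and g > b + margin and abs(r - g) < margin

def detect_candidates(pixels):
    """pixels: list of (r, g, b) tuples; returns indices of candidate sign pixels."""
    return [i for i, (r, g, b) in enumerate(pixels)
            if is_blue(r, g, b) or is_yellow(r, g, b)]
```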

Keywords: adaptive thresholding based on RGB color, blue color detection, feature extraction, yellow color detection

Procedia PDF Downloads 284
31058 Visual Template Detection and Compositional Automatic Regular Expression Generation for Business Invoice Extraction

Authors: Anthony Proschka, Deepak Mishra, Merlyn Ramanan, Zurab Baratashvili

Abstract:

Small and medium-sized businesses receive over 160 billion invoices every year. Since these documents exhibit many subtle differences in layout and text, extracting structured fields such as sender name, amount, and VAT rate from them automatically is an open research question. In this paper, existing work in template-based document extraction is extended, and a system is devised that is able to reliably extract all required fields for up to 70% of all documents in the data set, more than any previously reported method. Approaches are described for 1) detecting through visual features which template a given document belongs to, 2) automatically generating extraction rules for a given new template by composing regular expressions from multiple components, and 3) computing confidence scores that indicate the accuracy of the automatic extractions. The system can generate templates with as little as one training sample and only requires the ground truth field values instead of detailed annotations such as bounding boxes that are hard to obtain. The system is deployed and used inside commercial accounting software.
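Composing extraction rules from regular-expression components might look like the following sketch; the component patterns and field names are hypothetical illustrations, not the system's actual rules:

```python
import re

# Illustrative component patterns (assumptions, not the paper's generated rules)
AMOUNT = r"(?P<amount>\d{1,3}(?:[.,]\d{3})*[.,]\d{2})"   # e.g. 1.234,56
CURRENCY = r"(?:EUR|USD|€|\$)"
VAT = r"(?P<vat>\d{1,2})\s?%"

# Extraction rules built by composing the components
invoice_total = re.compile(CURRENCY + r"\s?" + AMOUNT)
vat_rate = re.compile(r"VAT\s*" + VAT)

line = "Total: EUR 1.234,56 incl. VAT 19%"
amount = invoice_total.search(line).group("amount")
rate = vat_rate.search(line).group("vat")
```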

Keywords: data mining, information retrieval, business, feature extraction, layout, business data processing, document handling, end-user trained information extraction, document archiving, scanned business documents, automated document processing, F1-measure, commercial accounting software

Procedia PDF Downloads 107
31057 Glaucoma Detection in Retinal Tomography Using the Vision Transformer

Authors: Sushish Baral, Pratibha Joshi, Yaman Maharjan

Abstract:

Glaucoma is a chronic eye condition that causes irreversible vision loss. Early detection and treatment are critical, because the disease can be asymptomatic. Multiple deep learning algorithms are used for the identification of glaucoma. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and acquire highly expressive representations, have recently become popular. Convolutional architectures, on the other hand, lack knowledge of long-range dependencies in the image due to their intrinsic inductive biases. These observations inspire this thesis to look at transformer-based solutions and investigate the viability of adopting transformer-based network designs for glaucoma detection. Developing a viable algorithm to assess the severity of glaucoma from retinal fundus images of the optic nerve head necessitates a large number of well-curated images. Initially, data is generated by augmenting ocular pictures. The ocular images are then pre-processed to make them ready for further processing. The system is trained on the pre-processed images, and it classifies input images as normal or glaucoma based on the features retrieved during training. The Vision Transformer (ViT) architecture is well suited to this task, as its self-attention mechanism supports structural modeling. Extensive experiments are run on a common dataset, and the results are thoroughly validated and visualized.
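The self-attention mechanism at the core of the ViT can be sketched as single-head scaled dot-product attention; this is a generic illustration, not the paper's implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product self-attention over lists of vectors (one head):
    out_i = sum_j softmax(q_i . k_j / sqrt(d)) * v_j"""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out
```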

Keywords: glaucoma, vision transformer, convolutional architectures, retinal fundus images, self-attention, deep learning

Procedia PDF Downloads 173
31056 Study on 3D FE Analysis on Normal and Osteoporosis Mouse Models Based on 3-Point Bending Tests

Authors: Tae-min Byun, Chang-soo Chon, Dong-hyun Seo, Han-sung Kim, Bum-mo Ahn, Hui-suk Yun, Cheolwoong Ko

Abstract:

In this study, a computational 3-point bending analysis of normal and osteoporosis mouse models was performed based on Micro-CT image information of the femurs. The finite element analysis (FEA) found average maximum forces of 1.68 N (normal group) and 1.39 N (osteoporosis group), and average stiffnesses of 4.32 N/mm (normal group) and 3.56 N/mm (osteoporosis group). Compared with the 3-point bending test results, the maximum force and the stiffness differed by a factor of about 9.4 in the normal group and about 11.2 in the osteoporosis group. The difference between the analysis and the test was highly significant, and this result points to the material properties applied in the computational analysis as needing improvement. In the next study, the material properties of the mouse femur will be refined through additional computational analyses and tests.
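The stiffness values quoted above follow from the basic 3-point bending relations, sketched here; these are generic beam formulas, not the FEA model itself:

```python
def bending_stiffness(force_n, deflection_mm):
    # stiffness k = F / delta from the linear part of the force-deflection curve
    return force_n / deflection_mm

def three_point_deflection(F, L, E, I):
    # midspan deflection of a simply supported beam under a central load:
    # delta = F * L^3 / (48 * E * I)
    return F * L**3 / (48 * E * I)
```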

Keywords: 3-point bending test, mouse, osteoporosis, FEA

Procedia PDF Downloads 332
31055 Demystifying Board Games for Teachers

Authors: Shilpa Sharma, Lakshmi Ganesh, Mantra Gurumurthy, Shweta Sharma

Abstract:

Board games provide affordances for 21st-century skills like collaboration, critical thinking, and strategy. Board games such as chess, Catan, Battleship, Scrabble, and Taboo can enhance learning in these areas. While board games are popular in informal settings with children, their use in formal K-12 education is limited. To encourage teachers to incorporate board games, it is essential to grasp their perceptions and tailor professional development programs accordingly. This paper aims to explore teacher attitudes toward board games and propose interventions that motivate teachers to integrate and create board games in the classroom. A user study was conceived, designed, and administered with teachers (n=38) to understand their experience of playing board games and of using board games in the classroom. Purposive sampling was employed, as the questionnaire was circulated to teacher groups the authors were aware of. The teachers taught in K-12 affordable private schools, and the majority had 2 to 5 years of experience. The questionnaire consisted of questions on teacher perceptions and beliefs about board game usage in the classroom. The responses showed that although ~90% of teachers had experience playing board games, this rarely translated into using board games in the classroom. Additionally, translating learning objectives into board game objectives emerged as the key factor teachers consider when using board games in the classroom. Based on the questionnaire results, a professional development workshop was co-designed with the objective of motivating teachers to design, create, and use board games in the classroom. The workshop is based on the principles of gamification, so that the teachers experience a board game in a learning context, and on principles of andragogy such as agency, pertinence, and relevance.
The workshop will begin by modifying and reusing known board games in a learning context, so that teachers do not find the task daunting. The intention is to verify the face and content validity of the workshop design, orchestration, and content with experienced teacher development professionals and education researchers. The results of this study will be published in the full paper.

Keywords: board games, professional development, teacher motivation, teacher perception

Procedia PDF Downloads 88
31054 Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision

Authors: Lianzhong Zhang, Chao Huang

Abstract:

Sea-land segmentation is a basic step in many tasks such as sea surface monitoring and ship detection. Existing sea-land segmentation algorithms have poor segmentation accuracy, and their parameter adjustments are cumbersome and fail to meet actual needs. Current sea-land segmentation also relies on traditional deep learning models based on Convolutional Neural Networks (CNNs). The transformer architecture has achieved great success in the field of natural images, but its application to radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture with strengthened edge supervision. It uses a self-attention mechanism with a gating strategy to better learn relative position bias. Meanwhile, an additional edge supervision branch is introduced; in the decoder stage, the feature information of the two branches interacts, thereby improving the edge precision of the sea-land segmentation. On the Gaofen-3 satellite image dataset, the experimental results show that the proposed method can effectively improve the accuracy of sea-land segmentation, especially at sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision, and F1 score reach 96.36%, 84.54%, 99.74%, and 98.05%, respectively, which are superior to those of mainstream segmentation models, giving the method high practical application value.
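The reported IoU and F1 metrics are standard; a minimal sketch on flat binary masks:

```python
def iou(pred, target):
    # Intersection over Union for flat binary (0/1 integer) masks
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def f1_score(pred, target):
    # F1 = 2*TP / (2*TP + FP + FN) over flat binary masks
    tp = sum(p & t for p, t in zip(pred, target))
    fp = sum(p & (1 - t) for p, t in zip(pred, target))
    fn = sum((1 - p) & t for p, t in zip(pred, target))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```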

Keywords: SAR, sea-land segmentation, deep learning, transformer

Procedia PDF Downloads 146
31053 Effect of Depth on Texture Features of Ultrasound Images

Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes

Abstract:

In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation, since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is depth. The attenuation of ultrasound with depth, the size of the region of interest, the gain, and the dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings may influence the resultant image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the gastrocnemius muscle of the left leg of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280 × 20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters including gray level, variance, skewness, kurtosis, co-occurrence matrix, run-length matrix, gradient, autoregressive (AR) model, and wavelet transform features were extracted from the images. The paired t-test was used to test the depth effect for normally distributed data, and the Wilcoxon-Mann-Whitney test was used for non-normally distributed data. The gray level, variance, and run-length matrix were significantly lowered when the depth increased, while the other texture parameters showed similar values at different depths. All texture parameters showed no significant difference between depths A and B (p > 0.05) except for gray level, variance, and run-length matrix (p < 0.05). This indicates that gray level, variance, and run-length matrix are depth dependent.
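The first-order texture features named above (gray level, variance, skewness, kurtosis) can be sketched as follows; this is a generic implementation on a flattened ROI, not the study's analysis software:

```python
import math

def texture_stats(pixels):
    """First-order texture features of a grey-level region of interest,
    given as a flat list of pixel intensities."""
    n = len(pixels)
    mean = sum(pixels) / n                                  # mean gray level
    var = sum((p - mean) ** 2 for p in pixels) / n          # variance
    sd = math.sqrt(var)
    skew = (sum((p - mean) ** 3 for p in pixels) / (n * sd**3)) if sd else 0.0
    kurt = (sum((p - mean) ** 4 for p in pixels) / (n * sd**4)) if sd else 0.0
    return {"mean": mean, "variance": var, "skewness": skew, "kurtosis": kurt}
```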

Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering

Procedia PDF Downloads 272
31052 Effect of Media Reputation on Financial Performance and Abnormal Returns of Corporate Social Responsibility Winner

Authors: Yu-Chen Wei, Dan-Leng Wang

Abstract:

This study examines whether reputation in the media press affects the financial performance and market abnormal returns around the announcement of corporate social responsibility (CSR) awards in the Taiwan stock market. Unlike prior literature, the media reputation measures, media coverage and net optimism, are constructed using content analysis. The empirical results show that corporations that won CSR awards improved their financial performance in the following year. Media coverage and net optimism for CSR winners are significantly higher than for non-CSR companies both before and after the CSR award is announced, but the difference decreases as the announcement day approaches. We propose that non-CSR companies may try to manipulate the media press to increase the coverage and positive image received by investors compared to the CSR winners. The cumulative real returns and abnormal returns of CSR winners were not significantly higher than those of the non-CSR samples; however, the returns of CSR winners were higher two months after the award announcement. The comparison of performance between CSR and non-CSR companies can inform the portfolio management of mutual funds and investors.
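The abstract does not state how net optimism is computed; one common content-analysis construction is sketched below purely as an assumption:

```python
def net_optimism(positive, negative, total):
    # assumed formulation: (positive - negative) articles as a share of all coverage
    return (positive - negative) / total if total else 0.0
```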

Keywords: corporate social responsibility, financial performance, abnormal returns, media, reputation management

Procedia PDF Downloads 408
31051 Developing Three-Dimensional Digital Image Correlation Method to Detect the Crack Variation at the Joint of Weld Steel Plate

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

The purposes of a hydraulic gate are to maintain the functions of storing and draining water. It bears long-term hydraulic pressure and earthquake forces and is very important for reservoirs and hydropower plants. High-tensile-strength steel plate is used as the construction material of hydraulic gates. Cracks and rust, induced by material defects, poor construction, seismic excitation, and underwater exposure, cause stress concentration and a high crack growth rate, affecting the safety and service life of the hydroelectric power plant. Stress distribution analysis is therefore an important and essential surveying technique for analyzing bi-material and singular point problems. The finite difference infinitely small element method has been demonstrated to be suitable for analyzing the buckling phenomena of welding seams and steel plates with cracks; in particular, this method can easily analyze the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates constitute a three-dimensional system. Therefore, three-dimensional Digital Image Correlation (DIC) has been developed and applied to analyze the strain variation of steel plates with cracks at the weld joint. The Digital Image Correlation (DIC) technique is a non-contact method for measuring the deformation of a test object. With the rapid development of digital cameras, the cost of this technique has been reduced. Moreover, the DIC method offers the advantage of wide practical applicability in both indoor and field tests, without restriction on the size of the test object. Thus, the purpose of this research is to develop and apply this technique to monitor the crack variations of welded steel hydraulic gates and their deformation under loading.
Images can be captured during real-time monitoring to analyze the strain change at each loading stage. The proposed three-dimensional digital image correlation method developed in this study is applied to analyze the post-buckling phenomenon and buckling tendency of welded steel plates with cracks. The stress intensity of different and reinforced materials in the steel plate is then analyzed three-dimensionally. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of welded steel plates under different loading stages. In particular, this DIC method can detect and identify the crack position and other flaws of the welded steel plate that traditional test methods can hardly detect. Therefore, the proposed three-dimensional DIC method can be applied to observe the mechanical phenomena of composite materials subjected to loading and operation.
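The matching criterion at the heart of DIC is typically a zero-mean normalized cross-correlation between grey-level subsets of the reference and deformed images; a minimal sketch, not the authors' implementation:

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size image
    subsets given as flattened grey-level lists; 1.0 means a perfect match."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0
```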

Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate

Procedia PDF Downloads 504
31050 Biography and Psychotherapy: Oral History Interviews with Psychotherapists

Authors: Barbara Papp

Abstract:

Purpose: This article aims to rethink the relationship between trauma and the choice of profession. By studying a homogeneous sample of respondents, it seeks answers to the following question: How did personal losses caused by historical upheavals motivate people to enter the helping professions? By becoming helping professionals, the respondents sought to handle both historical representation and self-representation. How did psychotherapists working in the second half of the 20th century (the Kádár era in Hungary) shape their course of life? How did their family members respond to their choice of career? What forces supported or hindered them? How did they become professional helpers? Methodology: In interviews with 40 psychotherapists, the oral history technique was used. In-depth interviews were conducted with a focus on motivation. First, the collected material was examined using traditional content analysis tools: searching for content patterns, applying a word frequency analysis, and identifying connections between key events and key persons. Second, a narrative psychological content analysis (NarrCat) was performed. Findings: Interconnections were established between attachment, family and historical traumas, and career choices. The mid-20th-century period was traumatic and full of losses for the families of most of the psychotherapists concerned. Those experiences may have considerably influenced their choice of career. Working as helping therapists, they had the opportunity to rework their losses. Conclusion: The results revealed core components that play a role in psychotherapists' choice of career and emphasized the importance of post-traumatic growth.

Keywords: biography, identity, narrative psychological content analysis, psychotherapists, trauma

Procedia PDF Downloads 111
31049 Identification of Effective Factors on Marketing Performance Management in Iran’s Airports and Air Navigation Companies

Authors: Morteza Hamidpour, Kambeez Shahroudi

Abstract:

The aim of this research was to identify the factors affecting the measurement and management of marketing performance in Iran's airports and air navigation companies (economics of air and airport transport). This exploratory study used a qualitative content analysis technique. The study population consisted of university professors in the field of air transportation and senior airport managers, with 15 individuals selected using the snowball technique. Based on the results, 15 main indicators were identified for measuring the marketing performance of Iran's airports and air navigation companies. These indicators include airport staff, general and operational expenses, annual passenger reception capacity, number of counter receptions and passenger dispatches, airport runway length, airline companies' loyalty to using airport space and facilities, regional market share of transit and departure flights, claims, and net profit (aviation and non-aviation). By keeping the input indicators constant, the output indicators can be improved, enhancing performance efficiency and consequently improving the economic situation in air transportation.

Keywords: air transport economics, marketing performance management, marketing performance input factors, marketing performance intermediary factors, marketing performance output factors, content analysis

Procedia PDF Downloads 46
31048 Regression Model Evaluation on Depth Camera Data for Gaze Estimation

Authors: James Purnama, Riri Fitri Sari

Abstract:

We investigate the machine learning algorithm selection problem for depth-image-based eye gaze estimation, with respect to its essential difficulty: reducing the number of required training samples and the duration of training. Statistics-based prediction accuracy measures are increasingly used to assess and evaluate prediction or estimation in gaze estimation. This article uses Root Mean Squared Error (RMSE) and R-squared statistical analysis to assess machine learning methods on depth camera data for gaze estimation. Four machine learning methods were evaluated: Random Forest regression, regression trees, Support Vector Machines (SVM), and linear regression. The experimental results show that Random Forest regression has the lowest RMSE and the highest R-squared, making it the best among the evaluated methods.
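The two evaluation statistics can be computed directly from their definitions. A minimal sketch with hypothetical gaze-angle predictions from two regressors (the values below are invented for illustration, not the study's data):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error: sqrt of the mean squared residual."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    """R-squared: 1 - (residual sum of squares / total sum of squares)."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical gaze angles (degrees) and predictions from two regressors
actual = [10.0, 12.0, 14.0, 16.0, 18.0]
forest = [10.2, 11.8, 14.1, 15.7, 18.3]   # tighter fit
linear = [11.0, 11.0, 15.0, 15.0, 19.0]   # rougher fit

# The better model has lower RMSE and higher R-squared
assert rmse(actual, forest) < rmse(actual, linear)
assert r_squared(actual, forest) > r_squared(actual, linear)
```

The paper's comparison applies exactly this pair of metrics, but to models trained on the depth-camera dataset.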

Keywords: gaze estimation, gaze tracking, eye tracking, Kinect, regression model, orange python

Procedia PDF Downloads 521
31047 The Potential of Sentiment Analysis to Categorize Social Media Comments Using German Libraries

Authors: Felix Boehnisch, Alexander Lutz

Abstract:

Based on the number of users and the amount of content posted daily, Facebook is considered the largest social network in the world. This content includes images and text posts from companies as well as private persons, which are in turn commented on by other users. However, it can be difficult for companies to keep track of all the posts and the reactions to them, especially when there are several posts a day, each with hundreds to thousands of comments. To facilitate this, the following paper deals with possible applications of sentiment analysis to social media comments in order to support social media marketing work. In a first step, post comments were divided into positive and negative by a subjective rating; then the same comments were scored for polarity by the two German Python libraries TextBlobDE and SentiWS and likewise grouped into positive, negative, or neutral. As a control, the subjective classifications were compared with the machine-generated ones via a confusion matrix, and relevant quality criteria were determined. The accuracy of both libraries was modest, at 60% to 66%. Moreover, many words and sentences were not evaluated at all, so there seems to be room for optimization toward more accurate results. In future studies, the use of these specific German libraries could be optimized to gain better insights, either by applying them to more strictly cleaned data or by assigning a sentiment value to emojis, which were removed from the comments in advance because they are not contained in the libraries.
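The comparison step, thresholding library polarity scores into classes and scoring them against the subjective ratings, can be sketched as follows. The polarity values, the ±0.1 neutral band, and the labels are illustrative assumptions, not the paper's actual data or thresholds:

```python
from collections import Counter

def label_from_polarity(score, neutral_band=0.1):
    """Map a polarity score in [-1, 1] to positive/neutral/negative."""
    if score > neutral_band:
        return "positive"
    if score < -neutral_band:
        return "negative"
    return "neutral"

def accuracy(subjective, machine):
    """Fraction of comments where the machine label matches the human one."""
    hits = sum(1 for s, m in zip(subjective, machine) if s == m)
    return hits / len(subjective)

# Hypothetical polarity scores, as TextBlobDE or SentiWS might return
scores     = [0.8, -0.5, 0.0, 0.3, -0.7, 0.05]
subjective = ["positive", "negative", "neutral",
              "positive", "positive", "neutral"]
machine = [label_from_polarity(s) for s in scores]

# Confusion matrix as (human label, machine label) -> count
confusion = Counter(zip(subjective, machine))
print(accuracy(subjective, machine))
print(confusion)
```

The paper's quality criteria (accuracy and related measures) are derived from exactly this kind of confusion matrix, just over far more comments.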

Keywords: Facebook, German libraries, polarity, sentiment analysis, social media comments

Procedia PDF Downloads 160
31046 Scale-Up Process for Phyllanthus niruri Enriched Extract by Supercritical Fluid Extraction

Authors: Norsyamimi Hassim, Masturah Markom

Abstract:

Supercritical fluid extraction (SFE) is known as a sustainable and safe extraction technique for plants due to its minimal use of organic solvent. In this study, a scale-up process for the selected herbal plant (Phyllanthus niruri) was investigated using supercritical carbon dioxide (SC-CO2) with a food-grade (ethanol-water) cosolvent. The excess ethanol content in the final dry extracts was quantified to verify the safety of the enriched extracts. The extraction yields obtained by the scale-up SFE unit differed from the predicted yields by only 2.92%. In terms of component content, the scale-up extracts showed quality comparable to the laboratory-scale experiments. The final dry extract contained 1.56% g/g excess ethanol. A fish embryo toxicity test (FETT) on zebrafish embryos showed no toxic effects of the extract, with an LD50 value of 505.71 µg/mL. Thus, SFE with a food-grade cosolvent has been shown to be a safe extraction technique for the production of bioactive compounds from P. niruri.

Keywords: scale-up, supercritical fluid extraction, enriched extract, toxicity, ethanol content

Procedia PDF Downloads 111
31045 Teaching Foreign Languages Across the Curriculum (FLAC): Hybrid French/English Courses and their Dual Impact on Interdisciplinarity and L2 Competency

Authors: M. Caporale

Abstract:

French curricula across the US have recently suffered from low enrollment and retention difficulties, resulting in fewer students minoring and majoring in French and enrolling in upper-level classes. Successful undergraduate programs offer French courses with a strong cultural and interdisciplinary or multidisciplinary component. The world language curriculum in liberal arts colleges in America needs to take into account the cultural aspects of the language and encourage students to think critically about the country or countries they are studying. Limiting critical inquiry to language or literature narrowly defined provides an incomplete and stagnant picture of France and the Francophone world in today's global community. This essay discusses the creation and implementation of a hybrid interdisciplinary L1/L2 course titled "Topics in Francophone Cinema" (subtitled "Francophone Women on Screen and Behind the Camera"). Content-based interdisciplinary courses undoubtedly raise the profile of French and Francophone cultural studies by introducing students from other disciplines to fundamental questions relating to French and Francophone cultures (in this case, women's rights in the Francophone world). At the same time, this study determines that through targeted reading and writing assignments, sustained aural exposure to the L2 through film, and student participation in a one-credit supplementary weekly practicum (a creative film writing workshop), significant advances in L2 competence are achieved, with students' oral and written production evolving from Advanced Low to Advanced Mid, as defined by the ACTFL guidelines. The use of differentiated assessment methods for L1/L2 and student learning outcomes for both groups will also be addressed.

Keywords: interdisciplinary, Francophone cultural studies, language competency, content-based

Procedia PDF Downloads 482
31044 The Use of Seashell by-Products in Pervious Concrete Pavers

Authors: Dang Hanh Nguyen, Nassim Sebaibi, Mohamed Boutouil, Lydia Leleyter, Fabienne Baraud

Abstract:

Pervious concrete is a green alternative to conventional pavements, with minimal fine aggregate and a high void content. It allows water to infiltrate through the pavement, thereby reducing runoff and the need for stormwater management systems. Seashell by-products (SBP) are produced in large quantities in France and are considered waste. This work investigated the use of SBP in pervious concrete to produce an even more environmentally friendly product: pervious concrete pavers. The research methodology involved substituting 20%, 40% and 60% of the coarse aggregate in the pervious concrete mix design with SBP. Testing showed that pervious concrete containing less than 40% SBP had strength, permeability and void content comparable to pervious concrete containing only natural aggregate. Samples containing 40% SBP or more showed a significant loss in strength and an increase in permeability and void content relative to the control mix. On the basis of these results, it was found that natural aggregate can be substituted by SBP without affecting the delicate balance of a pervious concrete mix. Additionally, the recommended optimum replacement percentage for SBP in pervious concrete is 40% direct replacement of the natural coarse aggregate, which maintains the structural performance and drainage capabilities of the pervious concrete.

Keywords: seashell by-products, pervious concrete pavers, permeability, mechanical strength

Procedia PDF Downloads 462
31043 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has changed how machine learning is thought of by employing quantum features to address optimization issues. Since current quantum hardware is extremely noisy, machine learning algorithms cannot be run on it without risking inaccurate results. To discover the advantages of quantum over classical approaches, this research concentrates on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets such as MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 have been compared for binary classification, and the comparison showed that models performed more accurately on MNIST than on the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance the real-time applicability of QML. Deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not previously been developed for colored images to determine how much better they are than their classical counterparts; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into greyscale 28 × 28-pixel images, and 10,000 test and 50,000 training images were used.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may increase the accuracy further. This study demonstrates that quantum machine and deep learning models can be superior to classical machine learning approaches in processing speed and accuracy when used to perform classification on colored classes.
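The encoding step described above, rotating qubits by angles derived from pixel intensities and then measuring, can be sketched with a plain single-qubit simulation. This is an illustrative stand-in, not the paper's PennyLane circuit; the [0, 1] → [0, π] angle mapping and the toy 2 × 2 patch are assumptions made for the sketch:

```python
import math

def ry_apply(theta, state):
    """Apply the single-qubit RY(theta) rotation gate to a state [a, b]."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    a, b = state
    return [c * a - s * b, s * a + c * b]

def encode_pixel(intensity):
    """Angle-encode a normalized pixel (0..1) as an RY rotation of |0>,
    then return the Pauli-Z expectation of the resulting state."""
    theta = intensity * math.pi          # map [0, 1] -> [0, pi]
    a, b = ry_apply(theta, [1.0, 0.0])   # rotate the |0> state
    return a * a - b * b                 # <Z> = |a|^2 - |b|^2 = cos(theta)

# A toy 2x2 greyscale patch standing in for a 28x28 image
patch = [[0.0, 0.5], [0.25, 1.0]]
features = [[round(encode_pixel(p), 3) for p in row] for row in patch]
print(features)
```

A black pixel (0.0) maps to an expectation of +1, a white pixel (1.0) to -1, with intermediate intensities in between; these expectation values are the features a QCNN layer would then pool and classify.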

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 102
31042 Community Radio Broadcasting in Phutthamonthon District, Nakhon Pathom, Thailand

Authors: Anchana Sooksomchitra

Abstract:

This study aims to explore and compare the current condition of community radio stations in Phutthamonthon district, Nakhon Pathom province, Thailand, as well as the challenges they are facing. Qualitative research tools including in-depth interviews, documentary analysis, focus group interviews, and observation are used to examine the content, programming, and management structure of three community radio stations currently in operation within the district. Research findings indicate that the management and operational approaches adopted by the two non-profit stations included in the study, Salaya Pattana and Voice of Dhamma, are more structured and effective than that of the for-profit Tune Radio. Salaya Pattana, backed by the Faculty of Engineering, Mahidol University, and the charity-funded Voice of Dhamma are comparatively free from political and commercial influence, and able to provide more relevant and consistent community-oriented content to meet the real demand of the audience. Tune Radio, on the other hand, has to rely solely on financial support from political factions and business groups, which heavily influence its content.

Keywords: radio broadcasting, programming, management, community radio, Thailand

Procedia PDF Downloads 329
31041 Evaluation of the Curricular Content Domain Related to Topics of Human Sexuality in Teachers of Public Elementary Schools

Authors: Ahmed Ali Asadi, Julio R. Martinez-Alvarado, Claudia V. Camacho-Guevara, J. Jesus Cabrales-Ruvalcaba, Julieta Y. Islas-Limon, Bertha M. Viñas-Velazquez

Abstract:

The transformation of education in Mexico incorporates human sexuality subjects into the study plans for the elementary education level, while leaving aside the training of teachers to teach such topics. The objective of this study was to evaluate public elementary school teachers' command of the curricular content related to human sexuality in Mexico. A cross-sectional, descriptive-prospective study with a quantitative focus was conducted. The population consisted of 109 fifth- and sixth-grade teachers from a school zone of the State Education System. The results showed that fifth-grade teachers reached a low achievement level and sixth-grade teachers a medium achievement level, while teachers who teach both grades obtained a high achievement level in their command of curricular subjects related to sexuality. The relationship of different variables to the participants' achievement level is also reported.

Keywords: curricular content, evaluation, sexual education, teacher

Procedia PDF Downloads 276
31040 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets

Authors: Basiru Amuneni

Abstract:

Astronomy is one domain with a rapid rise in data. Traditional tools for data management have been employed in the quest for knowledge discovery; however, these tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is scientific visualisation. The aim of this work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualise multiwavelength datasets. Game engines are primarily used for developing video games, but their advanced graphics can be exploited for scientific visualisation, which graphically illustrates scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by telescopes several times at different electromagnetic wavelengths to build a more comprehensive picture of its physical characteristics. Visualising this in an immersive environment is more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity game engine for the graphics and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails 'using design as a research method or technique'. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation and VR display. The FITS data format cannot be read by the Unity game engine directly, so a DLL (CSharpFITS) providing native support for reading and writing FITS files was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity 3D game engine.
The cleaned FITS images are input to the galaxy modeller pipeline phase, in which a pre-processing script extracts pixels, computes galaxy world positions, and colour-maps the FITS image pixels. The user can visualise galaxy images in different light bands, control the blend of an image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools that realise complex workflows for public outreach, and possibly scientific work, with increased scalability, near-real-time interactivity and ease of access. The application is presented in an immersive environment and can use any commercially available headset built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in and out, and change the colour gradients of the galaxy. The findings and design lessons learnt in implementing different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions.
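The normalize-and-colour-map part of the pre-processing might look like the following sketch. The actual pipeline runs in C# against the CSharpFITS DLL inside Unity; this Python stand-in only illustrates the idea, with a hypothetical black-red-yellow colour ramp and made-up intensity values:

```python
def normalize(pixels):
    """Scale raw FITS intensities to the [0, 1] range."""
    lo, hi = min(pixels), max(pixels)
    span = (hi - lo) or 1.0          # guard against a flat image
    return [(p - lo) / span for p in pixels]

def heat_colour(v):
    """Map a normalized intensity to an RGB triple on a black-red-yellow ramp."""
    r = min(1.0, 2 * v)              # red saturates at v = 0.5
    g = max(0.0, 2 * v - 1.0)        # green ramps in above v = 0.5
    return (round(r, 2), round(g, 2), 0.0)

# Hypothetical raw intensities from one row of a cleaned FITS image
raw = [120.0, 500.0, 310.0, 120.0]
colours = [heat_colour(v) for v in normalize(raw)]
print(colours)
```

In the Unity pipeline, the resulting RGB values would be assigned to vertices or particles at each pixel's computed galaxy world position.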

Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality

Procedia PDF Downloads 67
31039 Teachers' Technological Pedagogical and Content Knowledge and Technology Integration in Teaching and Learning in a Small Island Developing State: A Concept Paper

Authors: Aminath Waseela, Vinesh Chandra, Shaun Nykvist

Abstract:

The success of technology integration initiatives hinges on the knowledge and skills of teachers to effectively integrate technology in classroom teaching. Consequently, gaining an understanding of teachers' technology knowledge and its integration can provide useful insights on strategies that can be adopted to enhance teaching and learning, especially in developing country contexts where research is scant. This paper extends existing knowledge on teachers' use of technology by developing a conceptual framework that recognises how three key types of knowledge; content, pedagogy, technology, and their integration are at the crux of teachers' technology use while at the same time is amenable to empirical studies. Although the aforementioned knowledge is important for effective use of technology that can result in enhanced student engagement, literature on how this knowledge leads to effective technology use and enhanced student engagement is limited. Thus, this theoretical paper proposes a framework to explore teachers' knowledge through the lens of the Technological Pedagogical and Content Knowledge (TPACK); the integration of technology in classroom teaching through the Substitution Augmentation Modification and Redefinition (SAMR) model and how this affects students' learning through the Bloom's Digital Taxonomy (BDT) lens. Studies using this framework could inform the design of professional development to support teachers to develop skills for effective use of available technology that can enhance student learning engagement.

Keywords: information and communication technology, ICT, in-service training, small island developing states, SIDS, student engagement, technology integration, technology professional development training, technological pedagogical and content knowledge, TPACK

Procedia PDF Downloads 125
31038 Explaining Irregularity in Music by Entropy and Information Content

Authors: Lorena Mihelac, Janez Povh

Abstract:

In 2017, we conducted a study using 160 musical excerpts from different musical styles to analyse the impact of the entropy of harmony on the acceptability of music. In measuring the entropy of harmony, we considered unigrams (individual chords in the harmonic progression) and bigrams (pairs of adjacent chords). In that study, 53 of the 160 musical excerpts were evaluated by participants as very complex, although the entropy of the harmonic progression (unigrams and bigrams) was calculated as low. We explained this by particularities of the chord progression, which affect the listener's feeling of complexity and acceptability. We evaluated the same data twice more: with new participants in 2018, and with the same participants for a third time in 2019. These three evaluations showed that the same 53 musical excerpts found difficult and complex in the 2017 study again produced a strong feeling of complexity. We proposed that the content of these musical excerpts, defined as "irregular," does not meet the listener's expectancy or basic perceptual principles, creating a stronger feeling of difficulty and complexity. As the "irregularities" in these 53 musical excerpts seem to be perceived by participants without their being aware of it, affecting pleasantness and the feeling of complexity, they were defined as "subliminal irregularities" and the 53 musical excerpts as "irregular." In our most recent study (2019) of the same data, we proposed a new measure of the complexity of harmony, "regularity," based on the irregularities in the harmonic progression and other plausible particularities in the musical structure found in previous studies. In that study, we also proposed a list of 10 particularities assumed to affect the participants' perception of complexity in harmony.
These ten particularities are tested in this paper by extending the analysis of our 53 irregular musical excerpts from harmony to melody. In examining melody, we used the computational model Information Dynamics of Music (IDyOM) and two information-theoretic measures: entropy, the uncertainty of the prediction before the next event is heard, and information content, the unexpectedness of an event in a sequence. To describe the features of melody in these musical examples, we used four viewpoints: pitch, interval, duration, and scale degree. The results show that the texture of the melody (e.g., multiple voices, homorhythmic structure) and the structure of the melody (e.g., large interval leaps, syncopated rhythm, implied harmony in compound melodies) in these musical excerpts affect the participants' perception of complexity. High information content values were found in compound melodies, in which implied harmonies seem to have suggested additional harmonies, affecting the participants' perception of the chord progression by creating a sense of an ambiguous musical structure.
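The two measures can be computed directly from event frequencies. A minimal sketch over a hypothetical Roman-numeral chord progression (IDyOM itself derives event probabilities from trained predictive models, not from the raw frequencies used here):

```python
import math
from collections import Counter

def entropy(events):
    """Shannon entropy (bits) of the empirical event distribution."""
    counts = Counter(events)
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_content(events):
    """Per-event IC: -log2 p(event), the unexpectedness of each event
    under the unigram frequencies of the sequence."""
    counts = Counter(events)
    n = len(events)
    return [-math.log2(counts[e] / n) for e in events]

# Hypothetical chord progression; the final bII is the rare, "irregular" event
chords = ["I", "IV", "V", "I", "IV", "V", "I", "bII"]
bigrams = list(zip(chords, chords[1:]))   # pairs of adjacent chords

print(round(entropy(chords), 3))          # unigram entropy
print(round(entropy(bigrams), 3))         # bigram entropy
print([round(ic, 2) for ic in information_content(chords)])
```

The rare bII chord receives the highest information content, mirroring how an unexpected event in an otherwise regular progression drives the perceived complexity.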

Keywords: entropy and information content, harmony, subliminal (ir)regularity, IDyOM

Procedia PDF Downloads 113
31037 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of big data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are the few slow or delay-prone processors that can bottleneck an entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in the tradeoff between performance and latency by linearly precoding the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as enough processors, depending on the minimum distance of the code, have completed their operations. For practically large datasets, matrix-matrix multiplication faces computational and memory-related difficulties, so such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This multiplication is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest should be kept private from the workers, and obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We consider secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected privately from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
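The linear-precoding idea, coding fractions of X so that any sufficiently large subset of workers determines the product, can be illustrated with a minimal degree-1 polynomial code. This toy sketch (three workers, recovery threshold two, no privacy layer, tiny integer matrices) is an assumption-laden illustration of the coding principle, not the PSGPD scheme itself:

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_add(A, B, beta=1):
    """Elementwise A + beta * B."""
    return [[x + beta * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# Master: split X into two row blocks and encode with a degree-1 polynomial,
# so worker a receives the evaluation X1 + a * X2
X = [[1, 2], [3, 4], [5, 6], [7, 8]]
Y = [[1, 0], [0, 1]]                     # identity, so W should equal X
X1, X2 = X[:2], X[2:]
points = [0, 1, 2]                       # one evaluation point per worker
tasks = [mat_add(X1, X2, a) for a in points]

# Workers: each multiplies its coded block by Y; worker 1 straggles
results = {a: matmul(tasks[a], Y) for a in (0, 2)}

# Master: any 2 of the 3 results pin down the degree-1 polynomial in a,
# whose coefficients are X1*Y and X2*Y (Lagrange interpolation)
a0, a1 = sorted(results)
R0, R1 = results[a0], results[a1]
X2Y = [[(x - y) / (a1 - a0) for x, y in zip(r1, r0)]
       for r1, r0 in zip(R1, R0)]
X1Y = mat_add(R0, X2Y, -a0)
W = X1Y + X2Y                            # stack the recovered row blocks
print(W)
```

Here the straggling worker is simply ignored: the recovery threshold equals the number of polynomial coefficients (two), matching the minimum-distance argument in the text. Privacy-preserving variants add randomness to the encoding and a PIR layer on top of this same interpolation step.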

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 101