Search results for: sequential quadratic programming
579 Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification
Authors: Vishwanath Pethri Kamath, Jayantha Gowda Sarapanahalli, Vishal Mishra, Siddhesh Balwant Bandgar
Abstract:
Recognition of emotional information is essential in any form of communication. The growth of HCI (Human-Computer Interaction) in recent times underlines the importance of understanding the emotions expressed, which becomes crucial for improving the system or the interaction itself. This research work uses textual data for emotion recognition. Text, being the least expressive of the multimodal resources, poses challenges such as limited contextual information and the sequential nature of language construction. This work proposes a neural architecture to resolve no fewer than 8 emotions from textual data drawn from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted in this research are Anger, Disgust, Fear, Guilt, Joy, Sadness, Shame, and Surprise. Textual data from multiple datasets, such as ISEAR, Go Emotions, and Affect, were used to create the emotions dataset. Overlapping or conflicting data samples were handled with careful preprocessing. Our results show a significant improvement with the proposed modeling architecture, with as much as a 10-point improvement in recognizing some emotions.
Keywords: text emotion recognition, bidirectional LSTM, multi-head attention, multi-level classification, Google word2vec word embeddings
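The multi-head attention mechanism named in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration only: the random matrices stand in for learned projection weights, and the sequence length and model dimension are arbitrary choices, not values from the paper.

```python
import numpy as np

def multi_head_attention(x, num_heads, seed=0):
    """Scaled dot-product self-attention with several heads (illustrative).

    x: (seq_len, d_model) hidden states, e.g. BiLSTM outputs.
    Returns an array of the same shape.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    rng = np.random.default_rng(seed)
    out_heads = []
    for _ in range(num_heads):
        # Random projections stand in for learned weight matrices.
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(d_head)               # (seq_len, seq_len)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
        out_heads.append(weights @ v)
    return np.concatenate(out_heads, axis=1)             # (seq_len, d_model)

context = multi_head_attention(np.ones((5, 8)) * 0.1, num_heads=2)
```

In the architecture described, a layer like this would sit on top of the BiLSTM outputs before the one-vs-all classification heads.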
Procedia PDF Downloads 174
578 Developing a Culturally Acceptable End of Life Survey (the VOICES-ESRD/Thai Questionnaire) for Evaluating Health Services Provision of Older Persons with End-Stage Renal Disease (ESRD) in Thailand
Authors: W. Pungchompoo, A. Richardson, L. Brindle
Abstract:
Background: Developing a culturally acceptable end-of-life survey (the VOICES-ESRD/Thai questionnaire) is essential for evaluating health services provision for older persons with ESRD in Thailand. The focus of the questionnaire was on symptoms, symptom control, and the health care needs of older people with ESRD who are managed without dialysis. Objective: The objective of this study was to develop and adapt VOICES to make it suitable for use in a population survey in Thailand. Methods: A mixed-methods exploratory sequential design was used, focused on modifying the instrument. Data collection: A cognitive interviewing technique was implemented, using two cycles of data collection with a sample of 10 bereaved carers and a prototype of the Thai VOICES questionnaire. A qualitative study was used to modify the questionnaire. Data analysis: The data were analysed using content analysis. Results: Revisions were made to the prototype questionnaire, and the results were used to adapt the VOICES questionnaire for use in a population-based survey with older ESRD patients in Thailand. Conclusions: A culturally specific questionnaire was generated during this second phase, and issues with questionnaire design were rectified.
Keywords: VOICES-ESRD/Thai questionnaire, cognitive interviewing, end of life survey, health services provision, older persons with ESRD
Procedia PDF Downloads 286
577 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as the autoregressive integrated moving average (ARIMA) and exponential smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels.
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates, in parallel, 2D spectrogram and derivative heatmap techniques. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the use of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks.
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
Keywords: derivative patterns, spectrogram, time series forecasting, Times2D, 2D representation
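The two 2D views the abstract describes, a frequency-domain spectrogram and a time-domain derivative map, can be sketched as below. The window length, hop size, and the choice of first and second differences are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def spectrogram(x, win=16, hop=8):
    """Magnitude STFT of a 1-D series: rows = windows, cols = frequency bins."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def derivative_heatmap(x):
    """Stack first and second differences to highlight sharp turns."""
    d1 = np.diff(x, n=1)
    d2 = np.diff(x, n=2)
    return np.vstack([d1[:len(d2)], d2])

t = np.arange(128)
series = np.sin(2 * np.pi * t / 16) + 0.1 * t
S = spectrogram(series)          # frequency-domain view (periodicity)
H = derivative_heatmap(series)   # time-domain view (fluctuations, turning points)
```

Both arrays are 2D, so image-style (computer vision) models can consume them directly, which is the core idea the abstract exploits.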
Procedia PDF Downloads 44
576 Direct Cost of Anesthesia in Traumatic Patients with Massive Bleeding: A Prospective Micro-Costing Study
Authors: Asamaporn Puetpaiboon, Sunisa Chatmongkolchart, Nalinee Kovitwanawong, Osaree Akaraborworn
Abstract:
Traumatic patients with massive bleeding require intensive resuscitation. The actual cost of anesthesia per case has never been clarified, so our study aimed to quantify the direct cost and cost-to-charge ratio of anesthetic care in traumatic patients with intraoperative massive bleeding. This was a prospective, observational cost analysis study conducted in Prince of Songkla University hospital, Thailand, recruiting traumatic patients of any injury mechanism. Massive bleeding was defined as an estimated blood loss of at least one blood volume in 24 hours, or half a blood volume in 3 hours. The cost components were identified by the micro-costing method and valued by the bottom-up approach. The direct cost was divided into 4 categories: the labor cost, the capital cost, the material cost, and the cost of drugs. From September 2017 to August 2018, 10 patients with multiple injuries were included. Seven patients had motorcycle accidents, two fell from a height, and one was in a minibus accident. Two patients died on the operating table, and another two died within 48 hours. The median Sequential Organ Failure Assessment (SOFA) score was 8. The median intraoperative blood loss was 3,500 ml. The median direct cost per case was 250 United States Dollars (2017 exchange rate), and the cost-to-charge ratio was 0.53. In summary, the direct cost was nearly half of the hospital charge for these traumatic patients with massive bleeding. However, our study did not analyze the indirect cost.
Keywords: cost, cost-to-charge ratio, micro-costing, trauma
Procedia PDF Downloads 148
575 Dynamic Thermal Modelling of a PEMFC-Type Fuel Cell
Authors: Marco Avila Lopez, Hasnae Ait-Douchi, Silvia De Los Santos, Badr Eddine Lebrouhi, Pamela Ramírez Vidal
Abstract:
In the context of the energy transition, fuel cell technology has emerged as a solution for harnessing hydrogen energy and mitigating greenhouse gas emissions. An in-depth study was conducted on a PEMFC-type fuel cell, beginning with an analysis of its operational principles and constituent components. The fuel cell was then modelled in the Python programming language, covering both steady-state and transient regimes. For the steady-state regime, the physical and electrochemical phenomena occurring within the fuel cell were modelled under the assumption of uniform temperature throughout all cell compartments. Parametric identification was carried out, resulting in a mean error of only 1.62% when the model results were compared to experimental data documented in the literature. The dynamic model that was developed enabled scrutiny of the fuel cell's response in terms of temperature and voltage under varying current conditions.
Keywords: fuel cell, modelling, dynamic, thermal model, PEMFC
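A steady-state PEMFC model of the kind described typically computes the cell voltage as the Nernst potential minus activation, ohmic, and concentration losses. The sketch below uses that standard decomposition with illustrative parameter values; the paper's actual model structure and identified parameters are not reproduced here.

```python
import math

def cell_voltage(i, e_nernst=1.2, i0=1e-4, a=0.06, r=0.2, m=3e-5, n=8.0):
    """Steady-state PEMFC polarization curve (illustrative parameters).

    i : current density (A/cm^2)
    Activation loss follows a Tafel term, ohmic loss is linear in i,
    and the concentration loss grows exponentially at high current.
    """
    act = a * math.log(i / i0) if i > i0 else 0.0   # activation overpotential
    ohm = r * i                                      # ohmic loss
    conc = m * math.exp(n * i)                       # concentration loss
    return e_nernst - act - ohm - conc

# Polarization curve: voltage falls as current density rises.
curve = [(round(i / 10, 1), cell_voltage(i / 10)) for i in range(1, 11)]
```

Parametric identification of the kind the abstract mentions would fit constants such as a, r, m, and n to measured polarization data.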
Procedia PDF Downloads 82
574 Robust Design of a Ball Joint Considering Uncertainties
Authors: Bong-Su Sin, Jong-Kyu Kim, Se-Il Song, Kwon-Hee Lee
Abstract:
An automobile ball joint is a pivoting element used to allow rotational motion between the parts of the steering and suspension system. It transmits steering movement smoothly and reduces impact from the road surface. A ball joint is subjected to various repeated loadings that may cause cracks and abrasion. This damage leads to safety problems for the car as well as reduced ride comfort, and raises questions about the ball joint manufacturing procedure and the overall durability of the suspension system. Accordingly, it is necessary to ensure the high durability and reliability of a ball joint. The structural responses of stiffness and pull-out strength were calculated to check whether the design satisfies the related requirements. The analysis was performed sequentially, following the caulking process, and the deformation and stress results obtained from each analysis were saved. Sequential analysis has a strong advantage in that the deformed shape and residual stress can be taken into account. The pull-out strength is the force required to pull the ball stud out of the ball joint assembly; low pull-out strength can deteriorate structural stability and safety performance. In this study, two design variables and two noise factors were set up. The two design variables were the diameter of the stud and the angle of the socket, and the two noise factors were defined as the uncertainties in the Young's modulus and yield stress of the seat. The DOE comprises 81 cases built from these conditions. Robust design of the ball joint was performed using the DOE, with the pull-out strength response generated from the uncertainties in the design variables and design parameters. The purpose of robust design is to find the design with the target response and the smallest variation.
Keywords: ball joint, pull-out strength, robust design, design of experiments
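The 81-case DOE is consistent with a full factorial over the four factors (two design variables, two noise factors) at three levels each, since 3^4 = 81. A sketch of such a design follows; the level values are made-up placeholders, as the abstract does not state them.

```python
from itertools import product

# Three illustrative levels per factor (units and values are assumptions,
# not taken from the paper).
stud_diameter  = [14.0, 15.0, 16.0]      # mm, design variable
socket_angle   = [28.0, 30.0, 32.0]      # degrees, design variable
youngs_modulus = [1.9e5, 2.0e5, 2.1e5]   # MPa, noise factor
yield_stress   = [240.0, 250.0, 260.0]   # MPa, noise factor

# Full factorial: every combination of the four factors' levels.
doe = list(product(stud_diameter, socket_angle, youngs_modulus, yield_stress))

def robustness(responses):
    """Mean and spread of a response over the noise levels.

    Robust design seeks the design-variable setting whose mean hits the
    target with the smallest spread."""
    mean = sum(responses) / len(responses)
    var = sum((r - mean) ** 2 for r in responses) / len(responses)
    return mean, var ** 0.5
```

Each of the 81 rows would be evaluated (here, by the sequential caulking-then-pull-out simulation) and the pull-out strength summarized per design point across the noise levels.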
Procedia PDF Downloads 422
573 Spatial Rank-Based High-Dimensional Monitoring through Random Projection
Authors: Chen Zhang, Nan Chen
Abstract:
High-dimensional process monitoring is increasingly important in many application domains, where the process distribution is usually unknown and far more complicated than the normal distribution, and the between-stream correlation cannot be neglected. However, since the process dimension is generally much larger than the reference sample size, most traditional nonparametric multivariate control charts fail in high-dimensional cases due to the curse of dimensionality. Furthermore, when the process goes out of control, the influenced variables are quite sparse compared with the whole dimension, which increases the detection difficulty. To address these issues, this paper proposes a new nonparametric monitoring scheme for high-dimensional processes. The scheme first projects the high-dimensional process into several subprocesses using random projections for dimension reduction. Then, for every subprocess, whose dimension is much smaller than the reference sample size, a local nonparametric control chart is constructed based on the spatial rank test to detect changes in that subprocess. Finally, the results of all the local charts are fused together for decision-making. After an out-of-control (OC) alarm is triggered, a diagnostic framework based on the square-root LASSO is proposed. Numerical studies demonstrate that the chart has satisfactory detection power for sparse OC changes and robust performance for non-normally distributed data, and that the diagnostic framework is effective in identifying the truly changed variables. Finally, a real-data example is presented to demonstrate the application of the proposed method.
Keywords: random projection, high-dimensional process control, spatial rank, sequential change detection
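The projection-then-rank idea can be sketched as below. The spatial sign (the unit direction of an observation relative to a reference center) is the building block of spatial rank statistics: under control the signs point in random directions and their running mean stays near zero, while a persistent shift pushes the mean toward a common direction. The charting statistic here is a simplification of the paper's scheme, shown only to convey the mechanics.

```python
import numpy as np

def random_projections(p, k, d, seed=0):
    """k random Gaussian matrices mapping the p-dim process to d-dim subprocesses."""
    rng = np.random.default_rng(seed)
    return [rng.standard_normal((d, p)) / np.sqrt(d) for _ in range(k)]

def spatial_sign(x, center):
    """Unit direction of x relative to a reference center."""
    v = np.asarray(x, dtype=float) - center
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else np.zeros_like(v)

def sign_statistics(stream, proj, center):
    """Running norm of the mean spatial sign in one projected subprocess."""
    acc = np.zeros(proj.shape[0])
    stats = []
    for t, x in enumerate(stream, start=1):
        acc += spatial_sign(proj @ np.asarray(x, dtype=float), center)
        stats.append(np.linalg.norm(acc) / t)   # near 0 in control, near 1 after a shift
    return stats

projs = random_projections(p=100, k=5, d=10)
```

In the full scheme, one such chart runs per subprocess and the local results are fused for the final decision.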
Procedia PDF Downloads 299
572 Evaluation of Liquid Fermentation Strategies to Obtain a Biofertilizer Based on Rhizobium sp.
Authors: Andres Diaz Garcia, Ana Maria Ceballos Rojas, Duvan Albeiro Millan Montano
Abstract:
This paper describes the initial technological development stages, in the area of liquid fermentation, required to reach the quantities of biomass of the biofertilizer microorganism Rhizobium sp. strain B02 needed for the application of the downstream unit stages at laboratory scale. In the first stage, the fermentation process was adjusted and standardized in conventional batch mode. In the second stage, various fed-batch and continuous fermentation strategies were evaluated in a 10 L bioreactor in order to optimize the yields in concentration (Colony Forming Units/ml•h) and biomass (g/l•h) and thereby make the downstream unit operations feasible. The growth kinetics, the evolution of dissolved oxygen, and the pH profile generated by each strategy were monitored and used to make sequential adjustments. Once fermentation was finished, the final concentration and viability of the obtained biomass were determined, and performance parameters were calculated in order to select the optimal operating conditions that significantly improved the baseline results. Under the adjusted and standardized batch conditions, concentrations of 6.67E9 CFU/ml were reached after 27 hours of fermentation, followed by a noticeable decrease associated with basification of the culture medium. By applying fed-batch and continuous strategies, significant increases in yields were achieved at similar concentration levels, which led to the design of several production scenarios based on the available equipment usage time and the required batch volume.
Keywords: biofertilizer, liquid fermentation, Rhizobium sp., standardization of processes
Procedia PDF Downloads 177
571 Quality Approaches for Mass-Produced Fashion: A Study in Malaysian Garment Manufacturing
Authors: N. J. M. Yusof, T. Sabir, J. McLoughlin
Abstract:
The garment manufacturing industry involves sequential processes that are subject to uncontrollable variation. The industry depends on the skill of labour in handling a variety of fabrics and accessories, machines, and complicated sewing operations. For these reasons, garment manufacturers have created systems to monitor and control product quality regularly, conducting quality approaches to minimize variation. The aims of this research were to ascertain the quality approaches deployed by Malaysian garment manufacturers in three key areas: quality systems and tools; quality control and types of inspection; and sampling procedures chosen for garment inspection. The research also aimed to distinguish the quality approaches used by companies that supply finished garments to domestic and international markets. Feedback from each company's representative was obtained using an online survey comprising five sections and 44 questions on the organizational profile and the quality approaches used in the garment industry. The results revealed that almost all companies had established their own mechanism of process control by conducting a series of quality inspections of daily production, whether formally set up or not. Quality inspection was the predominant quality control activity in garment manufacturing, and the level of complexity of these activities was substantially dictated by the customers. AQL-based sampling was utilized by companies dealing with the export market, whilst almost all companies concentrating solely on the domestic market were comfortable using their own sampling procedures for garment inspection.
This research provides an insight into the implementation of quality approaches that were perceived as important and useful in the garment manufacturing sector, which is truly labour-intensive.
Keywords: garment manufacturing, quality approaches, quality control, inspection, Acceptance Quality Limit (AQL), sampling
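AQL-based garment inspection of the kind described usually follows a single-sampling plan: draw n pieces from the lot, count defectives, and accept the lot only if the count does not exceed the acceptance number Ac. A sketch follows; the sample size and Ac/Re values shown are typical table values used for illustration, taken as assumptions rather than from the study.

```python
def aql_single_sampling(lot_size, sample_size, acceptance_number, defects_found):
    """Single-sampling acceptance decision.

    Accept the lot if the number of defective garments in the sample does
    not exceed the acceptance number. (Code letters and Ac/Re numbers
    normally come from the ANSI/ASQ Z1.4 / ISO 2859-1 tables; the values
    used in the example below are assumed illustrative table entries.)"""
    if sample_size > lot_size:
        raise ValueError("sample cannot exceed the lot")
    return defects_found <= acceptance_number

# Example: a lot of 1,000 garments inspected with a sample of 80 and Ac=5.
lot_ok = aql_single_sampling(1000, 80, 5, defects_found=4)
lot_rejected = aql_single_sampling(1000, 80, 5, defects_found=6)
```

Exporters in the study used such table-driven plans, while domestic-only suppliers tended to substitute in-house sampling rules.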
Procedia PDF Downloads 445
570 Proposal of a Model Supporting Decision-Making Based on Multi-Objective Optimization Analysis on Information Security Risk Treatment
Authors: Ritsuko Kawasaki (Aiba), Takeshi Hiromatsu
Abstract:
Management is required to understand all information security risks within an organization and to decide which risks should be treated, to what level, and at what cost. However, such decision-making is not easy, because the various measures for risk treatment must be selected at suitable application levels. In addition, some measures may have objectives that conflict with each other, which makes the selection harder still. Moreover, risks generally have trends, and these should also be considered in risk treatment. This paper therefore provides an extension of the model proposed in the previous study. The original model supports the selection of measures by applying a combination of the weighted average method and the goal programming method for multi-objective analysis to find an optimal solution. The extended model adds weights to the risks, where a larger weight indicates a higher-priority risk.
Keywords: information security risk treatment, selection of risk measures, risk acceptance, multi-objective optimization
Procedia PDF Downloads 462
569 Low-Cost IoT System for Monitoring Ground Propagation Waves due to Construction and Traffic Activities to Nearby Construction
Authors: Lan Nguyen, Kien Le Tan, Bao Nguyen Pham Gia
Abstract:
Because of their high cost, specialized dynamic measurement devices are difficult for many colleges to acquire for hands-on teaching. This study connects a dynamic measurement sensor and receiver using an inexpensive Raspberry Pi 4 board, 24-bit ADC circuits, a geophone vibration sensor, and embedded open-source Python programming. The system gathers and analyzes signals for dynamic measurement, ground vibration monitoring, and structural vibration monitoring. It can wirelessly communicate data to a computer and is set up as a network of communication nodes, enabling real-time monitoring of background vibrations at various locations. The device can be utilized for a variety of dynamic measurement and monitoring tasks, including monitoring earthquake vibrations, ground vibrations from construction operations and traffic, and vibrations of building structures.
Keywords: sensors, FFT, signal processing, real-time data monitoring, ground propagation wave, Python, Raspberry Pi 4
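The FFT-based analysis listed in the keywords can be sketched as follows: identify the dominant frequency in a vibration record, which is the basic operation behind monitoring ground vibration from traffic or construction. The sampling rate and test tone below are assumptions for illustration, not the system's actual settings.

```python
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Return the strongest nonzero frequency component of a vibration record."""
    spectrum = np.abs(np.fft.rfft(samples - np.mean(samples)))  # drop the DC offset
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Synthetic geophone record: a clean 12 Hz ground-vibration component.
fs = 500.0                        # Hz, an assumed ADC sampling rate
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s record
signal = 0.8 * np.sin(2 * np.pi * 12.0 * t)
peak = dominant_frequency(signal, fs)
```

On the real device, `samples` would come from the geophone through the 24-bit ADC, and each node would report `peak` (and the full spectrum) to the monitoring computer.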
Procedia PDF Downloads 103
568 Development of a Serial Signal Monitoring Program for Educational Purposes
Authors: Jungho Moon, Lae-Jeong Park
Abstract:
This paper introduces a signal monitoring program developed to help electrical engineering students become familiar with sensors that have digital output. Because the output of digital sensors cannot simply be monitored by a measuring instrument such as an oscilloscope, students tend to have a hard time dealing with them. The monitoring program runs on a PC and communicates with an MCU that reads the output of digital sensors via an asynchronous communication interface. Receiving the sensor data from the MCU, the monitoring program shows time- and/or frequency-domain plots of the data in real time. In addition, the monitoring program provides a serial terminal that enables the user to exchange text information with the MCU while the received data are plotted. The user can easily observe the output of digital sensors and configure them in real time, which helps students who do not have much experience with digital sensors. Though the monitoring program was written in the MATLAB programming language, it runs without MATLAB since it was compiled as a standalone executable.
Keywords: digital sensor, MATLAB, MCU, signal monitoring program
Procedia PDF Downloads 497
567 Bioethanol Production from the Co-Mixture of Jatropha curcas L. Kernel Cake and Rice Straw
Authors: Felix U. Asoiro, Daniel I. Eleazar, Peter O. Offor
Abstract:
As a result of increasing energy demands, research on bioethanol has increased in recent years throughout the world, in a bid to partially or totally replace fossil fuels with renewable energy supplies. The first- and third-generation feedstocks used for biofuel production have fundamental drawbacks. Waste rice straw and cake from a second-generation feedstock such as Jatropha curcas L. kernel (JC) are seen as non-food feedstocks and promising candidates for the industrial production of bioethanol. In this study, JC and rice husk (RH) wastes were characterized for proximate composition. Bioethanol was produced from the residual polysaccharides present in RH and Jatropha seed cake by sequential hydrolytic and fermentative processes at varying mixing proportions (50 g JC/50 g RH, 100 g JC/10 g RH, 100 g JC/20 g RH, 100 g JC/50 g RH, 100 g JC/100 g RH, 100 g JC/200 g RH and 200 g JC/100 g RH) and particle sizes (0.25, 0.5 and 1.00 mm). Mixing proportion and particle size significantly affected both bioethanol yield and some bioethanol properties. Bioethanol yield (%) increased with an increase in particle size. The highest bioethanol yield (8.67%) was produced at a mixing proportion of 100 g JC/50 g RH and a 0.25 mm particle size. The bioethanol had lowest specific gravity and density values of 1.25 and 0.92 g cm-3 and highest values of 1.57 and 0.97 g cm-3, respectively. The highest viscosity value (4.64 cSt) was obtained with 200 g JC/100 g RH at a 1.00 mm particle size. The maximum flash point and cloud point values were 139.9 oC and 23.7 oC (100 g JC/200 g RH) at 1 mm and 0.5 mm particle sizes, respectively. The maximum pour point value recorded was 3.85 oC (100 g JC/50 g RH) at a 1 mm particle size. The paper concludes that bioethanol can be recovered from JC and RH wastes, and that JC and RH blending proportions as well as particle sizes are important factors in bioethanol production.
Keywords: bioethanol, hydrolysis, Jatropha curcas L. kernel, rice husk, fermentation, proximate composition
Procedia PDF Downloads 97
566 Cooperative Jamming for Implantable Medical Device Security
Authors: Kim Lytle, Tim Talty, Alan Michaels, Jeff Reed
Abstract:
Implantable medical devices (IMDs) are medically necessary devices embedded in the human body that monitor chronic disorders or automatically deliver therapies. Most IMDs have wireless capabilities that allow them to share data with an offboard programming device to help medical providers monitor the patient’s health while giving the patient more insight into their condition. However, serious security concerns have arisen as researchers demonstrated these devices could be hacked to obtain sensitive information or harm the patient. Cooperative jamming can be used to prevent privileged information leaks by maintaining an adequate signal-to-noise ratio at the intended receiver while minimizing signal power elsewhere. This paper uses ray tracing to demonstrate how a low number of friendly nodes abiding by Bluetooth Low Energy (BLE) transmission regulations can enhance IMD communication security in an office environment, which in turn may inform how companies and individuals can protect their proprietary and personal information.
Keywords: implantable biomedical devices, communication system security, array signal processing, ray tracing
Procedia PDF Downloads 114
565 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients in each tumor stage, i.e., I-II, III, or IV, was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and approximately 55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with SVM one vs. one. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis
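One of the texture descriptors used, the gray-level co-occurrence matrix, counts how often pairs of gray levels co-occur at a fixed pixel offset; Haralick-style features are then computed from the normalized matrix. A small sketch on a 4-level toy image (not PET data) shows the mechanics:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one offset, normalized to sum to 1."""
    g = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            g[image[r, c], image[r + dy, c + dx]] += 1
    return g / g.sum()

def glcm_features(g):
    """A few of the classic Haralick-style descriptors."""
    i, j = np.indices(g.shape)
    return {
        "contrast": float(((i - j) ** 2 * g).sum()),
        "energy": float((g ** 2).sum()),
        "homogeneity": float((g / (1.0 + np.abs(i - j))).sum()),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = glcm_features(glcm(img, levels=4))
```

In the study, features like these (plus FOS, GLRLM, and Laws' filter outputs, 51 in total) were computed from the PET images before sequential forward selection.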
Procedia PDF Downloads 327
564 Integrated Intensity and Spatial Enhancement Technique for Color Images
Authors: Evan W. Krieger, Vijayan K. Asari, Saibabu Arigela
Abstract:
Video imagery captured for real-time security and surveillance applications is typically captured in complex lighting conditions. These less-than-ideal conditions can result in imagery with underexposed or overexposed regions, and the video is often too low in resolution for certain applications. The purpose of security and surveillance video is to support accurate conclusions based on the images seen in the video; if poor lighting and low resolution occur in the captured video, the ability to draw accurate conclusions from the received information is reduced. We propose a solution to this problem that uses image preprocessing to improve these images before use in a particular application. The proposed algorithm integrates an intensity enhancement algorithm with a super resolution technique. The intensity enhancement portion consists of a nonlinear inverse sine transformation and an adaptive contrast enhancement. The super resolution section is a single-image super resolution technique: a Fourier phase feature-based method that uses a machine learning approach with kernel regression. The proposed technique intelligently integrates these algorithms to produce a high quality output while being more efficient than their sequential use. This integration is accomplished by performing the proposed algorithm on the intensity image produced from the original color image. After enhancement and super resolution, a color restoration technique is employed to obtain an improved-visibility color image.
Keywords: dynamic range compression, multi-level Fourier features, nonlinear enhancement, super resolution
Procedia PDF Downloads 554
563 A Scalable Media Job Framework for an Open Source Search Engine
Authors: Pooja Mishra, Chris Pollett
Abstract:
This paper explores efficient ways to implement various media-updating features such as news aggregation, video conversion, and bulk email handling. All of these jobs share the property that they are periodic in nature, and they all benefit from being handled in a distributed fashion; the data for these jobs also often comes from a social or collaborative source. We isolate the class of periodic, one-round map reduce jobs as a useful setting in which to describe and handle media-updating tasks. As such tasks are simpler than general map reduce jobs, programming them in a general map reduce platform could easily become tedious. This paper presents the MediaUpdater module of the Yioop Open Source Search Engine Web Portal, designed to handle such jobs via an extension of a PHP class. We describe how to implement various media-updating tasks in our system as well as experiments carried out using these implementations on an Amazon Web Services cluster.
Keywords: distributed jobs framework, news aggregation, video conversion, email
Procedia PDF Downloads 299
562 A Mathematical Model for a Two-Stage Assembly Flow-Shop Scheduling Problem with Batch Delivery System
Authors: Saeedeh Ahmadi Basir, Mohammad Mahdavi Mazdeh, Mohammad Namakshenas
Abstract:
Manufacturers often dispatch jobs in batches to reduce delivery costs. However, sending several jobs in batches can have a negative effect on other scheduling-related objective functions, such as minimizing the number of tardy jobs, which is often used to rate managers’ performance in many manufacturing environments. This paper aims to minimize both the number of weighted tardy jobs and the sum of delivery costs for a two-stage assembly flow-shop problem with a batch delivery system. We present a mixed-integer linear programming (MILP) model to solve the problem. As this is an MILP model, a commercial solver (the CPLEX solver) is not guaranteed to find the optimal solution for large-size problems in a reasonable amount of time. We present several numerical examples to confirm the accuracy of the model.
Keywords: scheduling, two-stage assembly flow-shop, tardy jobs, batched delivery system
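The MILP itself is not reproduced in the abstract, but the tardy-jobs objective has a classic polynomial special case worth noting for intuition: on a single machine, without weights or batches, the Moore-Hodgson algorithm minimizes the number of tardy jobs exactly. The sketch below is that textbook algorithm, not the paper's two-stage assembly flow-shop model.

```python
import heapq

def moore_hodgson(jobs):
    """Moore-Hodgson algorithm: minimize the number of tardy jobs on one machine.

    jobs: list of (processing_time, due_date) pairs.
    Schedule jobs in earliest-due-date order; whenever the running completion
    time exceeds a due date, discard the longest job scheduled so far.
    Returns the minimum number of tardy jobs."""
    jobs = sorted(jobs, key=lambda j: j[1])    # earliest due date first
    scheduled = []                             # max-heap of processing times (negated)
    t = 0
    for p, d in jobs:
        t += p
        heapq.heappush(scheduled, -p)
        if t > d:                              # deadline missed: drop the longest job
            t += heapq.heappop(scheduled)      # popped value is negative, so t shrinks
    return len(jobs) - len(scheduled)

n_tardy = moore_hodgson([(2, 3), (3, 5), (2, 6), (4, 9)])
```

Weights, the two-stage assembly structure, and batch delivery costs break this greedy argument, which is why the paper resorts to an MILP formulation.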
Procedia PDF Downloads 461
561 A Method for Reduction of Association Rules in Data Mining
Authors: Diego De Castro Rodrigues, Marcelo Lisboa Rocha, Daniela M. De Q. Trevisan, Marcos Dias Da Conceicao, Gabriel Rosa, Rommel M. Barbosa
Abstract:
The use of association rule algorithms in data mining is recognized as being of great value for knowledge discovery in databases. Very often the number of rules generated is high, sometimes even in databases of small volume, so the success of the analysis of results can be hampered by this quantity. The purpose of this research is to present a method for reducing the number of rules generated by association algorithms. To this end, a computational algorithm was developed using the Weka Application Programming Interface, which allows the method to be executed on different types of databases. After development, tests were carried out on three types of databases: synthetic, model, and real. Efficient results were obtained in reducing the number of rules, with even the worst case presenting a gain of more than 50%, considering support, confidence, and lift as measures. This study concluded that the proposed model is feasible and quite interesting, contributing to the analysis of the results of association rules generated by the algorithms.
Keywords: data mining, association rules, rules reduction, artificial intelligence
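As a sketch of the pruning idea (the thresholds, transactions, and rules below are assumptions for illustration, not the paper's algorithm or Weka's API), support, confidence, and lift can be computed directly from the transaction data and used to discard weak rules:

```python
def metrics(transactions, antecedent, consequent):
    """Support, confidence, and lift of the rule antecedent -> consequent."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions)           # antecedent count
    c = sum(consequent <= t for t in transactions)           # consequent count
    both = sum((antecedent | consequent) <= t for t in transactions)
    support = both / n
    confidence = both / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0
    return support, confidence, lift

def reduce_rules(transactions, rules, min_sup=0.3, min_conf=0.6, min_lift=1.0):
    """Keep only rules passing all three thresholds (lift strictly above 1)."""
    kept = []
    for ante, cons in rules:
        sup, conf, lift = metrics(transactions, ante, cons)
        if sup >= min_sup and conf >= min_conf and lift > min_lift:
            kept.append((ante, cons))
    return kept

tx = [{"bread", "milk"}, {"bread", "butter"},
      {"bread", "milk", "butter"}, {"milk"}]
rules = [({"bread"}, {"milk"}), ({"bread"}, {"butter"}), ({"milk"}, {"bread"})]
kept = reduce_rules(tx, rules)
```

On this toy data only bread → butter survives: the two rules involving milk have lift below 1, meaning the antecedent adds no information beyond the consequent's base rate.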
Procedia PDF Downloads 162
560 Transformer Design Optimization Using Artificial Intelligence Techniques
Authors: Zakir Husain
Abstract:
The main objective of a power transformer design optimization problem is to minimize the total cost and/or the mass of the winding and core material while satisfying all constraints imposed by the standards and by transformer user requirements. The constraints include appropriate limits on winding fill factor, temperature rise, efficiency, no-load current, and voltage regulation. The design optimization task is thus a constrained minimum-cost and/or minimum-mass problem, solved by optimally setting the parameters, geometry, and required magnetic properties of the transformer. In this paper, the above design problems are formulated using a genetic algorithm (GA) and simulated annealing (SA) on the MATLAB platform. The importance of the presented approach stems from two main features. First, the proposed technique provides a reliable and efficient solution to the design optimization problem with several variables. Second, the obtained solution is guaranteed to be a global optimum. The paper also includes a demonstration of the application of the genetic programming (GP) technique to transformer design.
Keywords: optimization, power transformer, genetic algorithm (GA), simulated annealing technique (SA)
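A bare-bones sketch of the simulated-annealing side of the approach is given below (the paper's actual GA/SA formulation runs in MATLAB against real transformer constraints; the one-variable cost function and design limit here are toy stand-ins for quantities like material cost and temperature rise):

```python
import math
import random

def cost(x):
    # toy "material cost" with a penalty for violating a design limit,
    # standing in for constraints such as fill factor or temperature rise
    material = (x - 3.0) ** 2 + 1.0
    penalty = 100.0 * max(0.0, 1.5 - x)   # design infeasible below x = 1.5
    return material + penalty

def anneal(x, temp=10.0, cooling=0.95, steps=2000, seed=0):
    """Simulated annealing: accept worse moves with probability
    exp(-delta / temp), cooling the temperature each step."""
    rng = random.Random(seed)
    cur, cur_c = x, cost(x)
    best, best_c = cur, cur_c
    for _ in range(steps):
        cand = cur + rng.uniform(-0.5, 0.5)       # neighbouring design
        cand_c = cost(cand)
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / temp):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        temp *= cooling
    return best, best_c

x_opt, c_opt = anneal(5.0)   # should settle near the unconstrained optimum x = 3
```

The penalty term is the standard way of folding design constraints into the annealed objective; in the real multi-variable problem each constraint contributes its own penalty.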
Procedia PDF Downloads 584
559 Modeling and Simulation Frameworks for Cloud Computing Environment: A Critical Evaluation
Authors: Abul Bashar
Abstract:
The recent surge in the adoption of cloud computing systems by various organizations has brought forth the challenge of evaluating their performance. One of the major issues faced by cloud service providers and customers is assessing the ability of cloud computing systems to provide the desired services in accordance with QoS and SLA constraints. To this end, an opportunity exists to develop means of ensuring that the desired performance levels of such systems are met under simulated environments. This will eventually minimize service disruptions and performance degradation during the commissioning and operational phases of cloud computing infrastructure. Several simulators and modelers are available for simulating cloud computing systems. This paper therefore presents a critical evaluation of the state-of-the-art modeling and simulation frameworks applicable to cloud computing systems. It compares the prominent simulation frameworks in terms of API features, programming flexibility, operating system requirements, supported services, licensing needs, and popularity. Subsequently, it provides recommendations regarding the choice of the most appropriate framework for researchers, administrators, and managers of cloud computing systems.
Keywords: cloud computing, modeling framework, performance evaluation, simulation tools
Procedia PDF Downloads 503
558 Biodegradation of Direct Red 23 by Bacterial Consortium Isolated from Dye Contaminated Soil Using Sequential Air-lift Bioreactor
Authors: Lata Kumari Dhanesh Tiwary, Pradeep Kumar Mishra
Abstract:
The effluent from various industries, such as the textile, carpet, food, and pharmaceutical industries, poses a big challenge due to its recalcitrant and xenobiotic nature. Recently, biodegradation of dye wastewater by biological means has been widely used because it is eco-friendly and cost-effective, with a high percentage of dye removal from wastewater. The present study deals with the biodegradation and decolourization of Direct Red 23 dye using an indigenously isolated bacterial consortium. The consortium was isolated from a soil sample from a dye-contaminated site near a cluster of carpet industries in Bhadohi, Uttar Pradesh, India. The bacterial strains forming the consortium were identified and characterized by morphological, biochemical, and 16S rRNA gene sequence analysis. The strains, namely Staphylococcus saprophyticus strain BHUSS X3 (KJ439576), Microbacterium sp. BHUMSp X4 (KJ740222), and Staphylococcus saprophyticus strain BHUSS X5 (KJ439576), were used as a consortium for further dye decolorization studies. Experimental investigations were made in a sequencing air-lift bioreactor using a synthetic solution of Direct Red 23 dye, optimizing various parameters for efficient degradation of the dye. The effects of several operating parameters, such as flow rate, pH, temperature, initial dye concentration, and inoculum size, on dye removal were investigated. The efficiency of the isolated bacterial consortium was evaluated in the sequencing air-lift bioreactor at dye concentrations between 100 and 1200 mg/l and at hydraulic retention times (HRTs) of 26 h and 10 h. The maximum dye decolourization of 98% was achieved at an HRT of 26 h. The decolourization of the dye was confirmed using a UV-Vis spectrophotometer and HPLC.
Keywords: carpet industry, bacterial consortia, sequencing air-lift bioreactor
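The decolourization percentage reported from UV-Vis readings is conventionally the relative drop in absorbance at the dye's absorption maximum. A one-line sketch (the absorbance values here are illustrative, not the study's measurements):

```python
def decolourisation_pct(abs_initial, abs_final):
    """Percentage decolourisation from absorbance at the dye's peak wavelength."""
    return (abs_initial - abs_final) / abs_initial * 100.0

# e.g. an absorbance drop from 1.00 to 0.02 corresponds to 98% removal,
# the level reported at the 26 h HRT
pct = decolourisation_pct(1.00, 0.02)
```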
Procedia PDF Downloads 339557 Generative AI: A Comparison of Conditional Tabular Generative Adversarial Networks and Conditional Tabular Generative Adversarial Networks with Gaussian Copula in Generating Synthetic Data with Synthetic Data Vault
Authors: Lakshmi Prayaga, Chandra Prayaga, Aaron Wade, Gopi Shankar Mallu, Harsha Satya Pola
Abstract:
Synthetic data generated by Generative Adversarial Networks and Autoencoders is becoming more common as a way to combat the problem of insufficient data for research purposes. However, generating synthetic data is a tedious task requiring an extensive mathematical and programming background. Open-source platforms such as the Synthetic Data Vault (SDV) and Mostly AI offer a user-friendly platform, accessible to non-technical professionals, for generating synthetic data to augment existing data for further analysis. The SDV also provides additions to the generic GAN, such as the Gaussian copula. We present the results from two synthetic data sets (CTGAN data and CTGAN with Gaussian copula) generated by the SDV and report the findings. The results indicate that the ROC and AUC curves for the data generated by adding the layer of Gaussian copula are much higher than those for the data generated by CTGAN alone.
Keywords: synthetic data generation, generative adversarial networks, conditional tabular GAN, Gaussian copula
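As an aside on the comparison metric, AUC can be computed without any ML library via the Mann-Whitney rank statistic: the probability that a randomly chosen positive example scores above a randomly chosen negative one. A small sketch with made-up scores for two hypothetical models (these are not the study's data):

```python
def auc(labels, scores):
    """AUC as the Mann-Whitney statistic: fraction of positive/negative
    pairs ranked correctly, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
model_a_scores = [0.9, 0.6, 0.4, 0.7, 0.3, 0.2]   # hypothetical classifier A
model_b_scores = [0.9, 0.8, 0.6, 0.5, 0.3, 0.2]   # hypothetical classifier B

auc_a = auc(labels, model_a_scores)   # 7 of 9 pairs ranked correctly
auc_b = auc(labels, model_b_scores)   # perfect separation
```

Comparing AUCs of downstream classifiers trained on each synthetic set is exactly the kind of evaluation the abstract describes.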
Procedia PDF Downloads 84
556 Characterisation of Fractions Extracted from Sorghum Byproducts
Authors: Prima Luna, Afroditi Chatzifragkou, Dimitris Charalampopoulos
Abstract:
Sorghum byproducts, namely bran, stalk, and panicle, are examples of lignocellulosic biomass. These raw materials contain large amounts of polysaccharides, in particular hemicelluloses, celluloses, and lignins, which, if efficiently extracted, can be utilised to develop a range of added-value products with potential applications in the agriculture and food-packaging sectors. The aim of this study was to characterise fractions extracted from sorghum bran and stalk with regard to the physicochemical properties that could determine their applicability as food-packaging materials. A sequential alkaline extraction was applied for the isolation of cellulosic, hemicellulosic, and lignin fractions from sorghum stalk and bran. Lignin content, phenolic content, and antioxidant capacity were also investigated in the case of the lignin fraction. Thermal analysis using differential scanning calorimetry (DSC) and X-ray diffraction (XRD) revealed that the glass transition temperature (Tg) of the cellulose fraction of the stalk was ~78.33 °C at an amorphous state of ~65% and a water content of ~5%. The Tg value of the stalk hemicellulose was slightly lower than that of the bran, at an amorphous state of ~54% and a lower water content (~2%). Hemicelluloses generally showed lower thermal stability than cellulose, probably due to their lack of crystallinity. Additionally, bran had a higher arabinose-to-xylose ratio (0.82) than the stalk, a fact that indicates its low crystallinity. Furthermore, the lignin fraction had a Tg value of ~93 °C at an amorphous state of ~11%. The stalk-derived lignin fraction contained more phenolic compounds (mainly p-coumaric and ferulic acid) and had a higher lignin content and antioxidant capacity than the bran-derived lignin fraction.
Keywords: alkaline extraction, bran, cellulose, hemicellulose, lignin, stalk
Procedia PDF Downloads 300
555 Understanding Learning Styles of Hong Kong Tertiary Students for Engineering Education
Authors: K. M. Wong
Abstract:
Engineering education is crucial to technological innovation and advancement worldwide, generating young talents who are able to integrate scientific principles and design practical solutions to real-world problems. Graduates of engineering curriculums are expected to demonstrate an extensive set of learning outcomes, as required by international accreditation agreements for engineering academic qualifications such as the Washington Accord and the Sydney Accord. On the other hand, students have different preferences for receiving, processing, and internalizing knowledge and skills. If the learning environment suits the learning styles of the students, there is a higher chance that the students will achieve the intended learning outcomes. With proper identification of students' learning styles, corresponding teaching strategies can be developed for more effective learning. This research investigated the learning styles of tertiary students studying higher diploma programmes in Hong Kong. Data from over 200 students in engineering programmes were collected and analysed to identify the students' learning characteristics. A small-scale longitudinal study was then started to gather the students' academic results throughout their two-year engineering studies. Preliminary results suggested that the sampled students were reflective, sensing, visual, and sequential learners. Observations from the analysed data not only provide valuable information for teachers to design more effective teaching strategies, but also provide data for further analysis alongside the students’ academic results. The results of the longitudinal study shed light on areas of improvement for more effective engineering curriculum design for better teaching and learning.
Keywords: learning styles, learning characteristics, engineering education, vocational education, Hong Kong
Procedia PDF Downloads 265
554 The Determinants of Co-Production for Value Co-Creation: Quadratic Effects
Authors: Li-Wei Wu, Chung-Yu Wang
Abstract:
Recently, interest has been generated in the search for a new reference framework for value creation that is centered on the co-creation process. Co-creation implies cooperative value creation between service firms and customers and requires the building of experiences, as well as the resolution of problems, through the combined effort of the parties in the relationship. For customers, value is always co-created through their participation in services, and customers can ultimately determine the value of the service in use. This new approach emphasizes that a customer’s participation in the service process is indispensable to value co-creation. An important feature of service in the context of exchange is co-production, which implies that a certain amount of participation is needed from customers to co-produce a service and hence co-create value. Co-production no doubt helps customers better understand and take charge of their own roles in the service process. This proposal therefore encourages co-production, facilitating value co-creation that is reflected in both customers and service firms. Four determinants of co-production are identified in this study, namely, commitment, trust, asset specificity, and decision-making uncertainty. Commitment is an essential dimension that directly results in successful cooperative behaviors. Trust helps establish a relational environment that is fundamental to cross-border cooperation. Asset specificity motivates co-production because this determinant may enhance the return on asset investment. Decision-making uncertainty prompts customers to collaborate with service firms in making decisions. In other words, customers adjust their roles and become increasingly engaged in co-production when commitment, trust, asset specificity, and decision-making uncertainty are enhanced.
When these determinants are excessive, however, customers will not engage in the co-production process. Although studies have examined the preceding effects, to the best of our knowledge, none has empirically examined the simultaneous effects of all of these curvilinear relationships in a single study. In brief, we suggest that the relationships of commitment, trust, asset specificity, and decision-making uncertainty with co-production are curvilinear, that is, inverse U-shaped. These forms of curvilinear relationships have not been identified in the existing literature on co-production; therefore, they complement extant linear approaches. Most importantly, we aim to consider both the bright and the dark sides of the determinants of co-production.
Keywords: co-production, commitment, trust, asset specificity, decision-making uncertainty
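Inverse U-shaped hypotheses of this kind are typically tested by adding a squared term to the regression and checking that its coefficient is negative. A self-contained sketch with toy data, fitting y = a + bx + cx² by ordinary least squares via the normal equations:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2; returns (a, b, c)."""
    rows = [[1.0, x, x * x] for x in xs]
    # normal equations: (X^T X) beta = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    m = [xtx[i] + [xty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for k in range(col, 4):
                m[r][k] -= f * m[col][k]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        beta[i] = (m[i][3] - sum(m[i][k] * beta[k]
                                 for k in range(i + 1, 3))) / m[i][i]
    return tuple(beta)

# toy data lying exactly on an inverted U: y = 4 - (x - 2)**2
xs = [0, 1, 2, 3, 4]
ys = [4 - (x - 2) ** 2 for x in xs]
a, b, c = fit_quadratic(xs, ys)   # c < 0 signals the inverted-U shape
```

In a study like this one, x would be a determinant such as trust and y the level of co-production; a significantly negative c supports the inverse U-shaped hypothesis.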
Procedia PDF Downloads 188
553 Nasopharyngeal Carriage of Streptococcus pneumoniae in Children under 5 Years of Age before Introduction of Pneumococcal Vaccine (PCV 10) in Urban and Rural Sindh
Authors: Muhammad Imran Nisar, Fyezah Jehan, Tauseef Akhund, Sadia Shakoor, Kanwal Nayani, Furqan Kabir, Asad Ali, Anita Zaidi
Abstract:
Pneumococcal vaccine 10 (PCV 10) was included in the Expanded Program on Immunization (EPI) in Sindh, Pakistan, in February 2013. This study was carried out immediately before the introduction of PCV 10 to establish the baseline pneumococcal carriage and prevalent serotypes in the nasopharynx of children 3-11 months of age in an urban and a rural community in Sindh, Pakistan. An additional sample of children aged 12 to 59 months was drawn from the urban community. Nasopharyngeal specimens were collected from a random sample of children and processed in a central laboratory in Karachi. Pneumococci were cultured on 5% sheep blood agar, and serotyping was performed on bacterial colonies using the CDC standardized sequential multiplex PCR assay. Serotypes were then categorized into vaccine (PCV-10 and PCV-13) and non-vaccine types. A total of 670 children were enrolled. The carriage rate for pneumococcus based on culture positivity was 74% and 79.5% in the infant group in Karachi and Matiari, respectively, and 78.2% for children aged 12 to 59 months in Karachi. The proportion of PCV 10 serotypes in infants was 38.8% and 33.5% in Karachi and Matiari, respectively; in the older age group in Karachi, the proportion was 30.6%. The most common serotypes were 6A, 6B, 23F, 19A, and 18C. This survey establishes vaccine and non-vaccine serotype carriage rates in a vaccine-naïve pediatric population in rural and urban communities in Sindh province. Annually planned surveys in the same communities will inform changes in carriage rates after the introduction and uptake of PCV 10 in these communities.
Keywords: nasopharyngeal carriage, Pakistan, PCV10, pneumococcus
Procedia PDF Downloads 301
552 Suicide, Help-Seeking and LGBT Youth: A Mixed Methods Study
Authors: Elizabeth McDermott, Elizabeth Hughes, Victoria Rawlings
Abstract:
Globally, suicide is the second leading cause of death among 15–29-year-olds. Young people who identify as lesbian, gay, bisexual, and transgender (LGBT) have elevated rates of suicide and self-harm. Despite this increased risk, there is a paucity of research on LGBT help-seeking and suicidality; this is the first national study to investigate LGBT youth help-seeking for suicidal feelings and self-harm. We report on a UK sequential exploratory mixed-methods study that employed face-to-face and online methods in two stages. Stage one involved 29 semi-structured interviews, conducted online (n=15) and face-to-face (n=14), with LGBT youth aged under 25 years. Stage two utilized an online LGBT youth questionnaire employing a community-based sampling strategy (n=789). We found across the sample that LGBT youth who self-harmed or felt suicidal were reluctant to seek help. Results indicated that participants normalized their emotional distress and only asked for help when they reached a crisis point and were no longer coping. Those who self-harmed (p<0.001, OR=2.82), had attempted or planned suicide (p<0.05, OR=1.48), or had experienced abuse related to their sexuality or gender (p<0.01, OR=1.80) were most likely to seek help. A number of interconnecting reasons contributed to participants’ problems in accessing help, the most prominent being: negotiating norms in relation to sexuality, gender, mental health, and age; being unable to talk about emotions; and coping and self-reliance. It is crucial that policies and practices aiming to prevent LGBT youth suicide recognize that norms and normalizing processes connected to sexual orientation and gender identity are additional difficulties LGBT youth face in accessing mental health support.
Keywords: help-seeking, LGBT, suicide, youth
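The odds ratios reported here (e.g. OR=2.82 for self-harm) compare the odds of help-seeking between groups. A minimal sketch with hypothetical counts (not the study's data) shows how an OR is derived from a 2x2 table:

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """OR for a 2x2 table: first two args are outcome yes/no counts in the
    exposed group, last two in the unexposed group."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# hypothetical counts: help-seeking (yes/no) among youth who self-harmed
# versus those who did not -- odds of 1.5 vs 0.5 give OR = 3.0
or_self_harm = odds_ratio(60, 40, 30, 60)
```

An OR above 1 means the outcome (here, seeking help) is more likely in the first group; the study's OR=2.82 is read the same way.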
Procedia PDF Downloads 276
551 Quantitative Proteome Analysis and Bioactivity Testing of New Zealand Honeybee Venom
Authors: Maryam Ghamsari, Mitchell Nye-Wood, Kelvin Wang, Angela Juhasz, Michelle Colgrave, Don Otter, Jun Lu, Nazimah Hamid, Thao T. Le
Abstract:
Bee venom, a complex mixture of peptides, proteins, enzymes, and other bioactive compounds, has been widely studied for its therapeutic applications. This study investigated the proteins present in New Zealand (NZ) honeybee venom (BV) using bottom-up proteomics. Two sample digestion techniques, in-solution digestion and filter-aided sample preparation (FASP), were employed to identify the optimal method for protein digestion. Sequential Window Acquisition of All Theoretical Mass Spectra (SWATH-MS) analysis was conducted to quantify the protein composition of NZ BV and investigate variations across collection years. Our results revealed a high protein content (158.12 µg/mL), with the FASP method yielding a larger number of identified proteins (125) than in-solution digestion (95). SWATH-MS indicated melittin and phospholipase A2 as the most abundant proteins. Significant variations in protein composition across samples from different years (2018, 2019, 2021) were observed, with implications for the venom's bioactivity. In vitro testing demonstrated immunomodulatory and antioxidant activities, with a viable range for cell growth established at 1.5-5 µg/mL. The study underscores the value of proteomic tools in characterizing bioactive compounds in bee venom, paving the way for deeper exploration of their therapeutic potential. Further research is needed to fractionate the venom and elucidate the mechanisms of action of the identified bioactive components.
Keywords: honeybee venom, proteomics, bioactivity, fractionation, SWATH-MS, melittin, phospholipase A2, New Zealand, immunomodulatory, antioxidant
Procedia PDF Downloads 42
550 Groupthink: The Dark Side of Team Cohesion
Authors: Farhad Eizakshiri
Abstract:
The potential of groupthink to explain the deterioration of decision-making ability within a unitary team, and so to cause poor outcomes, has attracted a great deal of attention from a variety of disciplines, including psychology, social and organizational studies, political science, and others. Yet what remains unclear is how and why team members’ strivings for unanimity and cohesion override their motivation to realistically appraise alternative courses of action. In this paper, we report the findings of a sequential explanatory mixed-methods study comprising an experiment with thirty groups of three persons each, followed by interviews with all experimental groups. The experiment examined how individuals aggregate their views in order to reach a consensual group decision concerning the completion time of a task. The results indicated that groups made better estimates when there was no interaction between members than when groups collectively agreed on time estimates. To understand the reasons, the qualitative data and informal observations collected during the task were analyzed through conversation analysis, leading to four reasons why teams neglected divergent viewpoints and reduced the number of ideas being considered: the concurrence-seeking tendency, pressure on dissenters, self-censorship, and the illusion of invulnerability. It is suggested that understanding the dynamics behind these causes of groupthink will help project teams avoid premature group decisions by encouraging careful evaluation of available information and analysis of available decision alternatives and choices.
Keywords: groupthink, group decision, cohesiveness, project teams, mixed-methods research
Procedia PDF Downloads 396