Search results for: real time kernel preemption
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20464

17314 Effect of Incineration Temperature and Time on the Rice Husk Ash (RHA) Silica Structure: A Comparative Study of the Literature with Experimental Work

Authors: Binyamien Ibrahim Rasoul

Abstract:

Controlled burning of rice husk can produce amorphous rice husk ash (RHA) with a high silica content, which can significantly enhance the properties of concrete. This study was undertaken to investigate the relationship between incineration temperature and time in producing RHA with ultimate reactivity. The rice husk samples were incinerated in an electrical muffle furnace at 350°C, 400°C, 425°C, 450°C, 475°C, and 500°C, each for 60 and 90 minutes. The silica structure in the RHA was determined using X-ray diffraction analysis, while the chemical properties were obtained using X-ray fluorescence. The results show that the RHA remained totally amorphous when the husk was incinerated at temperatures up to 425°C, whether for 60 or even 90 minutes. However, when the temperature was increased to 450°C, 475°C, and 500°C, traces of crystalline silica (quartz) were detected; these traces are negligible, as they do not affect the overall ash structure. In conclusion, the results indicate the temperature and time required to produce totally amorphous ash from rice husk.

Keywords: rice husk ash, silica, compressive strength, tensile strength, X-ray diffraction, X-ray fluorescence, pozzolanic activity

Procedia PDF Downloads 139
17313 A Fluorescent Polymeric Boron Sensor

Authors: Soner Cubuk, Mirgul Kosif, M. Vezir Kahraman, Ece Kok Yetimoglu

Abstract:

Boron is an essential trace element for the completion of the life cycle of organisms. Several methods have been proposed for the determination of boron, including acid-base titrimetry, inductively coupled plasma emission spectroscopy, flame atomic absorption, and spectrophotometry. However, these methods have disadvantages such as long analysis times, the requirement for corrosive media such as concentrated sulphuric acid, multi-step sample preparation, and time-consuming procedures. In this study, a selective and reusable fluorescent sensor for boron based on glycosyloxyethyl methacrylate was prepared by photopolymerization. The response characteristics, such as response time, pH, linear range, and limit of detection, were systematically investigated. The excitation/emission maxima of the membrane were at 378/423 nm, respectively. The approximate response time was measured as 50 sec. In addition, the sensor had a very low limit of detection of 0.3 ppb. The sensor was successfully used for the determination of boron in water samples with satisfactory results.

Keywords: boron, fluorescence, photopolymerization, polymeric sensor

Procedia PDF Downloads 267
17312 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that have not evolved in nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Constructing computational models to realize genetic circuits is an especially challenging task, since it requires discovering the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to account for the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation on the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
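
As a rough illustration of the kind of attention-based classifier described here, the sketch below builds a small bidirectional transformer encoder over tokenized DNA sequences and mean-pools it into class logits. It is a minimal stand-in, not the authors' DNABERT variant; the vocabulary size, dimensions, and class count are assumptions for illustration.

```python
# Minimal sketch (not the authors' code): an attention-based classifier for
# tokenized DNA sequences, in the spirit of DNABERT-style encoders.
import torch
import torch.nn as nn

class GeneCircuitClassifier(nn.Module):
    def __init__(self, vocab_size=4096, d_model=128, n_heads=4, n_layers=2,
                 n_classes=2, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)        # k-mer token embeddings
        self.pos = nn.Embedding(max_len, d_model)           # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)           # expression-class logits

    def forward(self, ids):                                 # ids: (batch, seq_len)
        pos = torch.arange(ids.size(1), device=ids.device)
        h = self.encoder(self.tok(ids) + self.pos(pos))     # bidirectional attention
        return self.head(h.mean(dim=1))                     # mean-pool then classify

logits = GeneCircuitClassifier()(torch.randint(0, 4096, (8, 512)))  # dummy batch
```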

Keywords: transformers, generative AI, gene expression design, classification

Procedia PDF Downloads 45
17311 Online Information Seeking: A Review of the Literature in the Health Domain

Authors: Sharifah Sumayyah Engku Alwi, Masrah Azrifah Azmi Murad

Abstract:

The development of information technology and the Internet has been transforming the healthcare industry. The Internet is continuously accessed to seek health information from a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality and trusted health information poses a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate and up to date. Thus, this paper aims to provide an insight into the models and frameworks related to consumers' online health information seeking. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept of information seeking. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. It also aims to analyze the effect of age, gender, and health status as moderators of the factors that influence online health information seeking, i.e., trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems that exist in the real world and, at the same time, to produce a conceptual framework. A final survey will be distributed in five states of Malaysia to solicit feedback on the framework. Data will be analyzed using the SPSS and SmartPLS 3.0 analysis tools. It is hoped that by the end of this study, a novel framework that can improve online health information seeking will have been developed. Finally, this paper concludes with some suggestions on the models and frameworks that could improve online health information seeking.

Keywords: information behavior, information seeking, online health information, technology acceptance model, the theory of planned behavior, UTAUT

Procedia PDF Downloads 256
17310 An Efficient Algorithm of Time Step Control for Error Correction Method

Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim

Abstract:

The aim of this paper is to construct an algorithm of time step control for the error correction method recently developed by one of the authors for solving stiff initial value problems. It is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme is the use of duplicated node points in the generalized Chebyshev polynomials of two different degrees, adding the necessary sample points instead of re-sampling all points. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The constructed algorithm controls both the error and the time step size simultaneously and performs well in terms of computational cost compared to the original method. Two stiff problems are numerically solved to assess the effectiveness of the proposed scheme.
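
For orientation, the sketch below shows the standard error-based step-size rule that adaptive schemes of this kind build on: a step is accepted when the local error estimate falls below the tolerance, and the next step is rescaled by (tol/err)^(1/(p+1)) for a method of order p. It is a generic illustration, not the authors' generalized-Chebyshev algorithm.

```python
# Generic error-based step-size controller (illustrative, not the paper's scheme).
def control_step(h, err, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Return (accepted, next step size) for a method of order p."""
    factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (p + 1))
    factor = min(fac_max, max(fac_min, factor))   # keep step changes bounded
    accepted = err <= tol                         # accept when error is small enough
    return accepted, h * factor

accepted, h_next = control_step(h=0.01, err=2.3e-7, tol=1e-6, p=4)
```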

Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points

Procedia PDF Downloads 554
17309 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide one or more bias correction steps, which are based on biological considerations, such as GC content, and are applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between the estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
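
The alternating idea can be illustrated with a least-squares stand-in for the bilinear model Y ≈ Xβ in which both factors are unknown and each is re-estimated with the other held fixed. This is only a sketch of the alternation pattern, not the paper's AEM algorithm; the dimensions and random data are assumptions.

```python
# Alternating estimation for a bilinear model Y ≈ X @ B with both factors
# unknown (a least-squares stand-in for the paper's alternating EM).
import numpy as np

rng = np.random.default_rng(0)
Y = rng.random((100, 20))            # multi-sample summary matrix (rows x samples)
k = 5                                # number of isoforms (illustrative)

X = rng.random((100, k))             # initialize the unknown design matrix
for _ in range(50):
    # beta-step: solve Y ≈ X @ B for per-sample abundances B, holding X fixed
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # X-step: solve Y.T ≈ B.T @ X.T for X, holding B fixed
    Xt, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)
    X = Xt.T
```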

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 131
17308 Layout Optimization of a Start-up COVID-19 Testing Kit Manufacturing Facility

Authors: Poojan Vora, Hardik Pancholi, Sanket Tajane, Harsh Shah, Elias Keedy

Abstract:

The global COVID-19 pandemic has affected industry drastically in many ways. Even though the vaccine is being distributed quickly and despite the decreasing number of positive cases, testing is projected to remain a key aspect of the ‘new normal’. Improving existing plant layouts and improving safety within the facility are of great importance in today’s industries because of the need to ensure productivity optimization and reduce safety risks. In practice, it is essential for any manufacturing plant to reduce non-value-adding steps, such as the movement of materials, and to rearrange similar processes. In the current pandemic situation, optimized layouts will not only increase safety measures but also decrease the fixed cost per unit manufactured. In our case study, we carefully studied the existing layout and the manufacturing steps of a new Texas start-up company that manufactures COVID testing kits. The effects of the production rate are incorporated with the computerized relative allocation of facilities technique (CRAFT) algorithm to improve the plant layout and estimate the optimization parameters. Our work reduces the company’s material handling time and increases its daily production. Real data from the company are used in the case study to highlight the importance of colleges in fostering small business needs and improving the collaboration between college researchers and industries by using existing models to advance best practices.
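
At its core, CRAFT-style improvement evaluates a material-handling cost (flow times distance, summed over department pairs) and greedily applies pairwise department exchanges that lower it. The sketch below illustrates that loop on toy matrices; the flow and distance data are assumptions, not the company's.

```python
# Pairwise-exchange improvement loop at the heart of CRAFT (toy data).
import itertools
import numpy as np

flow = np.array([[0, 5, 2], [5, 0, 4], [2, 4, 0]])      # trips between departments
dist = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]])      # distances between sites
assign = [0, 1, 2]                                       # department -> site

def cost(a):
    # total handling cost = sum over department pairs of flow * distance
    return sum(flow[i, j] * dist[a[i], a[j]]
               for i, j in itertools.permutations(range(len(a)), 2))

improved = True
while improved:                                          # repeat until no swap helps
    improved = False
    for i, j in itertools.combinations(range(len(assign)), 2):
        trial = assign.copy()
        trial[i], trial[j] = trial[j], trial[i]          # exchange two departments
        if cost(trial) < cost(assign):
            assign, improved = trial, True

print(assign, cost(assign))
```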

Keywords: computerized relative allocation of facilities technique, facilities planning, optimization, start-up business

Procedia PDF Downloads 126
17307 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach

Authors: M. Bahari Mehrabani, Hua-Peng Chen

Abstract:

Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting the deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with a transition probability needs to be developed on the basis of a condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate the transition probability matrices. The deterioration process of the structure related to the transition states is modelled as a Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves are then modified in order to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
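
The machinery described, a transition matrix over discrete condition grades sampled forward by Monte Carlo, can be sketched as follows. The grades, yearly transition probabilities, and failure grade are illustrative assumptions, not values from the paper.

```python
# Markov-chain condition-grade deterioration with Monte Carlo sampling.
import numpy as np

# Rows/columns are condition grades 1-4 (indices 0-3); grade 4 is absorbing.
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])    # illustrative yearly transitions

rng = np.random.default_rng(1)
years, runs, fail_state = 50, 2000, 3       # fail_state = grade 4 (index 3)
failures = np.zeros(years)
for _ in range(runs):
    g = 0                                   # start in the best condition grade
    for t in range(years):
        g = rng.choice(4, p=P[g])           # sample next year's grade
        failures[t] += g == fail_state
prob_failure = failures / runs              # time-dependent probability of failure
```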

Keywords: condition grading, flood defence, performance assessment, stochastic deterioration modelling

Procedia PDF Downloads 218
17306 Development of a Web-Based Application for Intelligent Fertilizer Management in Rice Cultivation

Authors: Hao-Wei Fu, Chung-Feng Kao

Abstract:

In the era of rapid technological advancement, information technology (IT) has become integral to modern life, exerting significant influence across diverse sectors and serving as a catalyst for development in various industries. Within agriculture, the integration of IT offers substantial benefits, notably enhancing operational efficiency. Real-time monitoring systems, for instance, have been widely embraced in agriculture, effectively improving crop management practices. This study specifically addresses the management of rice panicle fertilizer, presenting the development of a web application tailored to handle data associated with rice panicle fertilizer management. Leveraging the normalized difference red edge index, this application optimizes the quantity of rice panicle fertilizer used, providing recommendations to agricultural stakeholders and to service providers in the agricultural information sector. The overarching objective is to minimize costs while maximizing yields. Furthermore, a robust database system has been established to store and manage relevant data for future reference in rice cultivation management. Additionally, the study utilizes the Representational State Transfer (REST) software architectural style to construct an application programming interface (API), facilitating data creation, retrieval, updating, and deletion for users via HyperText Transfer Protocol (HTTP) methods. Future plans involve integrating this API with third-party services to incorporate it into larger frameworks, thus catering to the diverse requirements of various third-party services.
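
As a hedged illustration of the REST-style interface described (create, retrieve, update, and delete over HTTP methods), the sketch below uses Flask with an in-memory store. The endpoint names, fields, and the choice of Flask are assumptions for illustration, not the project's actual API.

```python
# Minimal REST-style CRUD sketch (illustrative endpoints, not the project's API).
from flask import Flask, jsonify, request

app = Flask(__name__)
records = {}                                  # in-memory stand-in for the database
next_id = 1

@app.post("/fields")                          # create a fertilizer record
def create():
    global next_id
    records[next_id] = request.get_json()     # e.g. {"ndre": 0.31, "rate_kg_ha": 40}
    next_id += 1
    return jsonify(id=next_id - 1), 201

@app.get("/fields/<int:rid>")                 # retrieve
def read(rid):
    return jsonify(records[rid])

@app.put("/fields/<int:rid>")                 # update
def update(rid):
    records[rid] = request.get_json()
    return jsonify(records[rid])

@app.delete("/fields/<int:rid>")              # delete
def delete(rid):
    records.pop(rid, None)
    return "", 204
```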

Keywords: application programming interface, HyperText Transfer Protocol, nitrogen fertilizer intelligent management, web-based application

Procedia PDF Downloads 42
17305 Preparation and Characterization of Nanocrystalline Cellulose from Acacia mangium

Authors: Samira Gharehkhani, Seyed Farid Seyed Shirazi, Abdolreza Gharehkhani, Hooman Yarmand, Ahmad Badarudin, Rushdan Ibrahim, Salim Newaz Kazi

Abstract:

Nanocrystalline cellulose (NCC) was prepared by acid hydrolysis and ultrasound treatment of bleached Acacia mangium fibers. The obtained rod-shaped nanocrystals showed a uniform size. The results showed that NCC with high crystallinity can be obtained using 64 wt% sulfuric acid. The effect of the synthesis conditions was investigated. Different reaction times were examined for producing the NCC, and the results revealed that an optimum reaction time has to be used for preparing the NCC. Morphological investigation was performed using transmission electron microscopy (TEM). Fourier transform infrared (FTIR) spectroscopy and thermogravimetric analysis (TGA) were performed. X-ray diffraction (XRD) analysis revealed that the crystallinity increased with successive treatments. The NCC suspension was homogeneous and stable, and no sedimentation was observed over a long period.

Keywords: acid hydrolysis, nanocrystalline cellulose, nano material, reaction time

Procedia PDF Downloads 491
17304 Parathyroid Hormone Receptor 1 as a Prognostic Indicator in Canine Osteosarcoma

Authors: Awf A. Al-Khan, Michael J. Day, Judith Nimmo, Mourad Tayebi, Stewart D. Ryan, Samantha J. Richardson, Janine A. Danks

Abstract:

Osteosarcoma (OS) is the most common type of malignant primary bone tumour in dogs. In addition to their critical roles in bone formation and remodeling, parathyroid hormone-related protein (PTHrP) and its receptor (PTHR1) are involved in the progression and metastasis of many types of tumours in humans. The aims of this study were to determine the localisation and expression levels of PTHrP and PTHR1 in canine OS tissues using immunohistochemistry and to investigate whether this expression correlates with survival time. Formalin-fixed, paraffin-embedded tissue samples from 44 dogs with known survival times that had been diagnosed with primary osteosarcoma were analysed for the localisation of PTHrP and PTHR1. The findings showed that both PTHrP and PTHR1 were present in all OS samples. The dogs with high levels of PTHR1 protein (16%) had decreased survival times (P<0.05) compared to dogs with less PTHR1 protein. PTHrP levels did not correlate with survival time (P>0.05). The results of this study indicate that PTHR1 is expressed differentially in canine OS tissues, and this may be correlated with poor prognosis. PTHR1 may therefore be useful as a prognostic indicator in canine OS and could represent a good therapeutic target in OS.

Keywords: dog, expression, osteosarcoma, parathyroid hormone receptor 1 (PTHR1), parathyroid hormone-related protein (PTHrP), survival

Procedia PDF Downloads 259
17303 Validation of the Linear Trend Estimation Technique for Prediction of Average Water and Sewerage Charge Rate Prices in the Czech Republic

Authors: Aneta Oblouková, Eva Vítková

Abstract:

The article deals with the issue of water and sewerage charge rate prices in the Czech Republic. The research focuses on the analysis of the development of the average water and sewerage charge rate prices in the Czech Republic in the years 1994-2021 and on the validation of the chosen methodology for predicting their development. The research is based on data obtained from the Czech Statistical Office. The aim of the paper is to validate the relevance of the mathematical linear trend estimation technique for calculating predicted average water and sewerage charge rate prices. The real values of the average prices in the years 1994-2018 were obtained from the Czech Statistical Office and converted into a mathematical equation, and the average prices for the years 2019-2021 were then predicted using the linear trend estimation technique. The same type of real data for the years 2019-2021 was also obtained from the Czech Statistical Office, and the observed values were subsequently compared with the values calculated using the chosen methodology. The result of the research is a validation of the chosen mathematical technique as a suitable technique for this research.
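
The validation procedure described amounts to fitting price = a·year + b on the 1994-2018 series and extrapolating to 2019-2021. The sketch below shows that calculation; the price values are synthetic placeholders, not the Czech Statistical Office data.

```python
# Linear trend estimation: fit on 1994-2018, predict 2019-2021 (placeholder data).
import numpy as np

years = np.arange(1994, 2019)
prices = 10.0 + 1.2 * (years - 1994) \
         + np.random.default_rng(2).normal(0, 1, years.size)  # stand-in series

a, b = np.polyfit(years, prices, deg=1)        # least-squares trend line
pred = a * np.arange(2019, 2022) + b           # predicted 2019-2021 prices
print(np.round(pred, 2))                       # compare with observed values
```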

Keywords: Czech Republic, linear trend estimation, price prediction, water and sewerage charge rate

Procedia PDF Downloads 111
17302 Neighborhood Relations in a Context of Cultural and Social Diversity - Qualitative Analysis of a Case Study in a Territory in the Inner City of Lisbon

Authors: Madalena Corte-real, João Pedro Nunes, Bernardo Fernandes, Ana Jorge Correira

Abstract:

This presentation looks, from a sociological perspective, at neighboring practices in the inner city of Lisbon. The capital of Portugal, with half a million inhabitants and inserted in a metropolitan area with almost 2.9 million people, has been in the international spotlight, seen as an interesting city to live in and to invest in, especially in the real estate market. This promotion emerged in the context of the financial crisis, when local authorities aimed to make Lisbon a more competitive city, calling for visitors and for financial and human capital. Especially in the last decade, Portugal's capital has been experiencing a significant increase in migration, from creative and entrepreneurial exiles to economic and political expats. In this context, the territory under analysis is a mixed-use area undergoing rapid transformations in recent years, marked by the presence of newcomers and non-nationals as well as by social and cultural heterogeneity. It is next to one of the main arteries, considered the most multicultural part of the city, and has been presented in the press as one of the coolest neighborhoods in Europe. In view of these aspects, this research aims to address key topics in current urban research: the anonymity often related to big cities, socio-spatial attachment to the neighborhood, and the effects of diversity on the everyday relations of residents and shopkeepers. This case study looks at particularities in local regimes differently affected by growing mobility. Against a backdrop of unidimensional generalizations and a tendency to refer to central countries and global cities, it aims to discuss national and local specificities. In methodological terms, the project comprises an essentially qualitative approach consisting of direct observation techniques and ethnographic methods, as well as semi-structured interviews with residents and local stakeholders, whose narratives are subject to content analysis. The paper starts with a characterization of the broader context of the city of Lisbon, followed by territorial specificities regarding socio-spatial development, namely the morphology of the city and its inner areas as well as the population's socioeconomic profile. Drawing on the residents' and stakeholders' narratives and practices, it assesses perceptions and behaviors regarding the representation of the area, relationships and experiences, routines, and sociability. Results point to a significant presence of neighborhood relations and different forms of support, in particular among the different groups, e.g., old long-time residents, middle-class families, the global creative class, and communities of economic migrants. Fieldwork reveals low levels of place-attachment, although some residents presently report high levels of satisfaction. Engagement with living space, this case study suggests, reveals the social construction and lived experience of neighboring by different groups, but also the way different and contrasting visions and desires are articulated with the profound urban, cultural, and political changes that permeate the area.

Keywords: diversity, Lisbon, neighboring and neighborhood, place-attachment

Procedia PDF Downloads 82
17301 Culture, Consumption, and Markets of Aesthetics: A 10-Year Literature Review

Authors: Chin-Hsiang Chu

Abstract:

This article reviews a decade of literature in the field of marketing and aesthetics. The current market is customer-oriented, and product sales have gradually shifted from practical functionality to visual appearance and concept, highlighting the importance of experiential marketing and the trend towards an 'aesthetics economy'. How to introduce aesthetic concepts and differentiate products has thereby become an important part of marketing management for an organization. Previous studies on marketing aesthetics are rare. Therefore, the purpose of this study is to explore the connection between aesthetics and marketing in the market economy and, by aggregating content through a literature review, to identify research implications for the management of marketing aesthetics, market orientation, customer value, and product development. The review covers the problem statement and background, the evolution of the theory, and the methods and results of the discovery stage. The results found that: (1) the study of aesthetics helps deepen the common understanding of shopping and service environments; (2) importing aesthetics into the perceived value of products increases consumers' willingness to buy and makes even premium products more attractive; (3) marketing personnel identify highly with aesthetics in general marketing management; (4) within the connotation of marketing aesthetics management, five aesthetic characteristics are greatly valued: real-time quality (immediacy), complexity, specificity, attractiveness, and richness; (5) the experience process, by stimulating the senses, the mind, and thinking, allows consumers to form a deeper link with the corporate brand. The results of this study can serve as a guide for businesses in competitive markets for new product development and design.

Keywords: marketing aesthetics, aesthetics economic, aesthetic, experiential marketing

Procedia PDF Downloads 246
17300 COVID-19 Pandemic Influence on Toddlers and Preschoolers’ Screen Time

Authors: Juliana da Silva Cardoso, Cláudia Correia, Rita Gomes, Carolina Fraga, Inês Cascais, Sara Monteiro, Beatriz Teixeira, Sandra Ribeiro, Carolina Andrade, Cláudia Oliveira, Diana Gonzaga, Catarina Prior, Inês Vaz Matos

Abstract:

The average daily screen time (ST) has been increasing in children, even at young ages. This seems to be associated with a higher incidence of neurodevelopmental disorders, and as the time of exposure increases, the greater the functional impact. This study aims to compare the daily ST of toddlers and preschoolers before and during the COVID-19 pandemic. A questionnaire was administered by telephone to parents/caregivers of children between 1 and 5 years old, followed up at 4 primary care units belonging to the Group of Primary Health Care Centers of Western Porto, Portugal. A total of 520 children were included: 52.9% male, mean age 39.4 ± 13.9 months. The mean age of first exposure to screens was 13.9 ± 8.0 months, and most of the children were exposed to more than one screen daily. Considering the WHO recommendations, before the COVID-19 pandemic, 385 (74.0%) and 408 (78.5%) children had excessive ST during the week and at the weekend, respectively; during the lockdown, these values increased to 495 (95.2%) and 482 (92.7%). Maternal education, the child's median age, and the median age of first exposure to screens had a statistically significant association with excessive ST, with OR 0.2 (p = 0.03, 95% CI 0.07-0.86), OR 1.1 (p = 0.01, 95% CI 1.05-1.14), and OR 0.9 (p = 0.05, 95% CI 0.87-0.98), respectively. Most children in this sample had a higher-than-recommended ST, which increased with the onset of the COVID-19 pandemic. These results are worrisome and point to the need for urgent intervention.

Keywords: COVID-19 pandemic, preschoolers, screen time, toddlers

Procedia PDF Downloads 190
17299 Improvement of Transient Voltage Response Using PSS-SVC Coordination Based on ANFIS-Algorithm in a Three-Bus Power System

Authors: I Made Ginarsa, Agung Budi Muljono, I Made Ari Nrartha

Abstract:

Transient voltage response appears in power system operation when additional loading is forced onto a load bus of the power system. In this research, improvement of the transient voltage response is achieved by using power system stabilizer-static var compensator (PSS-SVC) coordination based on an adaptive neuro-fuzzy inference system (ANFIS) algorithm. The main function of the PSS is to add a damping component to damp the rotor oscillation through the automatic voltage regulator (AVR) and excitation system. The ANFIS learning process uses an off-line method, in which the training data used to train the ANFIS model are obtained by simulating the conventional PSS-SVC. The ANFIS model uses 7 Gaussian membership functions for each of two inputs and 49 rules at one output. The ANFIS-PSS and ANFIS-SVC models are then applied to the power system. The simulation results show that the transient voltage response is improved, with a settling time of 4.25 s.
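
The fuzzy layer described, 7 Gaussian membership functions on each of two inputs giving 7 × 7 = 49 rule firing strengths, can be sketched as below, with a zero-order Sugeno-style weighted average standing in for the trained consequents. The centers, widths, and rule weights are illustrative assumptions, not the trained controller.

```python
# Gaussian membership functions and 49-rule firing sketch (illustrative values).
import numpy as np

centers = np.linspace(-1.0, 1.0, 7)              # 7 Gaussian MFs per input
sigma = 0.25

def mu(x):
    """Membership grades of a scalar input against the 7 Gaussian MFs."""
    return np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))

def anfis_output(x1, x2, rule_weights):
    """rule_weights: (7, 7) zero-order consequents, one per rule."""
    w = np.outer(mu(x1), mu(x2))                 # 7 x 7 = 49 rule firing strengths
    return np.sum(w * rule_weights) / np.sum(w)  # weighted-average defuzzification

y = anfis_output(0.2, -0.4, np.ones((7, 7)))     # dummy consequents
```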

Keywords: improvement, transient voltage, PSS-SVC, ANFIS, settling time

Procedia PDF Downloads 557
17298 Formulation and Evaluation of Mouth Dissolving Tablet of Ketorolac Tromethamine by Using Natural Superdisintegrants

Authors: J. P. Lavande, A. V. Chandewar

Abstract:

The mouth dissolving tablet is a rapidly growing and highly accepted drug delivery system. This study was aimed at the development of Ketorolac Tromethamine mouth dissolving tablets (MDTs), which can disintegrate or dissolve rapidly once placed in the mouth. Conventional Ketorolac Tromethamine tablets require water for swallowing and have limitations such as a low disintegration rate and low solubility. The Ketorolac Tromethamine mouth dissolving tablet formulation consists of superdisintegrants, namely heat-modified karaya gum and co-treated heat-modified agar, with microcrystalline cellulose (MCC) as the filler. The tablets were evaluated for weight variation, friability, hardness, in vitro disintegration time, wetting time, in vitro drug release profile, and content uniformity. The obtained results showed low weight variation, good hardness, acceptable friability, and fast wetting times. Tablets in all batches disintegrated within 15-50 sec. The formulations containing the superdisintegrants heat-modified karaya gum and heat-modified agar showed better performance in disintegration and drug release profile.

Keywords: mouth dissolving tablet, Ketorolac tromethamine, disintegration time, heat modified karaya gum, co-treated heat modified agar

Procedia PDF Downloads 273
17297 Enhancing Patch Time Series Transformer with Wavelet Transform for Improved Stock Prediction

Authors: Cheng-yu Hsieh, Bo Zhang, Ahmed Hambaba

Abstract:

Stock market prediction has long been an area of interest for both expert analysts and investors, driven by its complexity and the noisy, volatile conditions it operates under. This research examines the efficacy of combining the Patch Time Series Transformer (PatchTST) with wavelet transforms, specifically focusing on Haar and Daubechies wavelets, in forecasting the adjusted closing price of the S&P 500 index for the following day. By comparing the performance of the augmented PatchTST models with traditional predictive models such as Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformers, this study highlights significant enhancements in prediction accuracy. The integration of the Daubechies wavelet with PatchTST notably excels, surpassing other configurations and conventional models in terms of Mean Absolute Error (MAE) and Mean Squared Error (MSE). The success of the PatchTST model paired with Daubechies wavelet is attributed to its superior capability in extracting detailed signal information and eliminating irrelevant noise, thus proving to be an effective approach for financial time series forecasting.
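
The wavelet preprocessing step described can be illustrated with PyWavelets: decompose the closing-price series with a Daubechies (or Haar) wavelet, shrink the detail coefficients, and reconstruct a denoised series to feed the forecaster. This is a hedged sketch; the decomposition level, threshold rule, and synthetic series are assumptions, not the paper's exact pipeline.

```python
# Wavelet denoising of a price series with PyWavelets (illustrative settings).
import numpy as np
import pywt

prices = np.cumsum(np.random.default_rng(3).normal(0, 1, 512))  # stand-in series

coeffs = pywt.wavedec(prices, "db4", level=3)        # "haar" works the same way
threshold = 0.5 * np.std(coeffs[-1])                 # heuristic noise estimate
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                        for c in coeffs[1:]]         # shrink detail coefficients
denoised = pywt.waverec(coeffs, "db4")               # input for the forecaster
```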

Keywords: deep learning, financial forecasting, stock market prediction, patch time series transformer, wavelet transform

Procedia PDF Downloads 25
17296 Convectory Policing: Reconciling Historic and Contemporary Models of Police Service Delivery

Authors: Mark Jackson

Abstract:

Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. Those results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides for optimal delivery of services. Instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem-oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem-oriented framework has spawned endless variants and approaches, most of which embrace a problem-solving rather than a reactive approach to policing. This has included the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in some way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the underpinning assumptions that have seen the proliferation of the PO approach in the last three decades, and it commences by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet demand for policing services while at the same time maintaining the capacity to deploy staff to develop solutions to the problems which were ultimately manifested in those same calls for service. The CP model relies on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment. In deploying staff to pursue and develop solutions to underpinning problems, there is clearly an opportunity cost: those same staff cannot be allocated to alternative duties while engaged in a problem-solution role. At the same time, resources in use responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls for service. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to taper an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between traditional and contemporary models of policing and the expectations of the community in terms of community policing based on problem-solving models.

Keywords: policing, reactive, proactive, models, efficacy

Procedia PDF Downloads 468
17295 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model

Authors: Navid Daryasafar, Nima Farshidfar

Abstract:

In this article, we attempt to conceal errors in video, with an emphasis on the time-wise use of autoregressive (AR) models. To address this problem, we assume that all information in one or more video frames is lost. The lost frames are then estimated using the temporal information of corresponding pixels in successive frames. Accordingly, after presenting autoregressive models and how they are applied to estimate lost frames, two general methods are presented for using these models. The first method, which is the standard autoregressive approach, estimates the lost frame unidirectionally; usually, in this case, information from previous frames is used for estimating the lost frame. In the second method, information from both the previous and the next frames is used for estimating the lost frame. As a result, this method is known as bidirectional estimation. A series of tests is then carried out to assess the performance of each method in different modes, and the results are compared.
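
A hedged sketch of the unidirectional case: for each pixel, AR(p) coefficients are fitted to the pixel's history across the preceding frames by least squares and used to extrapolate the lost frame. The array shapes, the order p, and the random data are assumptions; the bidirectional variant would additionally regress on subsequent frames.

```python
# Unidirectional AR(p) estimation of a lost frame from per-pixel time series.
import numpy as np

def predict_lost_frame(frames, p=3):
    """frames: (T, H, W) array of the T frames preceding the lost frame."""
    T, H, W = frames.shape
    pred = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            s = frames[:, i, j]                                   # pixel history
            A = np.column_stack([s[k:T - p + k] for k in range(p)])
            coef, *_ = np.linalg.lstsq(A, s[p:], rcond=None)      # fit AR(p)
            pred[i, j] = s[-p:] @ coef                            # extrapolate
    return pred

lost = predict_lost_frame(np.random.default_rng(4).random((20, 8, 8)))
```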

Keywords: error steganography, unidirectional estimation, bidirectional estimation, AR linear estimation

Procedia PDF Downloads 517
17294 Implementation of the Quality Management System and Development of Organizational Learning: Case of Three Small and Medium-Sized Enterprises in Morocco

Authors: Abdelghani Boudiaf

Abstract:

The profusion of studies relating to the concept of organizational learning shows the importance that has been given to this concept in the management sciences. A few years ago, companies leaned towards ISO 9001 certification, which requires the implementation of a quality management system (QMS). For this objective to be achieved, companies must have a set of skills, which pushes them to develop learning through continuous training. The results of empirical research have shown that implementing a QMS in a company promotes the development of learning. It should also be noted that several types of learning are developed in this way. Since skills development is a normative requirement in the context of the quality approach, companies are obliged to qualify and improve the skills of their human resources. Continuous training is the keystone for developing the necessary learning. To carry out continuous training, companies need to be able to identify their real needs by developing training plans based on well-defined engineering. The training process obviously goes through several stages. Initially, training has a general aspect, that is to say, it focuses on topics and actions of a general nature. Subsequently, it is carried out in a more targeted and more precise way to accompany the evolution of the QMS and to implement the changes decided upon each time (change of working method, change of practices, change of objectives, change of mentality, etc.). To address our research problem, we opted for a qualitative research method. It should be noted that the case study method crosses several data collection techniques to explain and understand a phenomenon. Three company cases were studied as part of this research work using different data collection techniques related to this method.

Keywords: changing mentalities, continuing training, organizational learning, quality management system, skills development

Procedia PDF Downloads 100
17293 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created, to the best of our knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs of the quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed, evaluating the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time makes it possible to transform the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobabilistic superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, which are Feynman and Pauli X. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
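
The Grover iteration that the circuit repeats, an oracle phase flip of the marked states followed by inversion about the mean, applied roughly (π/4)·√(N/M) times, can be simulated classically on a toy search space. The NumPy sketch below is a hedged illustration of that amplification loop, not the authors' Q# circuit or TSP oracle; the marked states are arbitrary stand-ins for low-cost tours.

```python
# Classical state-vector simulation of Grover amplification (toy search space).
import numpy as np

n_qubits, marked = 6, {13, 42}                 # toy space; stand-ins for solutions
N = 2 ** n_qubits
state = np.full(N, 1 / np.sqrt(N))             # uniform superposition (Hadamards)

iters = int(np.round(np.pi / 4 * np.sqrt(N / len(marked))))
for _ in range(iters):
    for m in marked:
        state[m] *= -1                         # oracle: phase-flip marked states
    state = 2 * state.mean() - state           # diffusion: inversion about the mean

probs = state ** 2                             # measurement probabilities
print(np.sort(np.argsort(probs)[-2:]))         # most likely outcomes: 13 and 42
```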

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 171
17292 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography

Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner

Abstract:

Nowadays, three-dimensional Cone Beam CT (CBCT) has turned into a widespread clinical routine imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the amount of radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections, which could therefore lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations so as to maximize the performance of the reconstructed image at particular regions of interest. To achieve this, we developed a box phantom consisting of several small polytetrafluoroethylene target spheres at regular distances throughout the entire phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D Point Spread Function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the Full Width at Half Maximum (FWHM) of the local PSFs, each related to a particular target. A lower FWHM value indicates better spatial resolution of the reconstruction results at the target area. One important feature of interventional radiology is that the imaging targets are very well known, since prior knowledge of patient anatomy (e.g., a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and consider it as the digital phantom in our simulations to find the optimal trajectory for a specific target. The simulation phase yields the optimal trajectory, which can then be applied to the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry to perform the simulations and real data acquisition. Our experimental results, based on both simulation and real data, show that our proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections in order to localize the targets. Our results show the proposed optimized trajectories are able to localize the targets as well as a standard circular trajectory while using just one third of the number of projections. Conclusion: We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.
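
The FWHM figure of merit used here can be computed from a sampled PSF profile by locating the two half-maximum crossings with linear interpolation. The sketch below does this on a synthetic Gaussian profile; the profile and sampling are assumptions used only to illustrate the measurement.

```python
# FWHM of a sampled PSF profile via interpolated half-maximum crossings.
import numpy as np

x = np.linspace(-5, 5, 201)                    # mm along a line through the target
psf = np.exp(-x ** 2 / (2 * 1.2 ** 2))         # synthetic stand-in PSF profile

half = psf.max() / 2
above = np.where(psf >= half)[0]
left, right = above[0], above[-1]
# interpolate the exact crossing positions on the rising and falling edges
xl = np.interp(half, [psf[left - 1], psf[left]], [x[left - 1], x[left]])
xr = np.interp(half, [psf[right + 1], psf[right]], [x[right + 1], x[right]])
fwhm = xr - xl                                 # ≈ 2.355 * sigma for a Gaussian
```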

Keywords: CBCT, C-arm, reconstruction, trajectory optimization

Procedia PDF Downloads 124
17291 Comparison of Two Transcranial Magnetic Stimulation Protocols on Spasticity in Multiple Sclerosis - Pilot Study of a Randomized and Blind Cross-over Clinical Trial

Authors: Amanda Cristina da Silva Reis, Bruno Paulino Venâncio, Cristina Theada Ferreira, Andrea Fialho do Prado, Lucimara Guedes dos Santos, Aline de Souza Gravatá, Larissa Lima Gonçalves, Isabella Aparecida Ferreira Moretto, João Carlos Ferrari Corrêa, Fernanda Ishida Corrêa

Abstract:

Objective: To compare two protocols of Transcranial Magnetic Stimulation (TMS) on quadriceps muscle spasticity in individuals diagnosed with Multiple Sclerosis (MS). Method: A clinical crossover study in which six adult individuals diagnosed with MS and spasticity in the lower limbs were randomized to receive one session each of high-frequency (≥ 5 Hz) and low-frequency (≤ 1 Hz) TMS over the motor cortex (M1) hotspot for the quadriceps muscle, with a one-week interval between the sessions. Spasticity was assessed with the Ashworth scale, and the latency time (ms) of the motor evoked potential (MEP) and the central motor conduction time (CMCT) of the bilateral quadriceps muscle were analyzed. Assessments were performed before and after each intervention. The difference between groups was analyzed using the Friedman test, with a significance level of 0.05 adopted. Results: All statistical analyses were performed using SPSS Statistics version 26, with the significance level established at p<0.05 and normality checked with the Shapiro-Wilk test. Parametric data were represented as mean and standard deviation; non-parametric variables as median and interquartile range; and categorical variables as frequency and percentage. There was no clinical change in quadriceps spasticity assessed using the Ashworth scale for the 1 Hz (p=0.813) and 5 Hz (p=0.232) protocols for either limb. Motor evoked potential latency time: in the 5 Hz protocol, there was no significant change for the contralateral side from pre- to post-treatment (p>0.05), while for the ipsilateral side there was a decrease in latency time of 0.07 seconds (p<0.05); in the 1 Hz protocol, there was an increase of 0.04 seconds in the latency time (p<0.05) for the side contralateral to the stimulus, and for the ipsilateral side there was a decrease in latency time of 0.04 seconds (p<0.05), with a significant difference between the contralateral (p=0.007) and ipsilateral (p=0.014) groups. Central motor conduction time: in the 1 Hz protocol, there was no change for either the contralateral side (p>0.05) or the ipsilateral side (p>0.05). In the 5 Hz protocol, there was a small decrease in latency time for the contralateral side (p<0.05), and for the ipsilateral side there was a decrease of 0.6 seconds in the latency time (p<0.05), with a significant difference between groups (p=0.019). Conclusion: A single high- or low-frequency session does not change spasticity, but it was observed that when the low-frequency protocol was performed, latency time increased on the stimulated side and decreased on the non-stimulated side, suggesting that inhibiting the motor cortex increases cortical excitability on the opposite side.

Keywords: multiple sclerosis, spasticity, motor evoked potential, transcranial magnetic stimulation

Procedia PDF Downloads 70
17290 Positive Effect of Manipulated Virtual Kinematic Intervention in Individuals with Traumatic Stiff Shoulder: Pilot Study

Authors: Isabella Schwartz, Ori Safran, Naama Karniel, Michal Abel, Adina Berko, Martin Seyres, Tamir Tsoar, Sigal Portnoy

Abstract:

Virtual reality makes it possible to manipulate the patient’s perception, thereby providing a motivational addition to real-time biofeedback exercises. We aimed to test the effect of a manipulated virtual kinematic intervention on measures of active and passive Range of Motion (ROM), pain, and disability level in individuals with traumatic stiff shoulder. In a double-blinded study, patients with stiff shoulder following proximal humerus fracture and non-operative treatment were randomly divided into a non-manipulated feedback group (NM-group; N=6) and a manipulated feedback group (M-group; N=7). The shoulder ROM, pain, and Disabilities of the Arm, Shoulder and Hand (DASH) scores were tested at baseline and after the 6 sessions, during which the subjects performed shoulder flexion and abduction in front of a graphic visualization of the shoulder angle. The biofeedback provided to the NM-group was the actual shoulder angle, while the feedback provided to the M-group was manipulated so that 10° was constantly subtracted from the actual angle detected by the motion capture system. The M-group showed greater improvement in the active flexion ROM, with a median and interquartile range of 197.1 (140.5-425.0), compared to 142.5 (139.1-151.3) for the NM-group (p=.046). The M-group also showed greater improvement in the DASH scores, with a median and interquartile range of 67.7 (52.8-86.2), compared to 89.7 (83.8-98.3) for the NM-group (p=.022). The manipulated intervention is beneficial for individuals with traumatic stiff shoulder and should be further tested in other populations with orthopedic injuries.

Keywords: virtual reality, biofeedback, shoulder pain, range of motion

Procedia PDF Downloads 112
17289 A Comparative Study of the Effects of Vibratory Stress Relief and Thermal Aging on the Residual Stress of Explosives Materials

Authors: Xuemei Yang, Xin Sun, Cheng Fu, Qiong Lan, Chao Han

Abstract:

Residual stresses, which can be produced during the manufacturing process of plastic bonded explosives (PBX), play an important role in weapon system security and reliability. Residual stresses can and do change in service. This paper mainly studies the influence of vibratory stress relief (VSR) and thermal aging on the residual stress of explosives. Firstly, the residual stress relaxation of PBX under different physical conditions of VSR, such as vibration time, amplitude, and dynamic strain, was studied by the drill-hole technique. The results indicated that the vibratory amplitude, time, and dynamic strain had a significant influence on the residual stress relief of PBX. The rate of residual stress relief of PBX increases first and then decreases with increasing dynamic strain, amplitude, and time, because the activation energy is at first too small to make the PBX yield plastic deformation. Once the dynamic strain, time, and amplitude exceed a certain threshold, the residual stress follows the same rule and decreases sharply; this sharp drop in the residual stress relief rate may have been caused by over-vibration. VSR and thermal aging were also compared. The conclusion is that the reduction ratio of residual stress after the VSR process with suitable vibratory parameters could be equivalent to 73% of that achieved by 7 days of thermal aging. In addition, the density attenuation rate, mechanical properties, and dimensional stability 3 months after the VSR process were almost the same as those after thermal aging. However, compared with traditional thermal aging, VSR takes only a very short time, which greatly improves the efficiency of aging treatment for explosive materials. Therefore, VSR could be a potential alternative technique in the industry for residual stress relaxation of PBX explosives.

Keywords: explosives, residual stresses, thermal aging, vibratory stress relief, VSR

Procedia PDF Downloads 140
17288 Predictive Factors of Nasal Continuous Positive Airway Pressure (NCPAP) Therapy Success in Preterm Neonates with Hyaline Membrane Disease (HMD)

Authors: Novutry Siregar, Afdal, Emilzon Taslim

Abstract:

Hyaline Membrane Disease (HMD) is the main cause of respiratory failure in preterm neonates and is caused by surfactant deficiency. Nasal Continuous Positive Airway Pressure (NCPAP) is the therapy used for HMD. The success of the therapy is determined by gestational age, birth weight, HMD grade, the time of NCPAP administration, and the time of breathing frequency recovery. The aim of this research is to identify the predictive factors of NCPAP therapy success in preterm neonates with HMD. This study used a cross-sectional design based on the medical records of patients who were treated in the Perinatology unit of the Pediatric Department of Dr. M. Djamil Padang Central Hospital from January 2015 to December 2017. The sample comprised eighty-two neonates selected by using the total sampling technique. Data analysis was done by using the Chi-Square test and a multiple logistic regression prediction model. The results showed the success rate of NCPAP therapy reached 53.7%. Birth weight (p = 0.048, OR = 3.34, 95% CI 1.01-11.07), HMD grade I (p = 0.018, OR = 4.95, 95% CI 1.31-18.68), HMD grade II (p = 0.044, OR = 5.52, 95% CI 1.04-29.15), and the time of breathing frequency recovery (p = 0.000, OR = 13.50, 95% CI 3.58-50.83) are the predictive factors of NCPAP therapy success in preterm neonates with HMD. The most significant predictive factor is the time of breathing frequency recovery.

Keywords: predictive factors, the success of therapy, NCPAP, preterm neonates, HMD

Procedia PDF Downloads 44
17287 Academic Success, Problem-Based Learning and the Middleman: The Community Voice

Authors: Isabel Medina, Mario Duran

Abstract:

Although problem-based learning provides students with multiple opportunities for rigorous instructional experiences in which they are challenged to address problems in the community, there are still gaps in connecting community leaders to the PBL process. At a South Texas high school, community participation serves as an integral component of the PBL process. Problem-based learning (PBL) has recently gained momentum due to the increase in global communities that value collaboration and critical thinking. As an instructional approach, PBL engages high school students in meaningful learning experiences. Furthermore, PBL focuses on providing students with a connection to real-world situations that require effective peer collaboration. For PBL leaders, providing students with a meaningful process is as important as the final PBL outcome. To achieve this goal, STEM High School strategically created a space for community involvement to be woven within the PBL fabric. This study examines the impact community members had on PBL students attending a STEM high school in South Texas. At STEM High School, community members represent a support system that works through the PBL process to ensure students receive real-life mentoring from business and industry leaders situated in the community. A phenomenological study using a semi-structured approach was used to collect data about students’ perceptions of community involvement within the PBL process at one South Texas high school. In our proposed presentation, we will discuss how community involvement in the PBL process impacted the academic experience of high school students at STEM High School. We address the instructional concerns PBL critics have with the lack of direct instruction by providing a representation of how STEM High School utilizes community members to assist in shaping the academic experience of students.

Keywords: phenomenological, STEM education, student engagement, community involvement

Procedia PDF Downloads 78
17286 Reed: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models

Authors: Bipasha Sen, Aditya Agarwal

Abstract:

A multilingual automatic speech recognition (ASR) system is a single entity capable of transcribing multiple languages that share a common phone space. The performance of such a system is highly dependent on the compatibility of the languages. State-of-the-art speech recognition systems are built using sequential architectures based on recurrent neural networks (RNNs), limiting the computational parallelization in training. This poses a significant challenge in terms of the time taken to bootstrap and validate the compatibility of multiple languages for building a robust multilingual system. Complex architectural choices based on self-attention networks are made to improve the parallelization, thereby reducing the training time. In this work, we propose Reed, a simple system based on 1D convolutions which uses a very short context to improve the training time. To improve the performance of our system, we use raw time-domain speech signals directly as input. This enables the convolutional layers to learn feature representations rather than relying on handcrafted features such as MFCCs. We report improvements in training and inference times of at least 4x and 7.4x, respectively, with comparable WERs against standard RNN-based baseline systems on SpeechOcean's multilingual low-resource dataset.
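
The architectural idea, short-context 1D convolutions applied directly to the raw waveform to produce per-frame logits over a shared phone space, can be sketched as below in PyTorch. The layer sizes, strides, and phone count are illustrative assumptions, not Reed's actual configuration.

```python
# Short-context 1D convolutions over raw audio (illustrative configuration).
import torch
import torch.nn as nn

class TinyConvASR(nn.Module):
    def __init__(self, n_phones=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=11, stride=5), nn.ReLU(),   # learn features
            nn.Conv1d(64, 128, kernel_size=5, stride=4), nn.ReLU(),  # from raw audio
            nn.Conv1d(128, 128, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.out = nn.Conv1d(128, n_phones, kernel_size=1)  # shared phone space

    def forward(self, wav):                    # wav: (batch, 1, samples)
        return self.out(self.net(wav))         # (batch, n_phones, frames)

logits = TinyConvASR()(torch.randn(2, 1, 16000))   # 1 s of 16 kHz audio
```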

Keywords: convolutional neural networks, language compatibility, low resource languages, multilingual automatic speech recognition

Procedia PDF Downloads 104
17285 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real visual object detection tasks and the required recognition accuracy increase, object detection network models also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another, lightweight network model. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation method that incorporates multi-faceted features, called M-KD. In this paper, when training and optimizing the deep neural network model for object detection, the knowledge of the soft target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the huge difference between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
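
The multi-component loss this kind of distillation uses, a soft-target KL term plus a feature/attention matching term alongside the ground-truth loss, can be sketched as below in PyTorch. The temperature, weights, and attention-map construction are illustrative assumptions, not the exact M-KD formulation.

```python
# Distillation loss combining soft targets and attention transfer (illustrative).
import torch
import torch.nn.functional as F

def kd_loss(s_logits, t_logits, s_feat, t_feat, labels, T=4.0, a=0.7, b=0.1):
    hard = F.cross_entropy(s_logits, labels)               # ground-truth term
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),    # teacher's soft targets
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    # spatial attention maps: channel-wise mean of squared activations
    s_att = F.normalize(s_feat.pow(2).mean(1).flatten(1), dim=1)
    t_att = F.normalize(t_feat.pow(2).mean(1).flatten(1), dim=1)
    att = F.mse_loss(s_att, t_att)                         # attention transfer term
    return (1 - a) * hard + a * soft + b * att

loss = kd_loss(torch.randn(8, 10), torch.randn(8, 10),
               torch.randn(8, 32, 7, 7), torch.randn(8, 64, 7, 7),
               torch.randint(0, 10, (8,)))
```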

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 261