Search results for: improved sparrow search algorithm
6695 The Use of an Ontology Framework for the Automation of Digital Forensics Investigation
Authors: Ahmad Luthfi
Abstract:
One of the main goals of a computer forensic analyst is to determine the cause and effect of the acquisition of digital evidence in order to obtain information relevant to the case being handled. To obtain fast and accurate results, this paper discusses an approach known as the ontology framework. This model uses a structured hierarchy of layers that creates connectivity between the variants and search activities of an investigation, so that computer forensic analysis can be carried out automatically. Two main layers are used, namely analysis tools and the operating system. Using the concept of ontology, these two layers are designed to help the investigator perform the acquisition of digital evidence automatically. The automation methodology of this research utilizes forward chaining, whereby the system searches over investigative steps that are automatically structured in accordance with the rules of the ontology.
Keywords: ontology, framework, automation, forensics
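The forward-chaining control strategy mentioned in the abstract can be sketched in a few lines: rules fire whenever all of their premises are known facts, and the loop repeats until no new fact can be derived. The rule set below is a hypothetical illustration, not the paper’s actual ontology.

```python
# Minimal forward-chaining sketch (illustrative only): a rule fires when all of
# its premises are known facts; iteration stops when no new fact is derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical investigative rules, not taken from the paper's ontology.
rules = [
    ({"disk_image_acquired"}, "run_file_carving"),
    ({"run_file_carving", "os_identified"}, "extract_artifacts"),
]
derived = forward_chain({"disk_image_acquired", "os_identified"}, rules)
print(sorted(derived))   # all four facts derived
```

Each pass adds any conclusion whose premises are already established, so the chain of investigative steps unfolds without manual ordering.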
Procedia PDF Downloads 342
6694 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on years of experience and is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. Wave-dissipating blocks usually weigh more than 1 t and are installed by being suspended from a crane, so it would be time-consuming and costly for inexperienced workers to train on site. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the ideal final shape of the structure. Using this porosity evaluation, the simulator can determine how well the user is able to install the blocks. The voxelization technique is used to calculate the porosity of the structure, simplifying the calculations. Other techniques, such as raycasting and box overlapping, are employed for accurate simulation.
In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization, and the user-demonstrated block installation will be compared with the installation solved by the algorithm.
Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks
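The porosity evaluation described in the abstract can be illustrated with a minimal voxelization sketch (a toy model, not the Unity 3D implementation): both the placed blocks and the ideal envelope are rasterized to unit voxels, and the fill ratio is the fraction of envelope voxels actually occupied.

```python
# Illustrative voxel-based fill-ratio computation. The box coordinates below
# are invented; real block shapes would be rasterized the same way.
def voxelize_box(x0, y0, z0, x1, y1, z1):
    """Return the set of unit-voxel coordinates covered by an axis-aligned box."""
    return {(x, y, z)
            for x in range(x0, x1)
            for y in range(y0, y1)
            for z in range(z0, z1)}

def fill_ratio(block_voxels, envelope_voxels):
    """Fraction of the ideal envelope occupied by placed blocks."""
    return len(block_voxels & envelope_voxels) / len(envelope_voxels)

envelope = voxelize_box(0, 0, 0, 4, 4, 2)                       # ideal shape: 32 voxels
blocks = voxelize_box(0, 0, 0, 4, 2, 2) | voxelize_box(0, 2, 0, 2, 4, 2)
print(fill_ratio(blocks, envelope))                             # 24 of 32 voxels -> 0.75
```

Set intersection makes the occupancy count cheap, which is the same simplification the voxelization technique buys in the simulator.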
Procedia PDF Downloads 103
6693 Improved Benzene Selectivity for Methane Dehydroaromatization via Modifying the Zeolitic Pores by a Dual Templating Approach
Authors: Deepti Mishra, K. K Pant, Xiu Song Zhao, Muxina Konarova
Abstract:
Catalytic transformation of methane, the simplest hydrocarbon, into benzene and valuable chemicals over Mo/HZSM-5 has great economic potential; however, it suffers serious hurdles due to blockage of the micropores by extensive coking at high temperature during methane dehydroaromatization (MDA). Under such conditions, it becomes necessary to design micro-/mesoporous ZSM-5, which has the advantages of uniform dispersibility of MoOx species, and consequently the formation of active Mo sites in the micro-/mesoporous channels, and of lower carbon deposition because of an improved mass transfer rate within the hierarchical pores. In this study, we report a unique strategy to control the porous structure of ZSM-5 through a dual templating approach, utilizing C6 and C12 surfactants as porogens. DFT studies were carried out to correlate the ZSM-5 framework development using the C6 and C12 surfactants with the structure-directing agent. The structural and morphological parameters of the synthesized ZSM-5 were explored in detail to determine the crystallinity, porosity, Si/Al ratio, particle shape, size, and acidic strength, which were further correlated with the physicochemical and catalytic properties of the Mo-modified HZSM-5 catalysts. After Mo incorporation, all the catalysts were tested for the MDA reaction. From the activity test, it was observed that the C6 surfactant-modified hierarchically porous Mo/HZSM-5(H) showed the highest benzene formation rate (1.5 μmol/gcat·s) and longer catalytic stability, up to 270 min of reaction, as compared to the conventional microporous Mo/HZSM-5(C). In contrast, the C12 surfactant-modified Mo/HZSM-5(D) is inferior for the MDA reaction (benzene formation rate: 0.5 μmol/gcat·s).
We ascribe the difference in MDA activity to the hierarchically interconnected meso-/microporous structure of Mo/HZSM-5(H), which precludes the secondary coking reaction of benzene and hence contributes substantial stability to the MDA reaction.
Keywords: hierarchical pores, Mo/HZSM-5, methane dehydroaromatization, coke deposition
Procedia PDF Downloads 82
6692 Endocardial Ultrasound Segmentation Using the Level Set Method
Authors: Daoudi Abdelaziz, Mahmoudi Saïd, Chikh Mohamed Amine
Abstract:
This paper presents a fully automatic segmentation method for the left ventricle at End Systole (ES) and End Diastole (ED) in ultrasound images, by means of an implicit deformable model (level set) based on the Geodesic Active Contour model. A pre-processing Gaussian smoothing stage is applied to the image, which is essential for a good segmentation. Before the segmentation phase, we automatically locate the area of the left ventricle by using a detection approach based on the Hough transform. The result obtained is then used to automate the initialization of the level set model. This initial curve (zero level set) deforms to search for the endocardial border in the image. Finally, a quantitative evaluation was performed on a data set composed of 15 subjects, with a comparison to ground truth (manual segmentation).
Keywords: level set method, Hough transform, Gaussian smoothing, left ventricle, ultrasound images
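The Hough-based localization step can be illustrated with a toy voting scheme in pure Python (a generic sketch for a known radius, not the authors’ implementation): each edge point votes for candidate centres lying at that radius around it, and the most-voted accumulator cell approximates the centre used to initialize the level set.

```python
import math

def hough_circle_center(points, radius, grid=64):
    """Accumulate votes for circle centres at a known radius; return the winner."""
    acc = {}
    for (px, py) in points:
        for t in range(0, 360, 5):                # candidate centres around the point
            cx = round(px - radius * math.cos(math.radians(t)))
            cy = round(py - radius * math.sin(math.radians(t)))
            if 0 <= cx < grid and 0 <= cy < grid:
                acc[(cx, cy)] = acc.get((cx, cy), 0) + 1
    return max(acc, key=acc.get)

# Synthetic edge points on a circle centred at (30, 28) with radius 10.
pts = [(30 + 10 * math.cos(math.radians(a)), 28 + 10 * math.sin(math.radians(a)))
       for a in range(0, 360, 10)]
center = hough_circle_center(pts, 10)
print(center)                                     # expected near (30, 28)
```

In a real pipeline the edge points would come from the Gaussian-smoothed ultrasound image rather than a synthetic circle.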
Procedia PDF Downloads 465
6691 Forensic Investigation into the Variation of Geological Properties of Soils in Bintulu, Sarawak
Authors: Jaithish John
Abstract:
In this paper, a brief overview is provided of developments in interdisciplinary knowledge exchange on the use of soil and geological (earth) materials in the search for evidence. The aim is to provide background information on the role and value of understanding ‘earth materials’, from the crime scene through to microscopic-scale investigations, in supporting law enforcement agencies in solving criminal and environmental investigations. This involves the sampling, analysis, interpretation, and presentation of all this evidence. In this context, field and laboratory methods are highlighted for the controlled/reference sample, the alibi sample, and the questioned sample. The aim of forensic analysis of earth materials is to compare samples taken from a questioned source with the controlled/reference and alibi samples and to determine whether they share distinctive characteristic features crucial to supporting the investigation.
Keywords: soil, texture, grain, microscopy
Procedia PDF Downloads 84
6690 Analysis of Some Produced Inhibitors for Corrosion of J55 Steel in NaCl Solution Saturated with CO₂
Authors: Ambrish Singh
Abstract:
The corrosion inhibition performance of pyran (AP) and benzimidazole (BI) derivatives on J55 steel in 3.5% NaCl solution saturated with CO₂ was investigated by electrochemical, weight loss, surface characterization, and theoretical studies. The electrochemical studies included electrochemical impedance spectroscopy (EIS), potentiodynamic polarization (PDP), electrochemical frequency modulation (EFM), and electrochemical frequency modulation trend (EFMT). Surface characterization was done using contact angle, scanning electron microscopy (SEM), and atomic force microscopy (AFM) techniques. DFT and molecular dynamics (MD) studies were done using the Gaussian and Materials Studio software packages. All the studies suggested good inhibition by the synthesized inhibitors on J55 steel in 3.5% NaCl solution saturated with CO₂ due to the formation of a protective film on the surface. Molecular dynamics simulation was applied to search for the most stable configurations and adsorption energies for the interaction of the inhibitors with the Fe (110) surface.
Keywords: corrosion, inhibitor, EFM, AFM, DFT, MD
Procedia PDF Downloads 105
6689 Experimental Investigation of the Thermal Performance of Fe2O3 under Magnetic Field in an Oscillating Heat Pipe
Authors: H. R. Goshayeshi, M. Khalouei, S. Azarberamman
Abstract:
This paper presents an experimental investigation regarding the use of Fe2O3 nanoparticles added to kerosene as a working fluid under a magnetic field. The experiment was performed on an Oscillating Heat Pipe (OHP) in order to measure the temperature distribution and compare the heat transfer rate of the oscillating heat pipe with and without a magnetic field. Results showed that the addition of Fe2O3 nanoparticles under a magnetic field improved the thermal performance of the OHP compared with the case without a magnetic field. Furthermore, applying a magnetic field enhances the heat transfer characteristics of Fe2O3 in both start-up and steady-state conditions.
Keywords: experimental, oscillating heat pipe, heat transfer, magnetic field
Procedia PDF Downloads 264
6688 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards protein-coding regions alone; therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms.
The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three successive iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718 for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC area of 0.97, indicating an improved predictive ability. The PNRI identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identified the protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
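The training loop described above (sigmoid activation, log-likelihood objective, iterative gradient updates, thresholded classification) can be sketched generically. The two-feature toy data below are invented and unrelated to the 37,503-point transcriptome set.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=1.0, epochs=300):
    """Coordinate-wise gradient ascent on the logistic log-likelihood."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for i in range(len(w)):
            grad = sum((yi - sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))) * xi[i]
                       for xi, yi in zip(X, y))
            w[i] += lr * grad / len(X)
    return w

# Toy data: a bias term plus one feature; labels are linearly separable.
X = [[1.0, 0.2], [1.0, 0.4], [1.0, 0.7], [1.0, 0.9]]
y = [0, 0, 1, 1]
w = train(X, y)
preds = [1 if sigmoid(sum(wi * xi for wi, xi in zip(w, row))) > 0.5 else 0
         for row in X]
print(preds)
```

The fixed 0.5 cutoff stands in for the paper’s dynamic thresholding, which would instead tune the cutoff from the ROC behaviour.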
Procedia PDF Downloads 68
6687 Screening Tools and Their Accuracy for Common Soccer Injuries: A Systematic Review
Authors: R. Christopher, C. Brandt, N. Damons
Abstract:
Background: The sequence-of-prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that the collection and recording of data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence, the data regarding their applicability, validity, and reliability are sparse, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, MEDLINE, Science Direct, PubMed, and grey literature were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English studies dating back to the year 2000 were included. Two blind, independent reviewers selected and appraised articles on a 9-point scale for inclusion, as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included for the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively.
Seven studies were graded as low and three studies as high risk of bias. Only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The accuracy of the screening tools was of high reliability, sensitivity, and specificity (calculated as ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implementation in evidence-based practice.
Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity
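For reference, the reported accuracy quantities (sensitivity, specificity, F1) all derive from a 2×2 confusion matrix. The sketch below uses invented counts chosen only to land near the pooled 0.64 point estimate, not data from the reviewed studies.

```python
# Screening-accuracy metrics from a confusion matrix (generic formulas; the
# counts are fabricated for illustration).
def screen_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

sens, spec, f1 = screen_metrics(tp=32, fp=18, tn=32, fn=18)
print(round(sens, 2), round(spec, 2), round(f1, 2))   # 0.64 0.64 0.64
```

A screening tool is only clinically useful when both rates are high at the same cut-off, which is why the review reports them jointly with confidence intervals.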
Procedia PDF Downloads 179
6686 6G: Emerging Architectures, Technologies and Challenges
Authors: Abdulrahman Yarali
Abstract:
The advancement of technology never stops, because the demands for improved internet and communication connectivity keep increasing. Just as 5G networks are rolling out, the world has begun to talk about sixth-generation (6G) networks. The semantics of 6G are more or less the same as those of 5G, because 6G likewise strives to boost speeds, machine-to-machine (M2M) communication, and latency reduction. However, some of the distinctive focuses of 6G include the optimization of networks of machines through super speeds and innovative features. This paper discusses many aspects of the technologies, architectures, challenges, and opportunities of 6G wireless communication systems.
Keywords: 6G, characteristics, infrastructures, technologies, AI, ML, IoT, applications
Procedia PDF Downloads 25
6685 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss
Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin
Abstract:
Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and the obstruction of links often lead to loss of URT links’ capacities in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure the influence of links’ capacities loss on the reliability and transport service quality of a URT network, passengers are divided into three categories in case of links’ capacities loss. Passengers in category 1 are less affected by the loss of links’ capacities: their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected by the loss of links’ capacities: their travel is not reliable, since their travel quality is seriously reduced; however, they can still travel on the URT. Passengers in category 3 cannot travel on the URT because the passenger flow on their travel paths exceeds the capacities; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the URT network is related to passengers’ travel time, passengers’ transfer times, and whether seats are available to passengers. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort. Therefore, passengers’ average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of links’ capacities loss on the transport service quality of the URT network is measured by passengers’ relative average generalized travel cost with and without links’ capacities loss. The proportion of passengers affected by a link and the betweenness of links are used to determine the important links in the URT network.
The stochastic user equilibrium distribution model based on the improved logit model is used to determine passengers’ categories and calculate passengers’ generalized travel cost in case of links’ capacities loss; it is solved with the method of successive weighted averages. The reliability and transport service quality indicators of the URT network are then calculated from the solution. Taking the Wuhan Metro as a case, the reliability and transport service quality of the Wuhan metro network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by a link can effectively identify important links, i.e., links that have great influence on the reliability and transport service quality of the URT network. The important links are mostly connected to transfer stations, and the passenger flow on important links is high. With an increase in the number of failed links and in the proportion of capacity lost, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases. When the number of failed links and the proportion of capacity lost increase to a certain level, the decline in transport service quality is weakened.
Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links
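The method of successive (weighted) averages used as the solver can be illustrated on a deliberately tiny two-route logit assignment. The cost functions, demand, and dispersion parameter below are hypothetical, not the Wuhan Metro model.

```python
import math

DEMAND = 100.0

def route_costs(f1):
    """Linear congestion costs for two parallel routes (invented parameters)."""
    f2 = DEMAND - f1
    return 10 + 0.1 * f1, 12 + 0.05 * f2

def logit_share(c1, c2, theta=0.5):
    """Logit probability of choosing route 1 given the two route costs."""
    e1, e2 = math.exp(-theta * c1), math.exp(-theta * c2)
    return e1 / (e1 + e2)

f1 = DEMAND / 2                                  # start with an even split
for k in range(1, 200):
    c1, c2 = route_costs(f1)
    aux = DEMAND * logit_share(c1, c2)           # all-at-once logit loading
    f1 += (aux - f1) / k                         # MSA step size 1/k
c1, c2 = route_costs(f1)
print(round(f1, 1), round(c1, 2), round(c2, 2))
```

The shrinking 1/k step averages out the oscillation that plain fixed-point iteration would exhibit here, driving the flow to the stochastic user equilibrium where the loaded flow reproduces itself.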
Procedia PDF Downloads 128
6684 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and deeply delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. The cardiology field is therefore one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize. The performance of the model for the detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise, even when presented with data it has not been trained on.
It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
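As a point of contrast with the learned detector, a rule-based R-peak detector can be sketched in a few lines: thresholding plus a local-maximum and refractory-gap check on a synthetic trace. This is a baseline illustration only, not the paper’s deep learning model.

```python
# Simple threshold-and-local-maximum R-peak detector (baseline sketch).
def detect_r_peaks(signal, threshold, min_gap):
    """Return indices of local maxima above threshold, at least min_gap apart."""
    peaks, last = [], -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]
                and i - last >= min_gap):
            peaks.append(i)
            last = i
    return peaks

# Synthetic "ECG": flat baseline with spikes at known sample indices.
sig = [0.1] * 300
for r in (50, 150, 250):
    sig[r - 1], sig[r], sig[r + 1] = 0.4, 1.0, 0.4
print(detect_r_peaks(sig, threshold=0.5, min_gap=40))   # [50, 150, 250]
```

Fixed rules like these degrade quickly under baseline wander and noise, which is exactly the regime where the learned model is reported to keep working.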
Procedia PDF Downloads 186
6683 The Effect of Core Training on Physical Fitness Characteristics in Male Volleyball Players
Authors: Sibel Karacaoglu, Fatma Ç. Kayapinar
Abstract:
The aim of the study is to investigate the effect of a core training program on physical fitness characteristics and body composition in male volleyball players. Twenty-six male university volleyball team players aged 19 to 24 years who had no health problems or injuries participated in the study. Subjects were divided randomly into training (TG) and control (CG) groups. Data from the twenty-one players who completed all training sessions were used for the statistical analysis (TG, n=11; CG, n=10). The core training program was applied to the training group three days a week for 10 weeks, while the control group did not receive any training. Before and after the 10-week training program, pre- and post-testing was performed, comprising body composition measurements (weight, BMI, bioelectrical impedance analysis) and physical fitness measurements, including flexibility (sit-and-reach test), muscle strength (back, leg, and grip strength by dynamometer), muscle endurance (sit-up and push-up tests), power (one-legged jump and vertical jump tests), speed (20 m sprint, 30 m sprint), and balance (one-legged standing test). Changes between the pre- and post-test values of the groups were determined using the dependent t test. According to the statistical analysis, no significant difference was found in body composition in either group between the pre- and post-test values. In the training group, all physical fitness measurements improved significantly after the core training program (p<0.05), except the 30 m speed and handgrip strength (p>0.05). In contrast, in the control group only the 20 m speed test values improved in the post-test period (p<0.05), while the other physical fitness test values did not differ (p>0.05) between the pre- and post-test measurements. The results of the study suggest that the core training program has a positive effect on physical fitness characteristics in male volleyball players.
Keywords: body composition, core training, physical fitness, volleyball
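The dependent (paired) t test used for the pre/post comparisons reduces to the mean of the paired differences divided by its standard error. The scores below are fabricated for illustration and are not study data.

```python
import math

def paired_t(pre, post):
    """Paired t statistic (df = n - 1) for pre/post measurements."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical pre/post scores for one fitness measurement.
pre = [30.0, 28.5, 33.0, 31.0, 29.5]
post = [32.0, 30.0, 34.5, 33.5, 30.5]
print(round(paired_t(pre, post), 2))                  # 6.67
```

The resulting statistic would then be compared against the t distribution with n-1 degrees of freedom to obtain the p-values reported in the abstract.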
Procedia PDF Downloads 346
6682 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity
Authors: Yuri Laevsky, Tatyana Nosova
Abstract:
The phenomenon of filtration gas combustion (FGC) was discovered experimentally at the beginning of the 1980s. It has a number of important applications in such areas as chemical technologies, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and by the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed with the use of the two-temperature approach, where at each point of the continuous medium we specify the solid and gas phases with a Newtonian heat exchange between them. The construction of a computational scheme is based on the principles of the mixed finite element method with the use of a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term 'front propagation velocity' makes sense for settled motion, where certain analytical formulae linking velocity and equilibrium temperature are correct.
The numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process. In this way, the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of such an interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows consideration of the semi-Lagrangian approach to the solution of the problem.
Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation
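The general idea of tracking a front on a grid can be illustrated on a synthetic travelling profile: locating the front by linear interpolation of a level crossing (one simple stabilization, used here in place of the authors’ analytical velocity formula) smooths the staircase effect that makes a raw grid derivative of the front position unstable, and recovers the imposed speed.

```python
import math

def front_position(u, level, dx):
    """Locate the level crossing of a monotone front by linear interpolation."""
    for i in range(len(u) - 1):
        if u[i] >= level > u[i + 1]:
            frac = (u[i] - level) / (u[i] - u[i + 1])
            return (i + frac) * dx
    return None

def profile(t, v=0.2, dx=0.1, n=100):
    """Synthetic steep front travelling at constant speed v (not the FGC model)."""
    return [1.0 / (1.0 + math.exp((i * dx - v * t) / 0.05)) for i in range(n)]

dx, dt = 0.1, 0.5
x1 = front_position(profile(10.0), 0.5, dx)
x2 = front_position(profile(10.0 + dt), 0.5, dx)
velocity = (x2 - x1) / dt
print(round(velocity, 3))                        # recovers the imposed speed 0.2
```

Rounding the crossing to the nearest node instead of interpolating would quantize the position to multiples of dx and make the finite-difference velocity jump between 0 and dx/dt.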
Procedia PDF Downloads 301
6681 Design of Nanoreinforced Polyacrylamide-Based Hybrid Hydrogels for Bone Tissue Engineering
Authors: Anuj Kumar, Kummara M. Rao, Sung S. Han
Abstract:
Bone tissue engineering has emerged as a potential alternative method for treating localized bone defects or diseases, congenital deformation, and surgical reconstruction. The design and fabrication of an ideal scaffold that restores damaged bone tissue via cell attachment, proliferation, and differentiation in a three-dimensional (3D) biological micro-/nano-environment is a great challenge. A hydrogel system composed of a highly hydrophilic 3D polymeric network is able to mimic some of the functional physical and chemical properties of the extracellular matrix (ECM) and may provide a suitable 3D micro-/nano-environment resembling native bone tissue. Such a hydrogel system is highly permeable and facilitates the transport of nutrients and metabolites. However, the use of hydrogels in bone tissue engineering is limited by their low mechanical properties (toughness and stiffness), which continue to pose challenges in designing and fabricating tough and stiff hydrogels with improved bioactive properties. For this purpose, in our lab, polyacrylamide-based hybrid hydrogels were synthesized involving sodium alginate, cellulose nanocrystals, and silica-based glass using one-step free-radical polymerization. The results showed good in vitro apatite-forming ability (biomineralization), improved mechanical properties (compressive strength and stiffness in both wet and dry conditions), and in vitro osteoblastic (MC3T3-E1 cells) cytocompatibility. For the in vitro cytocompatibility assessment, both qualitative (attachment and spreading of cells using FESEM) and quantitative (cell viability and proliferation using the MTT assay) analyses were performed.
The obtained hybrid hydrogels may potentially be used in bone tissue engineering applications once in vivo characterization has been established.
Keywords: bone tissue engineering, cellulose nanocrystals, hydrogels, polyacrylamide, sodium alginate
Procedia PDF Downloads 151
6680 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance
Authors: Abdullah Al Farwan, Ya Zhang
Abstract:
In today’s educational arena, it is critical to understand educational data and to evaluate its important aspects, particularly data on student achievement. Educational Data Mining (EDM) is a research area that focuses on uncovering patterns and information in data from educational institutions. Teachers who can predict their students' class performance can use this information to improve their teaching. Such predictions yield knowledge that can serve a wide range of objectives; for example, they can inform strategic plans for delivering high-quality education. Based on historical data, this paper recommends employing data mining techniques to forecast students' final grades. In this study, five data mining methods, Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest, were applied with wrapper-based feature selection to two datasets relating to Portuguese language and mathematics classes. The results showed the effectiveness of data mining methodologies in predicting student academic success. The classification accuracy achieved with the selected algorithms lies in the range of 70-94%. Among the selected classification algorithms, the lowest accuracy, 70.45%, is achieved by the Multi-layer Perceptron, and the highest, 94.10%, by Random Forest. This work can assist educational administrators in identifying poor-performing students at an early stage and implementing motivational interventions to improve their academic success and prevent dropout.
Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance
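The wrapper-based selection described in this abstract can be sketched as follows. This is a minimal sketch, assuming scikit-learn's SequentialFeatureSelector wrapped around a Random Forest; synthetic data stands in for the (non-public here) Portuguese student datasets, so the numbers are illustrative only.

```python
# Sketch of wrapper-based feature selection around a Random Forest classifier.
# Synthetic stand-in data: 300 "students", 10 features, 4 of them informative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=50, random_state=0)

# Wrapper approach: greedily add features, scoring each candidate subset by
# the classifier's own cross-validated accuracy.
selector = SequentialFeatureSelector(rf, n_features_to_select=4,
                                     direction="forward", cv=3)
selector.fit(X, y)
X_selected = selector.transform(X)

accuracy = cross_val_score(rf, X_selected, y, cv=5).mean()
```

The wrapper evaluates feature subsets with the target classifier itself, which is exactly what distinguishes it from cheaper filter methods.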
Procedia PDF Downloads 166
6679 Benefits of Whole-Body Vibration Training on Lower-Extremity Muscle Strength and Balance Control in Middle-Aged and Older Adults
Authors: Long-Shan Wu, Ming-Chen Ko, Chien-Chang Ho, Po-Fu Lee, Jenn-Woei Hsieh, Ching-Yu Tseng
Abstract:
This study aimed to determine the effects of whole-body vibration (WBV) training on lower-extremity muscle strength and balance control performance among community-dwelling middle-aged and older adults in the United States. Twenty-nine participants without any contraindication to WBV exercise completed all study procedures. Participants were randomly assigned to perform body-weight exercise with an individualized vibration frequency and amplitude, a fixed vibration frequency and amplitude, or no vibration. Isokinetic knee extensor power, limits of stability, and sit-to-stand tests were performed at baseline and after 8 weeks of training. Neither the individualized nor the fixed frequency-amplitude WBV training protocol improved isokinetic knee extensor power. The limits-of-stability endpoint excursion score for the individualized frequency-amplitude group increased by 8.8 (12.9%; p = 0.025) after training. No significant differences were observed in the fixed and control groups. The maximum excursion score for the individualized frequency-amplitude group increased by 9.2 from baseline (11.5%; p = 0.006) after training. The average weight transfer time score decreased significantly by 0.21 s in the fixed group. Participants in the individualized group showed a significant increase (3.2%) in the weight rising index score after 8 weeks of WBV training. These results suggest that 8 weeks of WBV training improved limits of stability and sit-to-stand performance. Future studies should determine whether WBV training improves other factors that can influence posture control.
Keywords: whole-body vibration training, muscle strength, balance control, middle-aged and older adults
Procedia PDF Downloads 223
6678 Transesterification of Jojoba Oil Wax Using Microwave Technique
Authors: Moataz Elsawy, Hala F. Naguib, Hilda A. Aziz, Eid A. Ismail, Labiba I. Hussein, Maher Z. Elsabee
Abstract:
Jojoba oil-wax is extracted from the seeds of the jojoba (Simmondsia chinensis Link Schneider), a perennial shrub that grows in semi-desert areas of Egypt and in some other parts of the world. The main uses of jojoba oil-wax are in the cosmetics and pharmaceutical industries, but new uses could arise from the search for new energy crops. This paper summarizes a process to convert jojoba oil-wax to biodiesel by transesterification with ethanol and a series of aliphatic alcohols using a more economical and energy-saving method in a domestic microwave. The effects of microwave time and power on the extent of the transesterification with ethanol and other aliphatic alcohols have been studied. The separation of the alkyl esters from the fatty-alcohol-rich fraction has been achieved in a single low-temperature (−18°C) crystallization step from low-boiling-point petroleum ether. Gas chromatography has been used to follow the transesterification process. All products have been characterized by spectral analysis.
Keywords: jojoba oil, transesterification, microwave, gas chromatography, jojoba esters, jojoba alcohol
Procedia PDF Downloads 462
6677 Formation of in-situ Ceramic Phase in N220 Nano Carbon Containing Low Carbon MgO-C Refractory
Authors: Satyananda Behera, Ritwik Sarkar
Abstract:
In the iron and steel industries, MgO-C refractories are widely used in basic oxygen furnaces, electric arc furnaces, and steel ladles due to their excellent corrosion resistance, thermal shock resistance, and other hot properties. Conventional magnesia-carbon refractories contain about 8-20 wt% carbon, but the use of carbon is also associated with disadvantages such as oxidation, low fracture strength, high heat loss, and higher carbon pickup in steel. The challenge is therefore to produce MgO-C refractories with low carbon content without compromising these beneficial properties. Nano carbon, with its finer particles, can mix and distribute uniformly throughout the matrix and can improve mechanical, thermo-mechanical, corrosion, and other refractory properties. Previous experience with nano carbon in low-carbon MgO-C refractories has indicated an optimum nano carbon content of around 1 wt%. This optimum content was used in MgO-C compositions with flaky graphite, with aluminum and silicon metal powders as antioxidants. These low-carbon MgO-C refractory compositions were prepared by conventional manufacturing techniques. A conventional MgO-C refractory containing 16 wt% flaky graphite was also prepared in parallel under similar conditions. The developed products were characterized for various refractory-related properties. The nano carbon containing compositions showed better mechanical and thermo-mechanical properties and oxidation resistance than the conventional composition. The improvement in properties is associated with the formation of in-situ ceramic phases such as aluminum carbide, silicon carbide, and magnesium aluminate spinel. The higher surface area and reactivity of N220 nano carbon black resulted in greater formation of in-situ ceramic phases, even at a much lower carbon content.
Nano carbon containing compositions were thus found to have improved properties compared with conventional MgO-C refractories at a much lower total carbon content.
Keywords: N220 nano carbon black, refractory properties, conventional manufacturing techniques, conventional magnesia carbon refractories
Procedia PDF Downloads 367
6676 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many riboswitches are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include only slight structural considerations. These pattern-matching methods were the first to be applied for riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have since been developed to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models are also employed in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary-structure considerations based on RNA folding prediction, using several strategies. The first was to use window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary-structure consideration, with sequence treated as a constraint.
However, this method cannot be used genome-wide: each folding prediction by energy minimization in the moving window is computationally expensive, so only the vicinity of genes of interest can be scanned. The second idea was to remedy this inefficiency by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search, which is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and on our in-house fragment-based inverse RNA folding program RNAfbinv in particular, can find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found both by the moving-window approach and by the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
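The moving-window scan described above can be illustrated in miniature. In this sketch a Nussinov-style base-pair-maximization dynamic program stands in for the thermodynamic energy-minimization folding actually used in practice, so the window size and pairing threshold are illustrative assumptions only.

```python
# Sliding-window scan for structured RNA regions. Windows whose maximum
# nested base pairing exceeds a threshold are flagged as fold candidates.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_base_pairs(seq, min_loop=3):
    """Nussinov DP: maximum number of nested base pairs in seq,
    with hairpin loops of at least min_loop unpaired bases."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # j unpaired
            for k in range(i, j - min_loop):     # j paired with k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

def scan_windows(genome, window=40, step=10, min_pairs=10):
    """Return (position, pair count) for windows that fold densely."""
    hits = []
    for pos in range(0, max(1, len(genome) - window + 1), step):
        pairs = max_base_pairs(genome[pos:pos + window])
        if pairs >= min_pairs:
            hits.append((pos, pairs))
    return hits
```

The quadratic-to-cubic cost of folding each window is exactly why the abstract notes the approach is restricted to the vicinity of genes of interest.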
Procedia PDF Downloads 234
6675 Modeling of Leaks Effects on Transient Dispersed Bubbly Flow
Authors: Mohand Kessal, Rachid Boucetta, Mourad Tikobaini, Mohammed Zamoum
Abstract:
The leakage problem for two-component fluid flow is modeled as a transient one-dimensional homogeneous bubbly flow, taking into account the effect of a leak located at the midpoint of the pipeline. The corresponding three conservation equations are solved numerically by an improved method of characteristics. The obtained results are explained and discussed in terms of their physical impact on the flow parameters.
Keywords: fluid transients, pipeline leaks, method of characteristics, leakage problem
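A minimal sketch of such a characteristics scheme follows, assuming the standard single-phase water-hammer form of the method of characteristics with an orifice-type leak at one node; the paper's bubbly-flow model adds a gas mass-conservation equation that is not reproduced here.

```python
import math

def moc_step(H, Q, B, R, leak_node, Cd_A, g=9.81):
    """One method-of-characteristics time step over the interior nodes.
    H, Q: head [m] and flow [m^3/s] at each grid node; B = a/(g*A);
    R: lumped friction coefficient. At the leak node, the orifice law
    Q_leak = Cd_A * sqrt(2*g*H) enters the junction flow balance."""
    n = len(H)
    Hn, Qn = H[:], Q[:]
    for i in range(1, n - 1):
        # Characteristic invariants carried along C+ and C- from neighbours.
        Cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
        Cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
        if i == leak_node:
            # Flow balance (Cp - H)/B - (H - Cm)/B = Cd_A*sqrt(2*g*H),
            # solved for H >= 0 by bisection (f decreases with H).
            lo, hi = 0.0, max(Cp, Cm, 1.0)
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                f = (Cp - mid) / B - (mid - Cm) / B - Cd_A * math.sqrt(2 * g * mid)
                if f > 0:
                    lo = mid
                else:
                    hi = mid
            Hn[i] = 0.5 * (lo + hi)
            Qn[i] = (Cp - Hn[i]) / B  # flow arriving from the upstream side
        else:
            Hn[i] = 0.5 * (Cp + Cm)
            Qn[i] = (Cp - Cm) / (2 * B)
    # Boundary nodes 0 and n-1 are kept fixed here for brevity.
    return Hn, Qn
```

With the leak coefficient set to zero, the leak node reduces to an ordinary interior node, which makes the scheme easy to sanity-check.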
Procedia PDF Downloads 479
6674 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to reduce commute times for short-distance trips and relieve traffic in large cities, a new transport category has become the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute through vehicles that take off and land vertically, providing passenger transport equivalent to a car, with mobility within and between large cities. Today’s civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. In this scenario, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost, and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, it remains a challenge to find a feasible way to rely on batteries rather than conventional petroleum-based fuels. Batteries are heavy, and their energy density is still well below that of gasoline, diesel, or kerosene. Therefore, despite their clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of a genetic optimization algorithm, and the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases.
For a given trajectory, the best set of control variables is calculated to provide the time-history response of the aircraft's attitude, rotor RPM, and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort, and design constraints are imposed to keep the solution representative, and results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10% when varying initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
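The genetic-algorithm optimization can be sketched on a toy model. The dynamics and power model below are placeholders, not the authors' tilt-rotor equations of motion, and the constraints are reduced to a single soft-landing penalty; the sketch only shows the encode-select-crossover-mutate loop.

```python
import random

random.seed(0)

# Toy descent model: choose thrust fractions u[0..N-1] in [0, 1] along a
# fixed path. "Energy" grows with thrust squared, and a penalty enforces a
# soft touchdown (near-zero sink rate). Placeholder physics only.
N, DT, V0, G = 20, 1.0, -8.0, 9.81   # steps, step [s], initial sink [m/s]

def cost(u):
    v, energy = V0, 0.0
    for ui in u:
        v += (2.0 * G * ui - G) * DT   # net accel: up to 2 g of thrust
        energy += ui ** 2 * DT         # quadratic power proxy
    return energy + 50.0 * abs(v)      # penalize touchdown speed

def ga(pop_size=60, gens=120, mut=0.1):
    pop = [[random.random() for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[:pop_size // 4]            # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N)       # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N):                 # gaussian mutation, clipped
                if random.random() < mut:
                    child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = ga()
```

In the paper's setting the chromosome would encode the full control time history (attitude, rotor RPM, thrust direction) and the cost would come from the fitted equations of motion.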
Procedia PDF Downloads 115
6673 Supply Chain Control and Inventory Management in Garment Industry
Authors: Nisa Nur Duman, Sümeyya Kiliç
Abstract:
Under global competition, plants survive and obtain competitive advantage through the effective use of existing resources. In this way, plants can minimize their costs without losing quality, gain an advantage over their competitors, and enlarge their customer portfolio while increasing profit margins. The changing structure of markets and customer demands also changes the structure of competition between companies; indeed, competition is no longer only between individual companies, so supply chains and supply chain management become important determinants of company performance. Companies that want to survive search for ways to decrease costs and meet customer expectations. One of the important tools for reaching these goals is inventory management. An effective inventory management system meets demand while taking plant goals into account.
Keywords: supply chain, inventory management, apparel sector, garment industry
Procedia PDF Downloads 370
6672 Linguistic Cyberbullying, a Legislative Approach
Authors: Simona Maria Ignat
Abstract:
Online bullying has been an increasingly studied topic during the last years, with psychological, linguistic, and computational approaches all applied. To the best of our knowledge, an internationally agreed definition and set of characteristics of the phenomenon as a common framework are still lacking. The objectives of this paper are therefore the identification of bullying utterances on Twitter and of the algorithms underlying them. This research is focused on the identification of words or groups of words, categorized as “utterances”, with bullying effect on the Twitter platform, extracted according to a set of legislative criteria. This set is the result of analysis and synthesis of law documents on (online) bullying from the United States, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The methods applied to the first objective were as follows. Discourse analysis was applied to identify keywords with bullying effect in texts from the Google search engine (Images link). Transcription and anonymization were applied to texts grouped in CL1 (Corpus Linguistics 1). The keyword-search method and the legislative criteria were used to identify bullying utterances on Twitter. Texts with at least 30 representations on Twitter were grouped; they form the second corpus, Bullying Utterances from Twitter (CL2). The entries were identified by applying the legislative criteria on the bag-of-words (BoW) principle, the BoW being a method of extracting words or groups of words with the same meaning in any context. The method applied for the second objective is the conversion of parts of speech into alphabetical and numerical symbols and the writing of bullying utterances as algorithms. The converted form of the parts of speech was chosen on the criterion of relevance within the bullying message.
The inductive reasoning approach was applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form, and the content or meaning. The form conveys intentional intimidation of somebody, expressed at the level of texts by grammatical and lexical marks. This outcome is applicable in forensic linguistics for establishing the intentionality of an action. Another outcome of form is a complex of graphemic variations essential in detecting harmful texts online; this research enriches the lexicon already known on the topic. The second aspect, the content, revealed topics like threat, harassment, assault, or suicide. These are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topic outcomes of the dataset are a valuable resource for detection, and the analysis of content revealed algorithms and lexicons which could be applied to other harmful content. A third outcome of content concerns stylistics, a rich source for discourse analysis of social media platforms. In conclusion, this linguistic corpus is structured on legislative criteria and could be used in various fields.
Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, twitter
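The keyword/BoW filtering step against a lexicon can be sketched as follows. The lexicon entries and the representation threshold are illustrative assumptions, since the CL1/CL2 corpora are not reproduced here (the paper used a threshold of at least 30 representations on Twitter).

```python
import re
from collections import Counter

# Illustrative stand-ins for the legislatively derived bullying utterances.
LEXICON = {"nobody likes you", "go away loser", "you are worthless"}

def normalize(text):
    """Lowercase, replace punctuation with spaces, collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", " ", text.lower())).strip()

def find_utterances(tweets, min_count=2):
    """Count lexicon hits across tweets and keep utterances that meet the
    representation threshold."""
    counts = Counter()
    for t in tweets:
        nt = normalize(t)
        for phrase in LEXICON:
            if phrase in nt:
                counts[phrase] += 1
    return {p: c for p, c in counts.items() if c >= min_count}
```

Normalization before matching is what lets graphemic variants (punctuation, casing) collapse onto the same lexicon entry.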
Procedia PDF Downloads 86
6671 Cup-Cage Construct for Treatment of Severe Acetabular Bone Loss in Revision Total Hip Arthroplasty: Midterm Clinical and Radiographic Outcomes
Authors: Faran Chaudhry, Anser Daud, Doris Braunstein, Oleg Safir, Allan Gross, Paul Kuzyk
Abstract:
Background: Acetabular reconstruction in the context of massive acetabular bone loss is challenging. In rare scenarios where the extent of bone loss precludes shell placement (cup-cage), reconstruction at our center consisted of a cage combined with highly porous metal augments. This study evaluates the survivorship, complications, and functional outcomes of this technique. Methods: A total of 131 cup-cage implants (129 patients) were included in our retrospective review of revision total hip arthroplasties from January 2003 to January 2022. Among these cases, 100/131 (76.3%) were women, the mean age at surgery was 68.7 years (range, 29.0 to 92.0; SD, 12.4), and the mean follow-up was 7.7 years (range, 0.02 to 20.3; SD, 5.1). Kaplan-Meier survivorship analysis was conducted, with failure defined as revision surgery and/or failure of the cup-cage reconstruction. Results: A total of 30 implants (23%) reached the study endpoint of all-cause revision. Overall survivorship was 74.8% at 10 years and 69.8% at 15 years. Reasons for revision included infection, 12/131 (9.1%); dislocation, 10/131 (7.6%); aseptic loosening of the cup and/or cage, 5/131 (3.8%); and aseptic loosening of the femoral stem, 2/131 (1.5%). The mean leg-length discrepancy improved from 12.2 ± 15.9 mm to 3.9 ± 11.8 mm (p<0.05). The horizontal and vertical hip centres on plain-film radiographs were significantly improved (p<0.05). Functionally, fewer patients required gait aids, with 34 (25.9%) using a cane, walker, or wheelchair post-operatively compared to 58 (44%) pre-operatively. There was a significant increase in the number of independent ambulators, from 24 to 47 (36%). Conclusion: The cup-cage construct is a reliable option for the treatment of various acetabular defects.
It shows favourable survivorship and clinical and radiographic outcomes, with a satisfactory complication rate.
Keywords: revision total hip arthroplasty, acetabular defect, pelvic discontinuity, trabecular metal augment, cup-cage
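The Kaplan-Meier survivorship analysis used in this study can be sketched with a minimal estimator; the follow-up times and events below are illustrative, not the study's data.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survivorship estimate.
    times: follow-up (e.g., years); events: 1 = failure/revision, 0 = censored.
    Returns a list of (time, survival probability) steps."""
    data = sorted(zip(times, events))
    n = len(data)                  # subjects still at risk
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        leaving = [e for tt, e in data if tt == t]  # all leaving at time t
        d = sum(leaving)                            # failures at time t
        if d > 0:
            surv *= 1.0 - d / n    # multiply by conditional survival
            curve.append((t, surv))
        n -= len(leaving)
        i += len(leaving)
    return curve
```

Censored follow-up (patients lost or still unrevised) shrinks the risk set without forcing a survival drop, which is why this estimator suits revision-arthroplasty data with very uneven follow-up.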
Procedia PDF Downloads 67
6670 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. 
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
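The surrogate-plus-optimizer loop described above can be sketched as follows. The random forest here acts as a cheap surrogate for an electromagnetic solver; the figure-of-merit function and thickness ranges are synthetic placeholders, and a simple random search stands in for the genetic algorithm.

```python
import random
from sklearn.ensemble import RandomForestRegressor

random.seed(1)

# Placeholder figure of merit for a four-layer (SiC/W/SiO2/W) stack as a
# function of layer thicknesses in nm; a real design would score candidates
# with an electromagnetic solver (e.g., transfer-matrix calculations).
def figure_of_merit(t):
    return -sum((x - ref) ** 2 for x, ref in zip(t, (60.0, 10.0, 120.0, 200.0)))

# Train a random forest surrogate on random samples of the design space.
X = [[random.uniform(5.0, 300.0) for _ in range(4)] for _ in range(400)]
y = [figure_of_merit(t) for t in X]
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Surrogate-guided search: score many candidates cheaply with the forest,
# then verify only the most promising one with the "true" model.
candidates = [[random.uniform(5.0, 300.0) for _ in range(4)] for _ in range(5000)]
scores = surrogate.predict(candidates)
best = candidates[max(range(len(candidates)), key=lambda i: scores[i])]
```

In the paper's workflow a genetic algorithm would replace the random candidate sampling, but the division of labor (forest predicts, optimizer proposes) is the same.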
Procedia PDF Downloads 61
6669 Mood Symptom Severity in Service Members with Posttraumatic Stress Symptoms after Service Dog Training
Authors: Tiffany Riggleman, Andrea Schultheis, Kalyn Jannace, Jerika Taylor, Michelle Nordstrom, Paul F. Pasquina
Abstract:
Introduction: Posttraumatic Stress (PTS) and Posttraumatic Stress Disorder (PTSD) remain significant problems for military and veteran communities. Symptoms of PTSD often include poor sleep, intrusive thoughts, difficulty concentrating, and trouble with emotional regulation. Unfortunately, despite its high prevalence, service members diagnosed with PTSD often do not seek help, usually because of the perceived stigma surrounding behavioral health care. To help address these challenges, non-pharmacological therapeutic approaches are being developed to improve care and enhance compliance. The Service Dog Training Program (SDTP), which involves teaching patients how to train puppies to become mobility service dogs, has been successfully implemented in PTS/PTSD care programs, with anecdotal reports of improved outcomes. This study was designed to assess the biopsychosocial effects of SDTP in military beneficiaries with PTS symptoms. Methods: Individuals between the ages of 18 and 65 with PTS symptoms were recruited to participate in this prospective study. Each subject completed 4 weeks of baseline testing, followed by 6 weeks of active service dog training (two one-hour sessions per week) with a professional service dog trainer. Outcome measures included the Posttraumatic Stress Checklist for the DSM-5 (PCL-5), Generalized Anxiety Disorder questionnaire-7 (GAD-7), Patient Health Questionnaire-9 (PHQ-9), social support/interaction, anthropometrics, blood/serum biomarkers, and qualitative interviews. Preliminary analysis of 17 participants examined mean scores on the GAD-7, PCL-5, and PHQ-9 pre- and post-SDTP, and changes were assessed using Wilcoxon Signed-Rank tests. Results: Post-SDTP, there was a statistically significant mean decrease in PCL-5 scores of 13.5 on the 80-point scale (p=0.03) and a significant mean decrease of 2.2 in PHQ-9 scores on the 27-point scale (p=0.04), suggestive of decreased PTSD and depression symptoms.
While there was a decrease in mean GAD-7 scores post-SDTP, the difference was not significant (p=0.20). Recurring themes in the qualitative interviews included decreased pain, forgetting about stressors, an improved sense of calm, increased confidence, improved communication, and establishing a connection with the service dog. Conclusion: Preliminary results from the first 17 participants suggest that individuals who received SDTP had a statistically significant decrease in PTS symptoms, as measured by the PCL-5 and PHQ-9. This ongoing study seeks to enroll a total of 156 military beneficiaries with PTS symptoms. Future analyses will include additional psychological outcomes, pain scores, blood/serum biomarkers, and other measures of the social aspects of PTSD, such as relationship satisfaction and sleep hygiene.
Keywords: post-concussive syndrome, posttraumatic stress, service dog, service dog training program, traumatic brain injury
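The pre/post comparison with a Wilcoxon signed-rank test can be reproduced on illustrative data using SciPy; the scores below are synthetic, not the study's.

```python
from scipy.stats import wilcoxon

# Illustrative pre/post PCL-5 scores for a small cohort (synthetic data;
# the abstract reports a mean decrease of 13.5 points in the real cohort).
pre  = [55, 48, 62, 41, 58, 50, 66, 45, 53, 60]
post = [40, 39, 45, 38, 46, 42, 50, 40, 39, 47]

stat, p = wilcoxon(pre, post)   # paired, non-parametric test on differences
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
```

The signed-rank test is the natural choice here because the samples are paired (same subject before and after) and small, with no normality assumption on the score differences.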
Procedia PDF Downloads 113
6668 Heavy Oil Recovery with Chemical Viscosity-Reduction: An Innovative Low-Carbon and Low-Cost Technology
Authors: Lin Meng, Xi Lu, Haibo Wang, Yong Song, Lili Cao, Wenfang Song, Yong Hu
Abstract:
China has abundant heavy oil resources, and thermal recovery is the main recovery method for heavy oil reservoirs. However, high energy consumption, high carbon emissions, and high production costs make heavy oil thermal recovery unsustainable, and it is urgent to explore replacement development technologies. A low-carbon, low-cost heavy oil recovery technology, chemical viscosity reduction in the layer (CVRL), has been developed by the Petroleum Exploration and Development Research Institute of Sinopec by investigating mechanisms, synthesizing products, and improving oil production technologies, as follows: (1) A cascade viscosity mechanism of heavy oil was proposed: asphaltenes and resins grow from free molecules into associative structures, and further into bulk aggregations, through π-π stacking and hydrogen bonding, which causes the high viscosity of heavy oil. (2) To break the π-π stacking and hydrogen bonds of heavy oil, a copolymer of N-(3,4-dihydroxyphenethyl)acrylamide and 2-acrylamido-2-methylpropane sulfonic acid was synthesized as a viscosity reducer. It achieves a viscosity reduction rate of >80% without shearing for heavy oil (viscosity < 50,000 mPa·s), whose fluidity in the layer is markedly improved. (3) A hydroxymethyl acrylamide-maleic acid-decanol ternary copolymer self-assembling plugging agent was synthesized. Its particle size is adjustable from 0.1 μm to 2 mm and its volume expansion is controllable from 10 to 500 times, enabling efficient transport of the viscosity reducer to oil-enriched areas. CVRL has been applied in 400 wells to date, increasing oil production by 470,000 tons, saving 81,000 tons of standard coal, reducing CO₂ emissions by 174,000 tons, and reducing production costs by 60%. It promotes the transformation of heavy oil development towards low energy consumption, low carbon emissions, and low cost.
Keywords: heavy oil, chemical viscosity-reduction, low carbon, viscosity reducer, plugging agent
Procedia PDF Downloads 73
6667 Understanding Health Behavior Using Social Network Analysis
Authors: Namrata Mishra
Abstract:
A person's health plays a vital role in the collective health of their community and hence in the well-being of society as a whole. In today's fast-paced, technology-driven world, health issues are increasingly associated with human behaviors, that is, with lifestyle. Social networks have a tremendous impact on the health behavior of individuals. Many researchers have used social network analysis to understand human behavior as it relates to social and economic environments, and a similar analysis can be used to understand human behaviors that have health implications. This paper focuses on behavioral analyses with health implications using social network analysis and provides possible algorithmic approaches. The results of these approaches can be used by governing authorities for rolling out health plans and benefits and taking preventive measures, while pharmaceutical companies can target specific markets and health insurance companies can better model their insurance plans.
Keywords: breadth first search, directed graph, health behaviors, social network analysis
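One of the algorithmic approaches named in the keywords, breadth-first search on a directed graph, can be sketched as follows; the toy network and the k-hop "exposure" interpretation are illustrative assumptions.

```python
from collections import deque

def within_k_hops(graph, source, k):
    """Breadth-first search on a directed graph: return all nodes reachable
    from `source` in at most k hops, e.g., the people a health message can
    reach through k steps of social ties."""
    seen = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if seen[node] == k:        # do not expand beyond k hops
            continue
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return {n for n, d in seen.items() if 0 < d <= k}

# Toy directed "influences" network.
network = {"ana": ["ben", "cai"], "ben": ["dia"], "cai": ["dia", "eve"],
           "dia": ["fay"], "eve": []}
```

BFS gives shortest-hop distances, so the same traversal also yields the reach of an intervention per hop, a quantity health planners can compare across seed individuals.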
Procedia PDF Downloads 471
6666 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi
Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza
Abstract:
Background: Data is key to supporting health service delivery, as stakeholders, including NGOs, rely on it for effective service delivery, decision-making, and system strengthening. Several studies have raised concerns about the quality of data from national health management information systems (HMIS) in sub-Saharan Africa. Poor quality limits the utilization of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate the improvement in data quality of the Neno district HMIS over a 4-year period (2018-2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine report rates, followed by in-depth Key Informant Interviews (KIIs) and Focus Group Discussions (FGDs). We used the WHO desk-review module to assess the quality of Neno district HMIS data captured from 2018 to 2021. The metrics assessed were the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports, and timeliness as the percentage of reports submitted by the expected deadline. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016. We analyzed demographics for the key informant interviews in Power BI. We developed themes from 7 FGDs and 11 KIIs using Dedoose software, extracting healthcare workers' perceptions, interventions implemented, and improvement suggestions. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% before and 98.1% after the intervention, while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 2018, 78.8%; 2019, 88%; 2020, 96.3%; and 2021, 99.9% (p<0.004).
Timeliness had been declining until 2021, when it improved: 2018, 68.4%; 2019, 68.3%; 2020, 67.1%; and 2021, 81% (p<0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, while timeliness increased from 68% to 81%. Sixty-five percent of reports consistently met the national standard of 90%+ in completeness, but only 24% did so in timeliness; thirty-two percent of reports met the national standard. Only 9% improved in both completeness and timeliness, and these were the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard in timeliness, and only one did not in completeness. Factors associated with improvement included better communication and reminders through internal channels, data quality assessments, checks, and reviews. Decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: The findings suggest that data quality in the district's HMIS has improved following collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps, and that results be shared publicly to support increased use of data. These results can inform the Ministry of Health and its partners about effective interventions and guide initiatives for improving data quality.
Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making
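The completeness and timeliness metrics used in this study reduce to simple percentages over the reports expected in a period; the field names and counts below are illustrative, not the district's data.

```python
def reporting_rates(reports, expected):
    """Completeness: share of expected reports actually received.
    Timeliness: share of expected reports received by the deadline.
    `reports` is a list of dicts with 'received' and 'on_time' flags;
    `expected` is the number of reports due in the period."""
    received = sum(1 for r in reports if r["received"])
    on_time = sum(1 for r in reports if r["received"] and r["on_time"])
    completeness = 100.0 * received / expected
    timeliness = 100.0 * on_time / expected
    return completeness, timeliness

# Example period: 34 reports due, 33 arrive, 27 of them by the deadline.
reports = ([{"received": True, "on_time": True}] * 27
           + [{"received": True, "on_time": False}] * 6
           + [{"received": False, "on_time": False}] * 1)
c, t = reporting_rates(reports, expected=34)
```

Note that both metrics share the same denominator (reports expected), which is why timeliness can lag completeness even when nearly every report eventually arrives.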
Procedia PDF Downloads 84