Search results for: improved Canny algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7823

5633 Impact of Social Crisis on Property Market Performance and Evolving Strategy for Improved Property Transactions in Crisis Prone Environment: A Case Study of North Eastern Nigeria

Authors: Abdur Raheem, Ado Yakub

Abstract:

Urban violence in the form of ethnic and religious conflict has been on the increase in many African cities in recent years, most of it the result of intense and bitter competition for political power and for control of limited economic, social and environmental resources. In Nigeria, the emergence of the Boko Haram insurgency in much of the north-east has ignited violence, bloodshed, refugee exodus and internal migration. Not only do the sect's persistent attacks create widespread insecurity and fear, they have also stifled normal processes of trade and investment, most especially real property investment, which is acclaimed to accelerate the economic cycle; hence the need to evolve strategies for an improved property market in such areas. This paper therefore examines the impact of these social crises on the effective and efficient utilization of real property as a resource for the development of the economy, using a descriptive analysis approach with particular emphasis on trends in residential housing values, the volume of estimated property transactions, and real estate investment decisions by affected individuals. Findings indicate that social crises in the affected areas have been a clog on the wheels of property development and investment, as properties worth hundreds of millions have been destroyed, with great impact on property values. Based on these findings, recommendations are made, including the need to continue investing strategically in property during such times, the need for the Nigerian government to establish an active conflict monitoring and management unit for prompt response, and the encouragement of community and neighbourhood policing to ameliorate security challenges in Nigeria.

Keywords: social crisis, property market, economy, resources, north-eastern Nigeria

Procedia PDF Downloads 318
5632 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis

Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan

Abstract:

Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated over social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of Big Data technology into the tourism industry allows companies to learn where their customers have been and what they like. This information can then be used by businesses, such as those managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. Natural language is processed by analysing the sentiment features of online reviews from tourists, and we propose an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. For experimental validation, we constructed a web review database using a crawler and web scraping techniques to evaluate the effectiveness of our methodology. The text of each review was first classified through the VADER and RoBERTa models to obtain its polarity. In this paper, we studied feature extraction methods such as Count Vectorization and TF-IDF Vectorization and implemented a Convolutional Neural Network (CNN) classifier for sentiment analysis, deciding whether a tourist's attitude towards a destination is positive, negative, or neutral based on the review text posted online. The results demonstrated that, after pre-processing and cleaning the dataset, the CNN algorithm achieved an accuracy of 96.12% for positive and negative sentiment analysis.
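The TF-IDF weighting step mentioned above can be sketched in a few lines. This is an illustrative pure-Python version (the function name and the toy review tokens are invented for demonstration, not taken from the authors' pipeline, which uses library vectorizers):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.
    TF is the term count normalised by document length; IDF is
    log(N / df), so a term appearing in every review gets weight 0."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))           # document frequency per term
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

# Hypothetical tourist reviews, already tokenized and cleaned.
reviews = [["great", "view", "great", "hotel"],
           ["poor", "service", "hotel"],
           ["great", "service"]]
w = tf_idf(reviews)
# "great" in review 0: tf = 2/4, df = 2 of 3 docs, idf = log(3/2)
```

In a real pipeline these weight vectors would feed the CNN classifier in place of raw token counts.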

Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis

Procedia PDF Downloads 81
5631 Salient Beliefs regarding Alcohol Reduction and Cessation among Thai Teenagers

Authors: Panrapee Suttiwan, Rewadee Watakakosol, Arunya Tuicomepee, Sakkaphat T. Ngamake

Abstract:

Alcohol consumption ranks among the top six health-risk behaviors that lead to disability and death among Thai teenagers. Underage drinkers have higher health risks than their non-drinking peers. This study therefore aimed to explore salient beliefs of Thai teenagers about alcohol reduction and cessation, based on the Theory of Planned Behaviour framework. Participants were 225 high-school and vocational-school students, most of whom (60.9%) consumed alcohol almost daily (5-6 times/week) and one-third of whom (33.8%) reported habitual moderate drinking. The average age was 16.5 (SD = 0.9), and the average age at first use of alcohol was 13.7 (SD = 2.2). The instrument was an open-ended questionnaire that elicited beliefs about alcohol reduction/cessation in the past 12 months. Findings revealed salient benefit beliefs about alcohol reduction/cessation among the teens, such as improved physical and mental health, avoidance of accidents and violence, fewer sexual risks, money and time savings, better academic performance, and improved relationships. In contrast, the teens identified several disadvantage beliefs, such as deteriorating health, social awkwardness, loss of fun, excitement, and experience, physical uneasiness, stress, and lack of self-confidence. Salient normative groups for alcohol reduction/cessation included parents, elder relatives, siblings, close friends, teachers, boyfriends/girlfriends, and seniors/juniors at school. Situations influencing alcohol reduction/cessation included quarrels with boyfriends/girlfriends, family conflicts, peer pressure, partying and socializing, festive holidays and anniversary celebrations, and visits to entertainment venues. This study provides empirical evidence that helps to identify normative attitudes towards alcohol reduction/cessation and may thus be important knowledge for public health campaigns seeking to reduce alcohol consumption in this population.

Keywords: alcohol consumption reduction, cessation, salient belief, Thai teenagers

Procedia PDF Downloads 324
5630 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The world wide web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient and a short average distance. Modeling the web as a graph makes it possible to locate information quickly and consequently helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs that exhibits all the aforesaid characteristics. This work consists of studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
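The power-law degree distribution that such web models must reproduce typically arises from a preferential-attachment growth rule. The following is an illustrative Barabási-Albert-style generator, not the authors' specific probabilistic model:

```python
import random
from collections import Counter

def preferential_attachment(n, m, seed=0):
    """Grow a graph node by node; each newcomer links to m distinct
    existing nodes chosen with probability proportional to degree,
    which yields hub nodes and a heavy-tailed degree distribution."""
    rng = random.Random(seed)
    repeated = []              # node i appears once per edge endpoint
    edges = []
    targets = list(range(m))   # seed nodes for the first newcomer
    for new in range(m, n):
        for t in targets:
            edges.append((new, t))
            repeated += [new, t]
        targets = []           # degree-proportional sampling for next step
        while len(targets) < m:
            t = rng.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges

edges = preferential_attachment(200, 2)
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
# early nodes accumulate far more links than the average node
```

Sampling uniformly from the `repeated` list is the standard trick for degree-proportional selection without recomputing probabilities each step.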

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 269
5629 Ethyl Methane Sulfonate-Induced Dunaliella salina KU11 Mutants Affected for Growth Rate, Cell Accumulation and Biomass

Authors: Vongsathorn Ngampuak, Yutachai Chookaew, Wipawee Dejtisakdi

Abstract:

Dunaliella salina has great potential as a system for generating commercially valuable products, including beta-carotene, pharmaceuticals, and biofuels. Our goal is to improve this potential by enhancing the growth rate and other properties of D. salina under optimal growth conditions. We used ethyl methane sulfonate (EMS) to generate random mutants in D. salina KU11, a strain classified in Thailand. In a preliminary experiment, we treated D. salina cells with 0%, 0.8%, 1.0%, 1.2%, 1.44% and 1.66% EMS to generate a killing curve. We then randomly picked 30 candidates from approximately 300 isolated survivor colonies from the 1.44% EMS treatment (which permitted 30% survival) as an initial test of the mutant screen. Among the 30 survivor lines, 2 strains (mutants #17 and #24) had significantly improved growth rates and cell number accumulation at stationary phase, by approximately 1.8- and 1.45-fold, respectively; 2 strains (mutants #6 and #23) had significantly decreased growth rates and cell number accumulation at stationary phase, by approximately 1.4- and 1.35-fold, respectively; while the remaining 26 lines had growth rates similar to the wild-type control. We also analyzed cell size for each strain and found no significant difference between any mutant and the wild type. In addition, mutant #24 showed an approximately 1.65-fold increase in biomass accumulation compared with the wild-type strain on day 5, as cultures entered early stationary phase. These preliminary results suggest it is feasible to identify D. salina mutants with significantly improved growth rate, cell accumulation and biomass production compared to the wild type for further study; this makes it possible to improve this microorganism as a platform for biotechnology applications.

Keywords: Dunaliella salina, ethyl methane sulfonate, growth rate, biomass

Procedia PDF Downloads 236
5628 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico's population has seen a rise in various negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, which have an important impact on a person's physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person's emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive sensors, such as EEG, luminosity and face recognition, we gather information on the subject's current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed to filter only the groups of information that relate to the subject's emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural network based control algorithm is fed the data obtained in order to give the system feedback and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject's emotional state. During the research, it was found that light color was directly related to the type of impact generated by the audiovisual content on the subject's emotional state. Red illumination increased the impact of violent images, while green illumination along with relaxing images decreased the subject's level of anxiety. Specific differences between men and women were found as to which types of images generated a greater impact in either gender.
The population sample was mainly constituted by college students whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us a better insight into the possibilities of emotional domotics and the applications that can be created towards the improvement of performance in people’s lives. The objective of this research is to create a positive impact with the application of technology to everyday activities; nonetheless, an ethical problem arises since this can also be applied to control a person’s emotions and shift their decision making.
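The feedback step described above, sensed emotional features in, environment set-points out, can be sketched as a forward pass through a small sigmoid network. This is purely illustrative: the weights, the single "anxiety" feature and the green-light output below are invented, not the authors' trained network:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def control_step(features, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: emotional features (e.g. a normalised
    anxiety estimate from EEG/face data) in, environment set-points
    (e.g. green-light intensity in [0, 1]-ish range) out."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(w_out, b_out)]

# Hand-set toy weights: higher anxiety -> more green light.
green = control_step([0.9], [[2.0]], [0.0], [[1.0]], [0.0])[0]
```

In the actual system, such weights would be learned from the statistically filtered sensor data rather than set by hand.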

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 137
5627 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
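The additive aggregation the abstract relies on can be sketched as follows: per-window sufficient statistics (count, sum, sum of squares) merge 2x2 blocks in one pass per doubling, and the variance falls out of the merged sums. This is a simplified sketch of the variance half only; the full method also aggregates the cross-terms needed for regression slopes:

```python
def aggregate_2x2(stats):
    """Merge each non-overlapping 2x2 block of per-window statistics
    (n, sum_z, sum_z2) into one window at the next length scale.
    Because the statistics are additive, each doubling is one pass."""
    out = []
    for i in range(0, len(stats), 2):
        row = []
        for j in range(0, len(stats[0]), 2):
            n = s = s2 = 0.0
            for a in range(2):
                for b in range(2):
                    cn, cs, cs2 = stats[i + a][j + b]
                    n, s, s2 = n + cn, s + cs, s2 + cs2
            row.append((n, s, s2))
        out.append(row)
    return out

def variance(stat):
    """Variance from sufficient statistics: E[z^2] - E[z]^2."""
    n, s, s2 = stat
    return s2 / n - (s / n) ** 2

# Toy 2x2 "DEM": each cell starts as (count=1, z, z^2).
dem = [[1.0, 2.0], [3.0, 4.0]]
stats = [[(1.0, z, z * z) for z in row] for row in dem]
merged = aggregate_2x2(stats)[0][0]   # one window covering all four cells
```

Storing `(n, s, s2)` at every scale is what lets the method later report the slope at whichever scale minimized variance.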

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 122
5626 Effect of a New Released Bio Organic-Fertilizer in Improving Tomato Growth in Hydroponic System and Under Greenhouse

Authors: Zayneb Kthiri, Walid Hamada

Abstract:

The application of organic fertilizers is generally known to sustain soil fertility and plant growth, especially in poor soils with less than 1% organic matter, as is very common in Tunisian fields. We therefore focused on evaluating the effect of a newly released liquid organic fertilizer named Solorga (with 5% organic matter), compared to a reference product (Espartan; Kimitec, Spain), on tomato plant growth and physiology. Both fertilizers, derived from plant decomposition, were applied at an early stage in a hydroponic system and under greenhouse conditions. In the hydroponic system, 14 days after application by root feeding, a significant difference was observed between treatments. Indeed, Solorga improved shoot and root length as well as biomass, with increase rates of 45%, 27% and 27.8%, respectively, compared to control plants, whereas Espartan induced the measured parameters to a lesser extent relative to the untreated control. Moreover, Solorga significantly increased the chlorophyll content by 42% compared to the control and by 32% compared to Espartan. In the greenhouse, 20 days after treatment, the results showed a significant effect of both fertilizers on the SPAD index and the number of blossoming flowers. Solorga increased the amount of chlorophyll in the leaf by 7% compared to Espartan, as well as plant height under the greenhouse. Moreover, the number of blossoming flowers increased by 15% in plants treated with Solorga compared to Espartan, whereas there was no notable difference between the two organic fertilizers in fruit set or the number of fruits per blossom. In conclusion, even though the two fertilizers differ in organic matter content, Solorga improved plant growth more effectively than Espartan under controlled conditions in the hydroponic system.
Altogether, the results obtained are encouraging for the use of Solorga as a soil-enriching source of organic matter to help plants boost their growth and overcome abiotic stresses linked to soil fertility.

Keywords: tomato, plant growth, organic fertilizer, hydroponic system, greenhouse

Procedia PDF Downloads 127
5625 Microencapsulation of Tuna Oil and Mentha Piperita Oil Mixture using Different Combinations of Wall Materials with Whey Protein Isolate

Authors: Amr Mohamed Bakry Ibrahim, Yingzhou Ni, Hao Cheng, Li Liang

Abstract:

Tuna oil (an omega-3 oil) has become increasingly popular in the last ten years because it is considered one of the treasures of food, with many beneficial health effects for humans. Nevertheless, the susceptibility of omega-3 oils to oxidative deterioration, resulting in the formation of oxidation products, in addition to organoleptic problems including "fishy" flavors, has presented obstacles to the more widespread use of tuna oils in the food industry. This study sought to evaluate the potential impact of Mentha piperita oil on the physicochemical characteristics and oxidative stability of tuna oil microcapsules formed by spray drying, using partial substitution of whey protein isolate by carboxymethyl cellulose and pullulan. The emulsions were characterized before drying in terms of size, ζ-potential, viscosity and surface tension. Confocal laser scanning microscopy showed that all emulsions were spherical and homogeneously distributed, without any visible particle aggregation. The microcapsules obtained after spray drying were characterized in terms of microencapsulation efficiency, water activity, color, bulk density, flowability, surface morphology (by scanning electron microscopy) and oxidative stability. The microcapsules were spherical and had low water activity (0.11-0.23 aw). The microcapsules containing both tuna oil and Mentha piperita oil were smaller than the others, and the addition of pullulan to the wall materials improved the morphology of the microcapsules. The microencapsulation efficiency of the powdered oil ranged from 90% to 94%. Using Mentha piperita oil in the microencapsulation of tuna oil enhanced the oxidative stability, whether whey protein isolate was used alone or with carboxymethyl cellulose or pullulan as wall materials, resulting in improved storage stability and a masked fishy odor. Tuna-Mentha piperita oil mixture microcapsules are therefore foreseen for use in food industry applications.

Keywords: Mentha piperita oil, microcapsule, tuna oil, whey protein isolate

Procedia PDF Downloads 343
5624 Nanoprecipitation with Ultrasonication for Enhancement of Oral Bioavailability of Furosemide: Pharmacokinetics and Pharmacodynamics Study in Rat Model

Authors: Malay K. Das, Bhanu P. Sahu

Abstract:

Furosemide is a weakly acidic diuretic indicated for the treatment of edema and hypertension. It has very poor solubility but high permeability through the stomach and upper gastrointestinal tract (GIT). Due to its limited solubility, it has poor and variable oral bioavailability of 10-90%. The aim of this study was to enhance the oral bioavailability of furosemide by preparing nanosuspensions. The nanosuspensions were prepared by nanoprecipitation with sonication, using DMSO (dimethyl sulfoxide) as the solvent and water as the antisolvent. The prepared nanosuspensions were sterically stabilized with polyvinyl acetate (PVA) and characterized for particle size, ζ-potential, polydispersity index, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), X-ray diffraction (XRD) pattern and release behavior. The effect of nanoprecipitation on the oral bioavailability of the furosemide nanosuspension was studied by in vitro dissolution and in vivo absorption in rats and compared to the pure drug. A stable nanosuspension was obtained, with an average particle size of 150-300 nm, and was found to be homogeneous, showing a narrow polydispersity index of 0.3±0.1. DSC and XRD studies indicated that the crystalline furosemide was converted to the amorphous form upon precipitation into nanoparticles. The release profile of the nanosuspension formulation showed up to 81.2% release in 4 h. The in vivo studies in rats revealed a significant increase in the oral absorption of furosemide from the nanosuspension compared to the pure drug. The AUC0→24 and Cmax values of the nanosuspension were approximately 1.38- and 1.68-fold greater than those of the pure drug, respectively. The furosemide nanosuspension produced a 20.06±0.02% decrease in systolic blood pressure, compared to 13.37±0.02% for the plain furosemide suspension.
The improved oral bioavailability and pharmacodynamic effect of furosemide may be due to its improved dissolution in simulated gastric fluid, which results in enhanced systemic absorption from the stomach region, where furosemide has better permeability.

Keywords: furosemide, nanosuspension, bioavailability enhancement, nanoprecipitation, oral drug delivery

Procedia PDF Downloads 568
5623 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study uses Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models to try to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns and label unseen scans as either benign or malignant. The models are combined in a multiplicative weight update algorithm that accounts for the precision and accuracy of each model through each successive prediction to assign it a weight; the weighted predictions are then aggregated to obtain a final prediction. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%, while the Multiplicative Weight Update algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there was indeed a significant difference, and that a CNN model would be a better option for this problem than a Multiplicative Weight Update system. One possible explanation is that Multiplicative Weight Update is less effective in a binary setting with only two possible classifications.
In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into many categories rather than only two, as shown in this study. This experimentation and computer science project can help create better algorithms and models for the future of artificial intelligence in medical imaging.
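For reference, the weighted-majority form of multiplicative weight update over a panel of binary classifiers looks roughly like this. The expert predictions, labels and learning rate η below are invented for illustration; they are not the study's actual models:

```python
def mwu(predictions, labels, eta=0.5):
    """Multiplicative weights over a panel of classifiers: each
    expert's weight is multiplied by (1 - eta) whenever it errs,
    and the panel predicts by weighted majority vote each round."""
    w = [1.0] * len(predictions)
    votes = []
    for t, y in enumerate(labels):
        # weighted vote: +w for an expert predicting 1, -w for 0
        score = sum(wi if p[t] == 1 else -wi
                    for wi, p in zip(w, predictions))
        votes.append(1 if score >= 0 else 0)
        # penalise every expert that got this round wrong
        for i, p in enumerate(predictions):
            if p[t] != y:
                w[i] *= (1 - eta)
    return votes, w

# Two toy experts over four rounds: one always right, one always wrong.
experts = [[1, 0, 1, 1],
           [0, 1, 0, 0]]
labels = [1, 0, 1, 1]
votes, weights = mwu(experts, labels)
```

The unreliable expert's weight decays geometrically (0.5 per miss here), so the panel quickly tracks its best member; with only two classes and a dominant CNN, this offers little room to beat the CNN alone, consistent with the study's finding.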

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 72
5622 Improved Benzene Selectivity for Methane Dehydroaromatization via Modifying the Zeolitic Pores by a Dual Templating Approach

Authors: Deepti Mishra, K. K. Pant, Xiu Song Zhao, Muxina Konarova

Abstract:

Catalytic transformation of methane, the simplest hydrocarbon, into benzene and valuable chemicals over Mo/HZSM-5 has great economic potential; however, it suffers serious hurdles due to blockage of the micropores caused by extensive coking at high temperature during methane dehydroaromatization (MDA). Under such conditions, it is necessary to design a micro/mesoporous ZSM-5, which offers advantages such as uniform dispersibility of MoOx species, and consequently the formation of active Mo sites in the micro/mesoporous channels, and lower carbon deposition because of the improved mass transfer rate within the hierarchical pores. In this study, we report a unique strategy to control the porous structure of ZSM-5 through a dual templating approach, utilizing C6 and C12 surfactants as porogens. DFT studies were carried out to correlate ZSM-5 framework development using the C6 and C12 surfactants with the structure directing agent. The structural and morphological parameters of the synthesized ZSM-5 were explored in detail to determine the crystallinity, porosity, Si/Al ratio, particle shape, size, and acidic strength, which were further correlated with the physicochemical and catalytic properties of the Mo-modified HZSM-5 catalysts. After Mo incorporation, all the catalysts were tested in the MDA reaction. The activity tests showed that the C6 surfactant-modified hierarchically porous Mo/HZSM-5(H) had the highest benzene formation rate (1.5 μmol/gcat·s) and longer catalytic stability, up to 270 min of reaction, compared to the conventional microporous Mo/HZSM-5(C). In contrast, the C12 surfactant-modified Mo/HZSM-5(D) was inferior in the MDA reaction (benzene formation rate: 0.5 μmol/gcat·s). We ascribe the difference in MDA activity to the hierarchically interconnected meso/microporous structure of Mo/HZSM-5(H), which precludes the secondary coking reaction from benzene and hence contributes substantial stability to the MDA reaction.

Keywords: hierarchical pores, Mo/HZSM-5, methane dehydroaromatization, coke deposition

Procedia PDF Downloads 73
5621 Experimental Investigation of the Thermal Performance of Fe2O3 under Magnetic Field in an Oscillating Heat Pipe

Authors: H. R. Goshayeshi, M. Khalouei, S. Azarberamman

Abstract:

This paper presents an experimental investigation of the use of Fe2O3 nanoparticles added to kerosene as a working fluid under a magnetic field. The experiment was performed on an Oscillating Heat Pipe (OHP) in order to measure the temperature distribution and compare the heat transfer rate of the oscillating heat pipe with and without a magnetic field. Results showed that the addition of Fe2O3 nanoparticles under a magnetic field improved the thermal performance of the OHP compared with the non-magnetic case. Furthermore, applying a magnetic field enhanced the heat transfer characteristics of Fe2O3 in both start-up and steady-state conditions.

Keywords: experimental, oscillating heat pipe, heat transfer, magnetic field

Procedia PDF Downloads 255
5620 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training

Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto

Abstract:

In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on years of experience and is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts, beaches, and so on from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed by being suspended from a crane, so it would be time-consuming and costly for inexperienced workers to train on-site. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the total volume of the ideal final structure. Using this porosity evaluation, the simulator can determine how well the user has installed the blocks. A voxelization technique is used to calculate the porosity of the structure, simplifying the calculations, and other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and compare the user's block installation with the installation computed by the algorithm.
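The voxel-based volume check can be sketched as follows. Here blocks are simplified to axis-aligned boxes and the region to a box, purely for illustration; the real simulator voxelizes Unity meshes:

```python
def fill_ratio(blocks, region, res=1.0):
    """Voxelize the target region at resolution `res` and count the
    voxels whose centre falls inside any placed block. The ratio of
    occupied voxels to region voxels approximates the installed
    volume fraction used by the simulator's porosity score.
    Boxes are (xmin, ymin, zmin, xmax, ymax, zmax)."""
    x0, y0, z0, x1, y1, z1 = region

    def centres(a, b):
        v = a + res / 2
        while v < b:
            yield v
            v += res

    occupied = total = 0
    for x in centres(x0, x1):
        for y in centres(y0, y1):
            for z in centres(z0, z1):
                total += 1
                if any(bx0 <= x <= bx1 and by0 <= y <= by1 and bz0 <= z <= bz1
                       for bx0, by0, bz0, bx1, by1, bz1 in blocks):
                    occupied += 1
    return occupied / total

# One unit block placed in one corner of a 2x2x2 target region.
ratio = fill_ratio([(0, 0, 0, 1, 1, 1)], (0, 0, 0, 2, 2, 2), res=1.0)
```

Sampling voxel centres avoids the boundary double-counting that exact box-intersection volumes would require, at the cost of resolution-dependent accuracy.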

Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks

Procedia PDF Downloads 87
5619 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have targeted Protein-coding regions alone. There are therefore challenges in gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions; alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient, sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms.
The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 after three iterations. The cost (the difference between the predicted and the actual outcome) likewise decreased from 1.446 to 0.842 and then to 0.718 over the first, second, and third iterations. Training terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI achieved an area under the ROC curve of 0.97, indicating good predictive ability, and identified protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. On 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identified the protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
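The fitting loop described above (logistic regression with a sigmoid activation, gradient iterations toward the coefficient vector, and a dynamically chosen classification threshold) can be sketched on synthetic data. Everything below, including the six-feature setup, the learning rate, and the "true" weights (which merely echo the magnitudes reported in the abstract), is an illustrative stand-in, not the PNRI implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 6                       # six features, as in the abstract
X = rng.normal(size=(n, d))
true_w = np.array([0.043, 0.519, 0.715, 0.878, 1.157, 2.575])  # illustrative
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# gradient iterations on the negative log-likelihood (the MLE objective)
w = np.zeros(d)
lr = 0.1
for epoch in range(390):             # the abstract reports 390 epochs
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n
    w -= lr * grad

# dynamic thresholding: sweep cut-offs and keep the one maximizing TPR - FPR
scores = sigmoid(X @ w)
best_t, best_j = 0.5, -1.0
for t in np.linspace(0.05, 0.95, 91):
    pred = scores >= t
    tpr = (pred & (y == 1)).sum() / max((y == 1).sum(), 1)
    fpr = (pred & (y == 0)).sum() / max((y == 0).sum(), 1)
    if tpr - fpr > best_j:
        best_j, best_t = tpr - fpr, t

acc = float(((scores >= best_t) == y).mean())
```

On cleanly separable synthetic data like this, accuracy lands well above chance; real transcriptome features are of course far noisier.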

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 61
5618 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in relieving traffic congestion and environmental problems in cities. However, equipment failures and obstructed links often cause links to lose capacity in daily operation, which seriously affects the reliability and transport service quality of the URT network. To measure this influence, passengers are divided into three categories in the case of links' capacity loss. Passengers in category 1 are only slightly affected: their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected: their travel is not reliable, since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT at all because the passenger flow on their travel paths exceeds the remaining capacity, so their travel is not reliable. The proportion of passengers in category 1, whose travel is reliable, is therefore defined as the reliability indicator of the URT network. The transport service quality of the network is related to passengers' travel time, number of transfers, and seat availability. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort, so passengers' average generalized travel cost is used as the transport service quality indicator. The impact of links' capacity loss on transport service quality is measured by passengers' relative average generalized travel cost with and without the capacity loss. The proportion of passengers affected by a link, together with link betweenness, is used to identify the important links in the URT network.
A stochastic user equilibrium assignment model based on an improved logit model is used to determine passengers' categories and calculate their generalized travel cost in the case of links' capacity loss; it is solved with the method of successive weighted averages (MSA). The reliability and transport service quality indicators of the URT network are then calculated from the solution. Taking the Wuhan Metro as a case study, the reliability and transport service quality of the network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by a link effectively identifies the important links, which strongly influence the reliability and transport service quality of the network; the important links are mostly connected to transfer stations and carry high passenger flows. As the number of failed links and the proportion of capacity loss increase, network reliability keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 first increases and then decreases. Once the number of failed links and the proportion of capacity loss grow beyond a certain level, the decline in transport service quality levels off.
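The assignment step described in the abstract (a logit-based stochastic user equilibrium driven to a fixed point with MSA step sizes of 1/k) can be illustrated on a toy network of two parallel links; the demand, capacities, cost function, and dispersion parameter below are invented for illustration and are not from the Wuhan Metro case:

```python
import numpy as np

demand = 1000.0                    # passengers on one OD pair (illustrative)
cap = np.array([600.0, 500.0])     # link capacities
t0 = np.array([10.0, 12.0])        # free-flow generalized costs
theta = 0.5                        # logit dispersion parameter

def cost(flow):
    # BPR-style congestion curve standing in for generalized travel cost
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

flow = np.full(2, demand / 2)
for k in range(1, 200):
    c = cost(flow)
    p = np.exp(-theta * c) / np.exp(-theta * c).sum()  # logit route split
    aux = demand * p                                   # auxiliary flows
    flow += (aux - flow) / k                           # MSA step size 1/k
```

The diminishing 1/k step size is what makes the averaged flows settle at the stochastic user equilibrium; here the cheaper, higher-capacity link ends up carrying the larger share of the demand.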

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 123
5617 The Effect of Core Training on Physical Fitness Characteristics in Male Volleyball Players

Authors: Sibel Karacaoglu, Fatma Ç. Kayapinar

Abstract:

The aim of this study is to investigate the effect of a core training program on physical fitness characteristics and body composition in male volleyball players. Twenty-six male university volleyball players aged 19 to 24 years, with no health problems or injuries, participated in the study. Subjects were randomly divided into a training group (TG) and a control group (CG). Data from the twenty-one players who completed all training sessions were used for statistical analysis (TG, n=11; CG, n=10). The core training program was applied to the training group three days a week for 10 weeks, while the control group received no training. Before and after the 10-week program, pre- and post-testing comprised body composition measurements (weight, BMI, bioelectrical impedance analysis) and physical fitness measurements, including flexibility (sit-and-reach test), muscle strength (back, leg, and grip strength by dynamometer), muscle endurance (sit-up and push-up tests), power (one-legged jump and vertical jump tests), speed (20 m and 30 m sprints), and balance (one-legged standing test). Changes between pre- and post-test values within each group were assessed with dependent t-tests. According to the statistical analysis, no significant difference in body composition was found between pre- and post-test values in either group. In the training group, all physical fitness measurements improved significantly after the core training program (p<0.05) except the 30 m sprint and handgrip strength (p>0.05). In the control group, on the other hand, only the 20 m sprint improved (p<0.05), while the other fitness test values did not differ between pre- and post-test measurements (p>0.05). These results suggest that the core training program has a positive effect on physical fitness characteristics in male volleyball players.

Keywords: body composition, core training, physical fitness, volleyball

Procedia PDF Downloads 342
5616 Design of Nanoreinforced Polyacrylamide-Based Hybrid Hydrogels for Bone Tissue Engineering

Authors: Anuj Kumar, Kummara M. Rao, Sung S. Han

Abstract:

Bone tissue engineering has emerged as a potential alternative for treating localized bone defects and diseases, congenital deformities, and surgical reconstruction. Designing and fabricating an ideal scaffold that restores damaged bone tissue via cell attachment, proliferation, and differentiation in a three-dimensional (3D) biological micro-/nano-environment remains a great challenge. A hydrogel system composed of a highly hydrophilic 3D polymeric network can mimic some of the functional physical and chemical properties of the extracellular matrix (ECM) and may provide a suitable 3D micro-/nano-environment resembling native bone tissue. Such a system is highly permeable and facilitates the transport of nutrients and metabolites. However, the use of hydrogels in bone tissue engineering has been limited by their low mechanical properties (toughness and stiffness), which continue to pose challenges in designing and fabricating tough, stiff hydrogels with improved bioactive properties. For this purpose, in our lab, polyacrylamide-based hybrid hydrogels were synthesized from sodium alginate, cellulose nanocrystals, and silica-based glass using one-step free-radical polymerization. The results showed good in vitro apatite-forming ability (biomineralization), improved mechanical properties (compressive strength and stiffness in both wet and dry conditions), and in vitro cytocompatibility with osteoblastic MC3T3-E1 cells. For the cytocompatibility assessment, both qualitative (cell attachment and spreading by FESEM) and quantitative (cell viability and proliferation by MTT assay) analyses were performed. The obtained hybrid hydrogels may potentially be used in bone tissue engineering applications once in vivo characterization is established.

Keywords: bone tissue engineering, cellulose nanocrystals, hydrogels, polyacrylamide, sodium alginate

Procedia PDF Downloads 147
5615 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies from alterations in the ECG waveform, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite long, which further complicates visual diagnosis and can delay disease detection. In this context, deep learning methods have emerged as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis; they scale to large datasets and can provide early and precise diagnoses. Cardiology is therefore one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model's ability to generalize. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and on data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect deviations from normal cardiac activity in their patients.
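For contrast with the deep learning model, the classical signal-processing view of R-peak detection can be sketched with a plain peak finder; the synthetic "ECG", sampling rate, and thresholds below are invented for illustration and are far simpler than real recordings:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                                   # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
beat_times = np.arange(0.5, 10, 1 / 1.2)   # a steady 72 bpm rhythm
# crude synthetic ECG: narrow Gaussian "R waves" plus baseline noise
ecg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)
ecg = ecg + 0.05 * np.random.default_rng(1).normal(size=t.size)

# detect R peaks: a height threshold plus a 0.3 s refractory period
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.3 * fs))
```

Hand-tuned detectors like this fail quickly on noisy or morphologically unusual ECGs, which is precisely the gap a learned model is meant to close.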

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 175
5614 Benefits of Whole-Body Vibration Training on Lower-Extremity Muscle Strength and Balance Control in Middle-Aged and Older Adults

Authors: Long-Shan Wu, Ming-Chen Ko, Chien-Chang Ho, Po-Fu Lee, Jenn-Woei Hsieh, Ching-Yu Tseng

Abstract:

This study aimed to determine the effects of whole-body vibration (WBV) training on lower-extremity muscle strength and balance control performance among community-dwelling middle-aged and older adults in the United States. Twenty-nine participants without any contraindication to WBV exercise completed all the study procedures. Participants were randomly assigned to perform body-weight exercise with either an individualized vibration frequency and amplitude, a fixed vibration frequency and amplitude, or no vibration. Isokinetic knee extensor power, limits of stability, and sit-to-stand tests were performed at baseline and after 8 weeks of training. Neither the individualized nor the fixed frequency-amplitude WBV training protocol improved isokinetic knee extensor power. The limits-of-stability endpoint excursion score for the individualized frequency-amplitude group increased by 8.8 (12.9%; p = 0.025) after training; no significant differences were observed in the fixed and control groups. The maximum excursion score for the individualized frequency-amplitude group increased by 9.2 (11.5%; p = 0.006) from baseline after training. The average weight transfer time score decreased significantly, by 0.21 s, in the fixed group, and participants in the individualized group showed a significant increase (3.2%) in the weight rising index score after 8 weeks of WBV training. These results suggest that 8 weeks of WBV training improved limits-of-stability and sit-to-stand performance. Future studies need to determine whether WBV training improves other factors that can influence posture control.

Keywords: whole-body vibration training, muscle strength, balance control, middle-aged and older adults

Procedia PDF Downloads 220
5613 Formation of in-situ Ceramic Phase in N220 Nano Carbon Containing Low Carbon MgO-C Refractory

Authors: Satyananda Behera, Ritwik Sarkar

Abstract:

In the iron and steel industries, MgO–C refractories are widely used in basic oxygen furnaces, electric arc furnaces, and steel ladles owing to their excellent corrosion resistance, thermal shock resistance, and other hot properties. Conventional magnesia-carbon refractories contain about 8-20 wt% carbon, but this carbon content brings disadvantages such as oxidation, low fracture strength, high heat loss, and higher carbon pickup in steel. Producing a low-carbon MgO-C refractory without compromising these beneficial properties is therefore the challenge. Nano carbon, having finer particles, can be mixed and distributed uniformly throughout the matrix and can improve mechanical, thermo-mechanical, corrosion, and other refractory properties. Previous experience with nano carbon in low-carbon MgO-C refractories has indicated an optimum content of around 1 wt%. This optimum nano carbon content was used in MgO-C compositions with flaky graphite, followed by aluminum and silicon metal powders as antioxidants, and these low-carbon MgO-C compositions were prepared by conventional manufacturing techniques. In parallel, a conventional MgO-C refractory containing 16 wt% flaky graphite was prepared under similar conditions. The developed products were characterized for various refractory properties. The nano carbon-containing compositions showed better mechanical and thermo-mechanical properties and better oxidation resistance than the conventional composition. The improvement is associated with the formation of in-situ ceramic phases such as aluminum carbide, silicon carbide, and magnesium aluminate spinel: the higher surface area and reactivity of N220 nano carbon black resulted in greater formation of in-situ ceramic phases, even at a much lower amount.
Overall, the nano carbon-containing compositions achieved improved properties at a much lower total carbon content than conventional MgO-C refractories.

Keywords: N220 nano carbon black, refractory properties, conventional manufacturing techniques, conventional magnesia carbon refractories

Procedia PDF Downloads 359
5612 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has important applications in areas such as chemical technology, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction through a chemically inert porous medium as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front exhibits different modes; this investigation focuses on the low-velocity regime. The main characteristic of the process is the propagation velocity of the combustion front, whose computation encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC comprises energy conservation laws for the temperatures of the porous medium and of the gas, and a mass conservation law for the relative concentration of the reacting component of the gas mixture. The model is homogenized using the two-temperature approach: at each point of the continuous medium, we specify solid and gas phases with Newtonian heat exchange between them. The computational scheme is based on a mixed finite element method on a regular mesh, with an explicit-implicit difference scheme for the approximation in time. Special attention was given to determining the propagation velocity of the combustion front. Straight computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term 'front propagation velocity' only makes sense for settled motion, for which analytical formulae linking velocity and equilibrium temperature hold.
The numerical implementation of one such formula, leading to a stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm was applied in a subsequent numerical investigation of the FGC process, in which the dependence of the main characteristics of the process on various physical parameters was studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity was investigated. It was also reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow to rapid combustion front propagation; approximate boundaries of this interval were calculated for specific parameters. All the results obtained agree fully with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques for calculating the instantaneous velocity of the combustion wave makes a semi-Lagrangian approach to the problem feasible.

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 293
5611 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance

Authors: Abdullah Al Farwan, Ya Zhang

Abstract:

In today's educational arena, it is critical to understand educational data and to evaluate important aspects of it, particularly data on student achievement. Educational Data Mining (EDM) is a research area that focuses on uncovering patterns and information in data from educational institutions. Teachers who can predict their students' class performance can use this information to improve their teaching, and such predictions can serve a wide range of objectives, for example informing strategic plans for delivering high-quality education. This paper recommends employing data mining techniques on historical data to forecast students' final grades. In this study, five data mining methods (Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest) with wrapper-based feature selection were applied to two datasets relating to Portuguese language and mathematics lessons. The results showed the effectiveness of data mining methodologies in predicting student academic success, with classification accuracies ranging from roughly 70% to 94%. Among the selected classification algorithms, the lowest accuracy was achieved by the Multi-layer Perceptron, at about 70.45%, and the highest by Random Forest, at about 94.10%. This work can assist educational administrators in identifying poorly performing students at an early stage and perhaps in implementing motivational interventions to improve their academic success and prevent dropout.
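The wrapper-based feature selection the study pairs with its classifiers can be illustrated with scikit-learn's SequentialFeatureSelector wrapped around a Random Forest; the synthetic dataset and parameter choices below are placeholders, not the Portuguese-language and mathematics datasets used in the paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# synthetic stand-in for a student-performance dataset
X, y = make_classification(n_samples=400, n_features=15, n_informative=5,
                           random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0)

# wrapper selection: greedily add whichever feature improves CV accuracy most
sfs = SequentialFeatureSelector(clf, n_features_to_select=5,
                                direction="forward", cv=3)
sfs.fit(X, y)
X_sel = sfs.transform(X)

acc = cross_val_score(clf, X_sel, y, cv=3).mean()
```

Because the classifier itself scores every candidate subset, wrapper methods are slower than filter methods but tend to find feature subsets tailored to the model, which is consistent with the accuracy gains the study reports.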

Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance

Procedia PDF Downloads 158
5610 Modeling of Leak Effects on Transient Dispersed Bubbly Flow

Authors: Mohand Kessal, Rachid Boucetta, Mourad Tikobaini, Mohammed Zamoum

Abstract:

The leakage problem in two-component fluid flow is modeled for a transient, one-dimensional, homogeneous bubbly flow, taking into account the effect of a leak located at the midpoint of the pipeline. The corresponding three conservation equations are solved numerically by an improved method of characteristics. The results obtained are explained and discussed in terms of their physical impact on the flow parameters.
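The method of characteristics is easiest to see in the simpler single-phase water-hammer setting. The sketch below is a frictionless 1D MOC for a reservoir-pipe-valve system, not the two-component bubbly-flow model of the paper, and every parameter is invented for illustration; after instantaneous valve closure, the computed head rise should match the Joukowsky surge a·V0/g:

```python
import numpy as np

g, a = 9.81, 1000.0      # gravity, pressure-wave speed (m/s); illustrative
L, N = 1000.0, 21        # pipe length (m) and number of grid nodes
dx = L / (N - 1)
dt = dx / a              # Courant number of 1, as the MOC grid requires
H0, V0 = 50.0, 1.0       # steady head and velocity before valve closure

H = np.full(N, H0)
V = np.full(N, V0)
max_head = H0
for _ in range(200):     # march in time along the C+ and C- characteristics
    Hn, Vn = H.copy(), V.copy()
    # interior nodes: intersect the C+ (from i-1) and C- (from i+1) lines
    Hn[1:-1] = 0.5 * (H[:-2] + H[2:]) + (a / (2 * g)) * (V[:-2] - V[2:])
    Vn[1:-1] = 0.5 * (V[:-2] + V[2:]) + (g / (2 * a)) * (H[:-2] - H[2:])
    # upstream reservoir: fixed head, velocity from the C- characteristic
    Hn[0] = H0
    Vn[0] = V[1] + (g / a) * (H0 - H[1])
    # downstream valve closed instantaneously: zero flow, head from C+
    Vn[-1] = 0.0
    Hn[-1] = H[-2] + (a / g) * V[-2]
    H, V = Hn, Vn
    max_head = max(max_head, H.max())
```

A leak at the midpoint would enter this scheme as an extra internal boundary condition relating the discharge through the orifice to the local head, which is where the paper's "improved" characteristic treatment comes in.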

Keywords: fluid transients, pipeline leaks, method of characteristics, leakage problem

Procedia PDF Downloads 467
5609 Cup-Cage Construct for Treatment of Severe Acetabular Bone Loss in Revision Total Hip Arthroplasty: Midterm Clinical and Radiographic Outcomes

Authors: Faran Chaudhry, Anser Daud, Doris Braunstein, Oleg Safir, Allan Gross, Paul Kuzyk

Abstract:

Background: Acetabular reconstruction in the context of massive acetabular bone loss is challenging. In rare scenarios where the extent of bone loss precludes standard shell placement, reconstruction at our center consisted of a cup-cage construct, a cage combined with highly porous metal augments. This study evaluates the survivorship, complications, and functional outcomes of this technique. Methods: A total of 131 cup-cage implants (129 patients) were included in our retrospective review of revision total hip arthroplasties performed from January 2003 to January 2022. Among these cases, 100/131 (76.3%) were women; the mean age at the time of surgery was 68.7 years (range, 29.0 to 92.0; SD, 12.4), and the mean follow-up was 7.7 years (range, 0.02 to 20.3; SD, 5.1). Kaplan-Meier survivorship analysis was conducted, with failure defined as revision surgery and/or failure of the cup-cage reconstruction. Results: A total of 30 implants (23%) reached the study endpoint of all-cause revision. Overall survivorship was 74.8% at 10 years and 69.8% at 15 years. Reasons for revision included infection in 12/131 (9.1%), dislocation in 10/131 (7.6%), aseptic loosening of the cup and/or cage in 5/131 (3.8%), and aseptic loosening of the femoral stem in 2/131 (1.5%). The mean leg length discrepancy (LLD) improved from 12.2 ± 15.9 mm to 3.9 ± 11.8 mm (p<0.05), and the horizontal and vertical hip centres on plain radiographs improved significantly (p<0.05). Functionally, fewer patients required gait aids: 34 (25.9%) used a cane, walker, or wheelchair post-operatively compared with 58 (44%) pre-operatively, and the number of independent ambulators increased significantly, from 24 to 47 (36%). Conclusion: The cup-cage construct is a reliable option for treating various acetabular defects, with favourable survivorship, clinical and radiographic outcomes, and a satisfactory complication rate.

Keywords: revision total hip arthroplasty, acetabular defect, pelvic discontinuity, trabecular metal augment, cup-cage

Procedia PDF Downloads 60
5608 Mood Symptom Severity in Service Members with Posttraumatic Stress Symptoms after Service Dog Training

Authors: Tiffany Riggleman, Andrea Schultheis, Kalyn Jannace, Jerika Taylor, Michelle Nordstrom, Paul F. Pasquina

Abstract:

Introduction: Posttraumatic stress (PTS) and posttraumatic stress disorder (PTSD) remain significant problems for military and veteran communities. Symptoms of PTSD often include poor sleep, intrusive thoughts, difficulty concentrating, and trouble with emotional regulation. Unfortunately, despite its high prevalence, service members diagnosed with PTSD often do not seek help, usually because of the perceived stigma surrounding behavioral health care. To help address these challenges, non-pharmacological therapeutic approaches are being developed to improve care and enhance compliance. The Service Dog Training Program (SDTP), in which patients learn to train puppies to become mobility service dogs, has been successfully implemented in PTS/PTSD care programs, with anecdotal reports of improved outcomes. This study was designed to assess the biopsychosocial effects of SDTP on military beneficiaries with PTS symptoms. Methods: Individuals between the ages of 18 and 65 with PTS symptoms were recruited for this prospective study. Each subject completed 4 weeks of baseline testing, followed by 6 weeks of active service dog training (two one-hour sessions per week) with a professional service dog trainer. Outcome measures included the Posttraumatic Stress Checklist for the DSM-5 (PCL-5), the Generalized Anxiety Disorder questionnaire-7 (GAD-7), the Patient Health Questionnaire-9 (PHQ-9), social support/interaction measures, anthropometrics, blood/serum biomarkers, and qualitative interviews. A preliminary analysis of 17 participants examined mean scores on the GAD-7, PCL-5, and PHQ-9 pre- and post-SDTP; changes were assessed using Wilcoxon signed-rank tests. Results: Post-SDTP, there was a statistically significant mean decrease in PCL-5 scores of 13.5 on an 80-point scale (p=0.03) and a significant mean decrease of 2.2 in PHQ-9 scores on a 27-point scale (p=0.04), suggestive of decreased PTSD and depression symptoms.
While mean GAD-7 scores also decreased post-SDTP, the difference was not significant (p=0.20). Recurring themes from the qualitative interviews included decreased pain, forgetting about stressors, an improved sense of calm, increased confidence, improved communication, and establishing a connection with the service dog. Conclusion: Preliminary results from the first 17 participants suggest that individuals who received SDTP had a statistically significant decrease in PTS symptoms, as measured by the PCL-5 and PHQ-9. This ongoing study seeks to enroll a total of 156 military beneficiaries with PTS symptoms. Future analyses will include additional psychological outcomes, pain scores, blood/serum biomarkers, and other measures of the social aspects of PTSD, such as relationship satisfaction and sleep hygiene.
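The paired, non-parametric comparison used in the preliminary analysis (a Wilcoxon signed-rank test on pre- versus post-intervention scores) can be sketched with SciPy; the scores below are randomly generated placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
# hypothetical pre/post PCL-5 scores for 17 participants (not real data)
pre = rng.integers(30, 70, size=17).astype(float)
post = pre - rng.normal(loc=13.5, scale=5.0, size=17)  # ~13.5-point drop

# paired, non-parametric test on the within-subject differences
stat, p = wilcoxon(pre, post)
```

The signed-rank test is the natural choice here because change scores in a sample of 17 cannot safely be assumed normal.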

Keywords: post-concussive syndrome, posttraumatic stress, service dog, service dog training program, traumatic brain injury

Procedia PDF Downloads 106
5607 Heavy Oil Recovery with Chemical Viscosity-Reduction: An Innovative Low-Carbon and Low-Cost Technology

Authors: Lin Meng, Xi Lu, Haibo Wang, Yong Song, Lili Cao, Wenfang Song, Yong Hu

Abstract:

China has abundant heavy oil resources, and thermal recovery is the main method for developing heavy oil reservoirs. However, high energy consumption, high carbon emissions, and high production costs make thermal recovery of heavy oil unsustainable, so a replacement technology is urgently needed. A low-carbon, low-cost heavy oil recovery technology, chemical viscosity reduction in the layer (CVRL), has been developed by the Petroleum Exploration and Development Research Institute of Sinopec through mechanism investigation, product synthesis, and improved oil production technologies, as follows: (1) A cascade viscosity mechanism of heavy oil was proposed: asphaltenes and resins grow from free molecules to associative structures and further to bulk aggregations through π-π stacking and hydrogen bonding, which causes the high viscosity of heavy oil. (2) To break the π-π stacking and hydrogen bonds of heavy oil, a copolymer of N-(3,4-dihydroxyphenethyl)acrylamide and 2-acrylamido-2-methylpropane sulfonic acid was synthesized as a viscosity reducer. It achieves a viscosity reduction rate of over 80% without shearing for heavy oil (viscosity < 50,000 mPa·s), whose fluidity in the layer is evidently improved. (3) A hydroxymethyl acrylamide-maleic acid-decanol ternary copolymer self-assembling plugging agent was synthesized. Its particle size is adjustable from 0.1 μm to 2 mm and its volume is controllable over a 10-500-fold range, enabling efficient transport of the viscosity reducer to oil-enriched areas. CVRL has so far been applied in 400 wells, increasing oil production by 470,000 tons, saving 81,000 tons of standard coal, reducing CO2 emissions by 174,000 tons, and cutting production costs by 60%. It promotes the transformation of heavy oil production toward low energy consumption, low carbon emissions, and low cost.

Keywords: heavy oil, chemical viscosity-reduction, low carbon, viscosity reducer, plugging agent

Procedia PDF Downloads 64
5606 Deep Brain Stimulation and Motor Cortex Stimulation for Post-Stroke Pain: A Systematic Review and Meta-Analysis

Authors: Siddarth Kannan

Abstract:

Objectives: Deep brain stimulation (DBS) and motor cortex stimulation (MCS) are innovative interventions for treating neuropathic pain disorders such as post-stroke pain. While each treatment has had varying success in managing pain, a comparative analysis has not yet been performed, and the success rates of these techniques using validated, objective pain scores have not been synthesised. The aim of this study was to compare the pain relief offered by MCS and DBS in patients with post-stroke pain and to assess whether either procedure offers better results. Methods: A systematic review and meta-analysis were conducted in accordance with PRISMA guidelines (PROSPERO ID CRD42021277542). Three databases were searched, and articles published from 2000 to June 2023 were included (last search date 25 June 2023). Meta-analysis was performed using random-effects models. We evaluated the performance of DBS and MCS by assessing studies that reported pain relief using the Visual Analogue Scale (VAS). Descriptive statistics were analysed in SPSS (version 27; IBM, Armonk, NY, USA), and the meta-analysis was performed in R (RStudio, version 4.0.1). Results: Of the 478 articles identified, 27 were included in the analysis (232 patients: 117 DBS and 115 MCS). The pooled proportion of patients who improved after DBS was 0.68 (95% CI, 0.57-0.77; I2=36%); after MCS it was 0.72 (95% CI, 0.62-0.80; I2=59%). A sensitivity analysis restricted to studies with at least 5 patients (nine studies each for DBS and MCS) showed no significant difference in results. Conclusions: Surgical interventions such as DBS and MCS form an emerging field in the treatment of post-stroke pain, with limited studies exploring and comparing the two techniques.
While our study suggests that MCS might be a slightly better treatment option, further research is needed to determine the appropriate surgical intervention for post-stroke pain.
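The pooled proportions with 95% CIs reported above come from a random-effects model; one standard recipe (logit-transformed proportions with a DerSimonian-Laird between-study variance) can be sketched as follows, with per-study counts fabricated purely for illustration:

```python
import numpy as np

# hypothetical per-study counts (improved / total), not the review's data
events = np.array([8, 12, 5, 20, 9])
totals = np.array([12, 18, 7, 28, 13])

p = events / totals
yi = np.log(p / (1 - p))                 # logit-transformed proportions
vi = 1 / events + 1 / (totals - events)  # approximate variance of each logit

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1 / vi
ybar = (w * yi).sum() / w.sum()
Q = (w * (yi - ybar) ** 2).sum()
c = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (Q - (len(yi) - 1)) / c)

# random-effects pooled estimate with a 95% CI, back-transformed
wr = 1 / (vi + tau2)
mu = (wr * yi).sum() / wr.sum()
se = (1 / wr.sum()) ** 0.5
pooled = 1 / (1 + np.exp(-mu))
lo = 1 / (1 + np.exp(-(mu - 1.96 * se)))
hi = 1 / (1 + np.exp(-(mu + 1.96 * se)))
```

The I² values quoted in the abstract derive from the same Q statistic, as 100·max(0, (Q − df)/Q).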

Keywords: post-stroke pain, deep brain stimulation, motor cortex stimulation, pain relief

Procedia PDF Downloads 129
5605 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza

Abstract:

Background: Data are key to supporting health service delivery, as stakeholders, including NGOs, rely on them for effective service delivery, decision-making, and system strengthening. Several studies have generated debate on data quality from national health management information systems (HMIS) in sub-Saharan Africa. This limits the utilization of data in resource-limited settings, which already struggle to meet the standards set by the World Health Organization (WHO). We aimed to evaluate data quality improvement in the Neno district HMIS over a 4-year period (2018 – 2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine reporting rates, followed by in-depth Key Informant Interviews (KIIs) and Focus Group Discussions (FGDs). We used the WHO desk-review module to assess the quality of HMIS data captured in Neno district from 2018 to 2021. The metrics assessed were the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports; timeliness was measured as the percentage of reports submitted by the expected deadline. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016. We analyzed key-informant demographics in Power BI and developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we drew healthcare workers' perceptions, the interventions implemented, and suggestions for improvement. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average completeness rate was 83.4% before the intervention and 98.1% after, while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 2018, 78.8%; 2019, 88%; 2020, 96.3%; and 2021, 99.9% (p < 0.004).
The trend for timeliness declined until 2021, when it improved: 2018, 68.4%; 2019, 68.3%; 2020, 67.1%; and 2021, 81% (p < 0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99%, while timeliness increased from 68% to 81%. Sixty-five percent of reports consistently met the national standard of 90% or higher for completeness, but only 24% did so for timeliness; thirty-two percent of reports met the national standard overall. Only 9% improved on both completeness and timeliness; these were the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard in timeliness, and only one failed to do so in completeness. Factors associated with improvement included better communication and reminders through internal channels, along with data quality assessments, checks, and reviews. Decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: The findings suggest that HMIS data quality in the district has improved following these collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps, and that results be shared publicly to support increased use of data. These results can inform the Ministry of Health and its partners about effective interventions and guide initiatives for improving data quality.
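The completeness and timeliness metrics described above reduce to simple percentages over the set of expected reports. A minimal Python sketch of that calculation, with invented report names and statuses for illustration:

```python
def report_metrics(reports):
    """Completeness = % of expected reports received;
    timeliness = % of expected reports received by the deadline.
    `reports` maps report name -> (received: bool, on_time: bool)."""
    expected = len(reports)
    received = sum(1 for r, _ in reports.values() if r)
    on_time = sum(1 for r, t in reports.values() if r and t)
    return 100 * received / expected, 100 * on_time / expected

def meets_standard(rate, threshold=90.0):
    """Check a rate against the 90% national standard cited above."""
    return rate >= threshold
```

Aggregating these per-facility, per-quarter rates across the 34 reports yields the district-level figures reported in the abstract.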

Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making

Procedia PDF Downloads 78
5604 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircraft

Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira

Abstract:

In order to improve commute times for short trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute through vehicles that take off and land vertically and provide passenger transport equivalent to a car, with mobility within and between large cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost, and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, it remains a challenge to find a feasible way to rely on batteries rather than conventional petroleum-based fuels: batteries are heavy, and their energy density is still well below that of gasoline, diesel, or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize energy consumption in a typical mission of an air taxi aircraft. The approach-and-landing procedure was chosen as the subject of a genetic-algorithm optimization, though the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases.
For a given trajectory, the best set of control variables is calculated to provide the time-history response for the aircraft's attitude, rotor RPM, and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort, and design constraints are imposed to keep the solution representative, and the results are highly dependent on these constraints. For the tested cases, the performance improvement ranged from 5 to 10%, varying with the initial airspeed, altitude, flight path angle, and attitude.
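The optimization described above can be sketched generically: a genetic algorithm evolves a vector of control variables toward minimum energy cost. The following is a minimal real-coded genetic algorithm in Python, with an invented toy cost function standing in for the aircraft's electric-power model; it is an illustrative sketch under those assumptions, not the authors' implementation:

```python
import random

def genetic_minimize(cost, n_vars, bounds, pop_size=40, gens=100,
                     mut_rate=0.1, seed=0):
    """Evolve a vector of n_vars control variables (e.g. thrust settings
    along a discretized landing path) to minimize `cost`."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=cost)
        elite = scored[: pop_size // 4]          # keep best quarter as-is
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)          # parents from the elite
            cut = rng.randrange(1, n_vars)
            child = a[:cut] + b[cut:]            # one-point crossover
            for i in range(n_vars):              # clamped Gaussian mutation
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i]
                                   + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = children
    return min(pop, key=cost)
```

In the paper's setting, the cost function would integrate electric power over the landing trajectory subject to the safety and comfort constraints; here a quadratic penalty around a hypothetical optimal thrust setting serves as a stand-in for testing.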

Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design

Procedia PDF Downloads 107