Search results for: computer processing of large databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12744

9294 Linkage between a Plant-based Diet and Visual Impairment: A Systematic Review and Meta-Analysis

Authors: Cristina Cirone, Katrina Cirone, Monali S. Malvankar-Mehta

Abstract:

Purpose: An increased risk of visual impairment has been observed in individuals lacking a balanced diet. The purpose of this paper is to characterize the relationship between plant-based diets and specific ocular outcomes among adults. Design: Systematic review and meta-analysis. Methods: This systematic review and meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement guidelines. The databases MEDLINE, EMBASE, Cochrane, and PubMed were systematically searched up until May 27, 2021. Of the 503 articles independently screened by two reviewers, 21 were included in this review. Quality assessment and data extraction were performed by both reviewers. Meta-analysis was conducted using STATA 15.0. Fixed-effect and random-effect models were computed based on heterogeneity. Results: A total of 503 studies were identified, which then underwent duplicate removal and a title and abstract screen. The remaining 61 studies underwent a full-text screen, 21 progressed to data extraction, and 15 were included in the quantitative analysis. Meta-analysis indicated that regular consumption of fish (OR = 0.70; CI: [0.62-0.79]) and of skim milk, poultry, and non-meat animal products (OR = 0.70; CI: [0.61-0.79]) is associated with a reduced risk of visual impairment (age-related macular degeneration, age-related maculopathy, cataract development, and central geographic atrophy) among adults. Consumption of red meat (OR = 1.41; CI: [1.07-1.86]) is associated with an increased risk of visual impairment. Conclusion: Overall, a pescatarian diet is associated with the most favorable visual outcomes among adults, while the consumption of red meat appears to negatively impact vision. Results suggest a need for more local and government-led interventions promoting a healthy and balanced diet.
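
As a rough illustration of the pooling step behind such results (the review used STATA 15.0; the odds ratios and confidence intervals below are placeholders, not data from the included studies), a minimal inverse-variance fixed-effect and DerSimonian-Laird random-effects sketch might look like this:

```python
# Minimal sketch of inverse-variance pooling of odds ratios (fixed-effect and
# DerSimonian-Laird random-effects); inputs are hypothetical, not the review's data.
import numpy as np

def pool_odds_ratios(or_values, ci_lower, ci_upper):
    """Pool study-level odds ratios on the log scale."""
    log_or = np.log(or_values)
    # recover standard errors from 95% confidence intervals
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1.0 / se**2                               # inverse-variance (fixed-effect) weights
    pooled_fe = np.sum(w * log_or) / np.sum(w)

    # DerSimonian-Laird between-study variance for the random-effects model
    q = np.sum(w * (log_or - pooled_fe) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)
    w_re = 1.0 / (se**2 + tau2)
    pooled_re = np.sum(w_re * log_or) / np.sum(w_re)
    return np.exp(pooled_fe), np.exp(pooled_re)

fe, re = pool_odds_ratios([0.65, 0.72, 0.75], [0.50, 0.60, 0.58], [0.85, 0.86, 0.97])
print(f"fixed-effect OR = {fe:.2f}, random-effects OR = {re:.2f}")
```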

Keywords: plant-based diet, pescatarian diet, visual impairment, systematic review, meta-analysis

Procedia PDF Downloads 185
9293 Waste Burial to the Pressure Deficit Areas in the Eastern Siberia

Authors: L. Abukova, O. Abramova, A. Goreva, Y. Yakovlev

Abstract:

Important executive decisions on stimulating oil and gas production in Eastern Siberia have recently been taken. Eastern Siberia holds unique and large fields of oil, gas, and gas-condensate. The Talakan, Koyumbinskoye, Yurubcheno-Tahomskoye, Kovykta, and Chayadinskoye fields are expected to be developed first. This will result in a sharp increase in the environmental load on the nature of Eastern Siberia. In Eastern Siberia, the introduction of ecological imperatives into hydrocarbon production is still realistic. Underground water movement is one of the most important factors in managing the condition of ecosystems. Oil and gas production is associated with the forced displacement of huge water masses and the mixing of waters of different composition and origin, which determines the extent of the anthropogenic impact on water drive systems and their protective reaction. An extensive hydrogeological system of the depression type has been identified in the pre-salt deposits here. The pressure decline here is steady down to the basement. The decrease of the hydrodynamic potential towards the basement with such a gradient resulted in the reformation of the fields during the historical (geological) development of the Nepsko-Botuobinskaya anteclise. The depression hydrodynamic systems are characterized by extremely high isolation and can only exist under such closed conditions. The steady nature of water movement, due to a strictly negative gradient of reservoir pressure, makes it quite possible to inject environmentally harmful liquid substances instead of water. Disposal of the most hazardous wastes is most expedient in the deposits of the crystalline basement, in certain structures distant from oil and gas fields. The storage period for environmentally harmful liquid substances can be estimated on geological time scales, ensuring that they are completely prevented from being released into the environment or the air, even during strong earthquakes. Disposal of wastes from the chemical and nuclear industries is a matter of special consideration. The existing methods of storage and disposal of wastes are very expensive. The methods currently applied for storing nuclear wastes at depths of several meters, even in the most durable containers, constitute a potential danger. The enormous size of the depression system of the Nepsko-Botuobinskaya anteclise makes it possible to identify suitable objects at depths below 1500 m where nuclear wastes can be stored indefinitely without any environmental impact. Thus, the water drive system of the Nepsko-Botuobinskaya anteclise is an ideal object for large-volume injection of environmentally harmful liquid substances, even if there are large oil and gas accumulations in the subsurface. The specific geological and hydrodynamic conditions of the system allow hydrocarbons to be produced from the subsurface simultaneously with the disposal of industrial wastes from the oil and gas, mining, chemical, and nuclear industries without any environmental impact.

Keywords: Eastern Siberia, formation pressure, underground water, waste burial

Procedia PDF Downloads 259
9292 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission

Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan

Abstract:

As key systems of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser thrust, solar sails, and other micro-thrusters. However, there are still some shortcomings in these systems. The ion thruster requires a high voltage or magnetic field to accelerate ions, resulting in extra subsystems, added mass, and large volume. Laser thrust is currently mostly ground-based and provides pulsed thrust, constrained by station distribution and laser capacity. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation thrust from alpha particles is expounded. Radioactive materials with different released energies, such as 210Po with 5.4 MeV and 238Pu with 5.29 MeV, attached to a metal film provide thrusts in the range 0.02-5 uN/cm2. With this recoil force, radiation is able to serve as a power source. With the advantages of low system mass, high accuracy, and long active time, the radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays, and deep space exploration. For further study, a formula relating the amplitude and direction of the thrust to the released energy and the decay coefficient is set up. With this initial formula, alpha-emitting nuclides with half-lives longer than one hundred days are calculated and listed. As the alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With the residual charge or an extra electromagnetic field, the emission of alpha particles behaves differently, and this is analyzed in this paper. Furthermore, three more complex situations are discussed: a radioactive element generating alpha particles at several energies with different intensities, a mixture of various radioactive elements, and cascaded alpha decay. Combining these cases makes it more efficient and flexible to adjust the thrust amplitude. The propulsion model for spontaneous fission is similar to that of alpha decay, but with a more complex angular distribution. A new quasi-sphere space propulsion system based on the radiation thrust is introduced, as well as the collecting and processing system for excess charge and reaction heat. The energy and spatial angular distribution of the emitted alpha particles per unit area and for a given propulsion system have been studied. As alpha particles easily lose energy and self-absorb, the distribution is not a simple stacking of each nuclide. With changes in the amplitude and angle of the radiation thrust, an orbital variation strategy for space debris removal is shown and optimized.
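
As a back-of-the-envelope illustration of the recoil-thrust estimate described above, the sketch below uses our own simplifying assumptions (isotropic emission into a hemisphere with a mean normal momentum fraction of 0.5, no self-absorption, and a hypothetical surface activity); it is not the authors' formula:

```python
# Rough recoil thrust per unit area from a thin alpha-emitting film (illustrative only).
import math

MEV_TO_J = 1.602e-13          # joules per MeV
M_ALPHA = 6.645e-27           # alpha particle mass, kg

def thrust_per_area(activity_per_cm2, energy_mev, geom_factor=0.5):
    """Thrust density (N/cm^2) from surface activity (decays/s/cm^2) and alpha energy."""
    momentum = math.sqrt(2 * M_ALPHA * energy_mev * MEV_TO_J)   # kg*m/s per alpha
    return geom_factor * activity_per_cm2 * momentum

# Hypothetical example: 1e12 decays/s/cm^2 of a 5.3 MeV alpha emitter
print(f"{thrust_per_area(1e12, 5.3) * 1e6:.3f} uN/cm^2")   # falls within the 0.02-5 uN/cm^2 range quoted above
```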

Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster

Procedia PDF Downloads 208
9291 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training a network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population’s fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z representing the system’s current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
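
A minimal sketch of the corrector idea outlined above (centering, Kaiser-rule selection on the data covariance, whitening, clustering of the error set, and one separating hyperplane per cluster) is given below. It is our own simplified reading of the procedure, not the authors' implementation; the clustering and threshold choices are assumptions:

```python
# Simplified corrector: centre, Kaiser-rule PCA + whitening, cluster the errors,
# and test new points against one hyperplane per error cluster.
import numpy as np
from sklearn.cluster import KMeans

def fit_corrector(S, Y, n_clusters=2):
    """S: all measurements (n x d); Y: measurements of known errors (m x d)."""
    mean_S = S.mean(axis=0)
    Yc = Y - mean_S                                   # centre errors on the bulk data

    # Kaiser rule: keep components whose eigenvalue exceeds the mean eigenvalue
    cov = np.cov((S - mean_S).T)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > eigval.mean()
    W = eigvec[:, keep] / np.sqrt(eigval[keep])       # whitening projection

    Yw = Yc @ W
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Yw)

    # one hyperplane per error cluster: normal = cluster mean, threshold = half its squared norm
    planes = []
    for k in range(n_clusters):
        c = Yw[labels == k].mean(axis=0)
        planes.append((c, 0.5 * np.dot(c, c)))
    return mean_S, W, planes

def flag_error(x, mean_S, W, planes):
    """Report True if x falls on the 'error' side of any cluster hyperplane."""
    xw = (x - mean_S) @ W
    return any(np.dot(n, xw) > b for n, b in planes)
```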

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 100
9290 Controlling Differential Settlement of Large Reservoir through Soil Structure Interaction Approach

Authors: Madhav Khadilkar

Abstract:

Construction of a large standby reservoir was required to provide a secure water supply. Due to space constraints, the new reservoir had to be constructed at the location of an abandoned old open pond. Some investigations were carried out earlier to improve and re-commission the existing pond, but due to a lack of quantified risk of settlement from voids in the underlying limestone, shallow foundations were not found feasible. The reservoir rests on hard strata over about three-quarters of its plan area, while the remaining quarter rests on soil underlain by limestone with a considerably low subgrade modulus. Further investigations were carried out to ascertain the locations and extent of voids within the limestone. It was concluded that the risk due to limestone dissolution was acceptably low, and the site was found geotechnically feasible. The hazard posed by limestone dissolution was addressed through an integrated structural and geotechnical analysis and design approach. Finite element analysis was carried out to quantify the stresses and differential settlement due to various probable loads and soil-structure interaction. Walls behaving as cantilevers under operational loads were found to undergo in-plane bending and tensile forces due to soil-structure interaction. Sensitivity analysis for varying soil subgrade modulus was carried out to check the variation in the response of the structure and the magnitude of stresses developed. The base slab was additionally checked for loss of soil contact due to limestone pocket formation at random locations. The expansion and contraction joints were planned to receive minimal additional forces due to differential settlement. The reservoir was designed to sustain the actions corresponding to the allowable deformation limits per code, and geotechnical measures were proposed to achieve the soil parameters assumed in the structural analysis.

Keywords: differential settlement, limestone dissolution, reservoir, soil structure interaction

Procedia PDF Downloads 155
9289 Dyeing with Natural Dye from Pterocarpus indicus Extract Using Eco-Friendly Mordants

Authors: Ploysai Ohama, Nuttawadee Hanchengchai, Thiva Saksri

Abstract:

Natural dye extracted from Pterocarpus indicus was applied to cotton fabric and silk yarn by a dyeing process using different eco-friendly mordants. Analytical studies such as UV-VIS spectrophotometry and gravimetric analysis were performed on the extracts. The color of each dyed material was investigated in terms of the CIELAB (L*, a*, and b*) and K/S values. Cotton fabric dyed without mordants had a greenish-brown shade, while fabrics post-mordanted with selected eco-friendly mordants such as alum, lemon juice, and limewater resulted in a variety of brown and darker shades.
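
For reference, the K/S colour-strength value mentioned above is the Kubelka-Munk function of reflectance; a tiny sketch (the reflectance value is a placeholder, not a measurement from the study):

```python
# Kubelka-Munk colour strength from the reflectance of the dyed fabric at the
# wavelength of maximum absorption.
def kubelka_munk_ks(reflectance):
    """K/S = (1 - R)^2 / (2 R), with R expressed as a fraction (0 < R <= 1)."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

print(f"K/S at 20% reflectance: {kubelka_munk_ks(0.20):.2f}")  # -> 1.60
```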

Keywords: natural dyes, plant materials, dyeing, mordant

Procedia PDF Downloads 415
9288 A New Scheme for Chain Code Normalization in Arabic and Farsi Scripts

Authors: Reza Shakoori

Abstract:

This paper presents a structural correction of Arabic and Persian strokes using manipulation of their chain codes in order to improve the recognition rate and performance of Persian and Arabic handwritten word recognition systems. It collects pure and effective features to represent a character with one consolidated feature vector and reduces variations in order to decrease the number of training samples and increase the chance of successful classification. Our results also show how the proposed approaches can simplify classification, and consequently recognition, by reducing variations and possible noise in the chain code while keeping the orientation of characters and their backbone structures.
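
To make the underlying representation concrete, a minimal sketch of 8-directional Freeman chain coding with a simple start-point normalization is shown below; the toy contour and the normalization choice are illustrative assumptions, not the paper's exact scheme:

```python
# 8-directional Freeman chain code for a stroke contour, plus a start-point normalization.
# Directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (x right, y up).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Convert a sequence of 8-connected contour points into a Freeman chain code."""
    return [DIRECTIONS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def normalize_start(codes):
    """Pick the cyclic shift of the code that is lexicographically smallest."""
    return min(codes[i:] + codes[:i] for i in range(len(codes)))

stroke = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2)]   # toy contour
print(chain_code(stroke))                            # [0, 1, 2, 4]
print(normalize_start(chain_code(stroke)))
```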

Keywords: Arabic, chain code normalization, OCR systems, image processing

Procedia PDF Downloads 404
9287 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction

Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points based on users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer’s capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty, and ultimately drive sales.
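
A minimal sketch of the kind of pipeline described above (standardize the five user analytics, reduce dimensionality, then cluster users into groups) is shown below; the feature set, cluster count, and data are hypothetical, not taken from TheAnalyzer itself:

```python
# Standardization + dimensionality reduction + clustering on five user analytics.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical analytics: session length, pages/visit, cart adds, searches, purchases
X = rng.random((200, 5))

pipeline = make_pipeline(
    StandardScaler(),          # data standardization
    PCA(n_components=2),       # dimensionality reduction
    KMeans(n_clusters=5, n_init=10),
)
labels = pipeline.fit_predict(X)
print(np.bincount(labels))     # users per behavioural group
```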

Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling

Procedia PDF Downloads 74
9286 Computational Tool for Surface Electromyography Analysis; an Easy Way for Non-Engineers

Authors: Fabiano Araujo Soares, Sauro Emerick Salomoni, Joao Paulo Lima da Silva, Igor Luiz Moura, Adson Ferreira da Rocha

Abstract:

This paper presents a tool developed on the MATLAB platform. It was developed to simplify the analysis of surface electromyography signals (S-EMG) in a way accessible to users who are not familiar with signal processing procedures. The tool receives data through commands entered in window fields and generates results as graphs and Excel tables. The underlying math of each S-EMG estimator is presented, along with the setup window and result graphics. The tool was presented to four non-engineer users, and all of them managed to use it appropriately after a 5-minute instruction period.
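
For orientation, the amplitude and spectral estimators listed in the keywords (ARV, RMS, MNF, MDF) can be computed as in the following sketch; the tool itself is MATLAB-based, and the signal here is synthetic, not S-EMG data:

```python
# Common S-EMG estimators: average rectified value, RMS, mean and median frequency.
import numpy as np

def semg_estimators(x, fs):
    arv = np.mean(np.abs(x))                     # average rectified value
    rms = np.sqrt(np.mean(x ** 2))               # root mean square
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2            # simple periodogram
    mnf = np.sum(freqs * psd) / np.sum(psd)      # mean frequency
    cumulative = np.cumsum(psd)
    mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]  # median frequency
    return arv, rms, mnf, mdf

fs = 2000                                        # Hz, typical S-EMG sampling rate
t = np.arange(0, 1, 1 / fs)
x = np.random.default_rng(1).normal(size=t.size) * np.sin(2 * np.pi * 10 * t)
print(semg_estimators(x, fs))
```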

Keywords: S-EMG estimators, electromyography, surface electromyography, ARV, RMS, MDF, MNF, CV

Procedia PDF Downloads 559
9285 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections

Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz

Abstract:

In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly, or not at all, with further irradiation. These findings are already known from ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified, and a model for the description of the asymptotic hole shape is numerically implemented, tested, and clearly confirmed by comparison with experimental data. The model assumes a robust process, in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified as a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within a robust process regime, the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape results from a threshold for the absorbed laser fluence), it is demonstrated that, in the case of robust long pulse ablation, the asymptotic shape forms such that along the whole contour the absorbed heat flux density is equal to the intensity threshold. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops, or even smart devices. Resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI that allows intuitive usage. Individual parameters can be adjusted using sliders, while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: an operator can use the tool to adjust the process conveniently on a tablet, while a developer can execute the tool in the office in order to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation that allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on the investigation of the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
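
As an illustration of the threshold condition stated above, the sketch below integrates a single-absorption asymptotic profile for which the absorbed heat flux density along the contour equals the intensity threshold, I(r)·cos(γ) = I_th; multiple reflections (the focus of this paper) are deliberately neglected, and the beam and threshold values are placeholders:

```python
# Single-absorption asymptotic hole profile: the wall slope follows from
# I(r) / sqrt(1 + h'(r)^2) = I_th, integrated inward from the rim where I = I_th.
import numpy as np

def asymptotic_profile(intensity, i_th, r):
    """Depth h(r), with h = 0 at the rim where the beam intensity equals the threshold."""
    ratio = np.clip(intensity(r) / i_th, 1.0, None)
    slope = np.sqrt(ratio**2 - 1.0)                   # |dh/dr| inside the hole
    dr = r[1] - r[0]
    return np.flip(np.cumsum(np.flip(slope))) * dr    # integrate slope from the rim inward

w0, i0, i_th = 50e-6, 1e10, 2e9        # beam radius (m), peak intensity and threshold (W/m^2), assumed
gaussian = lambda r: i0 * np.exp(-2 * r**2 / w0**2)
r = np.linspace(0.0, 1.5 * w0, 300)
h = asymptotic_profile(gaussian, i_th, r)
print(f"asymptotic depth on axis: {h[0] * 1e6:.1f} um")
```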

Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process

Procedia PDF Downloads 213
9284 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma, and brain death; locating damaged areas of the brain after head injury, stroke, and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of an epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long, acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex, and exhaustive task. Because of this, over the years, several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. One of the differences between these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli have commonly been found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the morphological descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
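
For concreteness, the following sketch computes the derived input representations compared above (FFT spectrum, STFT spectrogram, wavelet coefficients, and two simple morphological descriptors) for one synthetic EEG window; it only illustrates the feature types, not the networks or any clinical data:

```python
# Derived input representations for a single EEG window (synthetic signal).
import numpy as np
from scipy import signal
import pywt

fs = 256                                      # Hz, a common EEG sampling rate
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

fft_spectrum = np.abs(np.fft.rfft(eeg))                        # FFT spectrum
freqs, times, stft_spec = signal.stft(eeg, fs=fs, nperseg=64)  # STFT spectrogram
wavelet_coeffs = pywt.wavedec(eeg, "db4", level=4)             # Wavelet Transform features

# two very simple morphological descriptors: peak-to-peak amplitude and mean absolute slope
descriptors = [eeg.max() - eeg.min(), np.mean(np.abs(np.diff(eeg))) * fs]
print(len(fft_spectrum), stft_spec.shape, [len(c) for c in wavelet_coeffs], descriptors)
```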

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 528
9283 Inter-Generational Benefits of Improving Access to Justice for Women: Evidence from Peru

Authors: Iva Trako, Maris Micaela Sviatschi, Guadalupe Kavanaugh

Abstract:

Domestic violence is a major concern in developing countries, with important social, economic, and health consequences. However, institutions do not usually address the problems facing women or ethnic and religious minorities. For example, the police do very little to stop domestic violence in rural areas of developing countries. This paper exploits the introduction of women’s justice centers (WJCs) in Peru to provide causal estimates of the effects of improving access to justice for women and children. These centers offer a new integrated public service model for women by including medical, psychological, and legal support in cases of violence against women. Our empirical approach uses a difference-in-differences estimation exploiting variation over time and space in the opening of WJCs, together with province-by-year fixed effects. Exploiting administrative data from health providers and district attorney offices, we find that after the opening of these centers there are important improvements in women's welfare: a large reduction in femicides and in female hospitalizations for assault. Moreover, using geo-coded household surveys, we find evidence that the existence of these services reduces domestic violence, improves women's health, increases women's threat points, and therefore leads to household decisions that are more aligned with their interests. Using administrative data on the universe of schools, we find large gains in human capital for their children: affected children are more likely to enroll, attend school, and have better grades in national exams, instead of working for the family. In sum, the evidence in this paper shows that providing access to justice for women can be a powerful tool to reduce domestic violence and increase the education of children, suggesting a positive inter-generational benefit.
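
A hedged sketch of the difference-in-differences specification described above, with province-by-year fixed effects, is given below; the data file, variable names, and clustering choice are hypothetical, not the authors' actual data or code:

```python
# Difference-in-differences regression with district and province-by-year fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical panel: one row per district and year, with an outcome (e.g. hospitalizations
# for assault), an indicator for a WJC being open nearby, and identifiers
df = pd.read_csv("wjc_panel.csv")

model = smf.ols(
    "outcome ~ wjc_open + C(district) + C(province):C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
print(model.params["wjc_open"])   # the difference-in-differences estimate
```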

Keywords: access to justice, domestic violence, education, household bargaining

Procedia PDF Downloads 184
9282 Examining Patterns in Ethnoracial Diversity in Los Angeles County Neighborhoods, 2016, Using Geographic Information System Analysis and Entropy Measure of Diversity

Authors: Joseph F. Cabrera, Rachael Dela Cruz

Abstract:

This study specifically examines patterns that define ethnoracially diverse neighborhoods. Ethnoracial diversity is important as it facilitates cross-racial interactions within neighborhoods, which have been theorized to be associated with such outcomes as intergroup harmony, the reduction of racial and ethnic prejudice and discrimination, and increases in racial tolerance. Los Angeles (LA) is an ideal location to study ethnoracial spatial patterns as it is one of the most ethnoracially diverse cities in the world. A large influx of Latinos, as well as Asians, has contributed to LA’s urban landscape becoming increasingly diverse over several decades. Our dataset contains all census tracts in Los Angeles County in 2016 and incorporates Census and ACS demographic and spatial data. We quantify ethnoracial diversity using a derivative of Simpson’s Diversity Index and utilize this measure to test previous literature that suggests Latinos are one of the key drivers of changing ethnoracial spatial patterns in Los Angeles. Preliminary results suggest that there has been an overall increase in ethnoracial diversity in Los Angeles neighborhoods over the past sixteen years. Patterns associated with this trend include decreases in predominantly white and black neighborhoods, increases in predominantly Latino and Asian neighborhoods, and a general decrease in the white populations of the most diverse neighborhoods. A similar pattern is seen in neighborhoods with large Latino increases: a decrease in the white population, but with an increase in the Asian and black populations. We also found support for previous research suggesting that increases in Latino and Asian populations act as a buffer, allowing for black population increases without a sizeable decrease in the white population. Future research is needed to understand the underlying causes of many of the patterns and trends highlighted in this study.
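
For reference, a Simpson-type diversity index of the kind used above can be computed as follows; the tract composition shown is made up for illustration:

```python
# Simpson-type diversity: probability that two randomly chosen residents belong
# to different ethnoracial groups (1 minus the sum of squared group shares).
def simpson_diversity(counts):
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# Hypothetical tract counts: white, Latino, Asian, Black, other
print(round(simpson_diversity([1200, 2500, 900, 400, 100]), 3))
```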

Keywords: race, race and interaction, racial harmony, social interaction

Procedia PDF Downloads 132
9281 The Views of Health Care Professionals outside of the General Practice Setting on the Provision of Oral Contraception in Comparison to Long-Acting Reversible Contraception

Authors: Carri Welsby, Jessie Gunson, Pen Roe

Abstract:

Currently, there is limited research examining health care professionals' (HCPs) views on long-acting reversible contraception (LARC) advice and prescription, particularly outside of the general practice (GP) setting. The aim of this study is to systematically review existing evidence on the barriers and enablers of oral contraception (OC) in comparison to LARC, as perceived by HCPs in non-GP settings. Five electronic databases were searched in April 2018 using terms related to LARC, OC, HCPs, and views, but not terms related to GPs. Studies were excluded if they concerned emergency oral contraception, male contraceptives, contraceptive use in conjunction with a health condition(s), developing countries, or GPs and GP settings, or if they were non-English or published before 2013. A total of six studies were included in the systematic review. Four key areas emerged, under which themes were categorised, including (1) understanding HCP attitudes and counselling practices towards contraceptive methods; (2) assessment of HCP attitudes and beliefs about contraceptive methods; (3) misconceptions and concerns towards contraceptive methods; and (4) influences on views, attitudes, and beliefs about contraceptive methods. Limited education and training of HCPs exists around LARC provision, particularly compared to OC. The most common misconception inhibiting HCPs' delivery of contraceptive information to women was the belief that LARC is inappropriate for nulliparous women. In turn, by not providing correct information on the variety of contraceptive methods, HCP counselling practices were disempowering for women and restricted them from accessing reproductive justice. Educating HCPs to provide accurate and factual information to women on all contraception is vital to encourage a woman-centered approach during contraceptive counselling and to promote informed choices by women.

Keywords: advice, contraceptives, health care professionals, long acting reversible contraception, oral contraception, reproductive justice

Procedia PDF Downloads 160
9280 The Impact of Artificial Intelligence on Medicine Production

Authors: Yasser Ahmed Mahmoud Ali Helal

Abstract:

The use of CAD (Computer Aided Design) technology is ubiquitous in the architecture, engineering, and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture schools in Nigeria as an important part of the training module. This article examines the ethical issues involved in implementing CAD content in the architectural education curriculum. Using existing literature, this study begins with the benefits of integrating CAD into architectural education and the responsibilities of different stakeholders in the implementation process. It also examines issues related to the negative use of information technology and the perceived negative impact of CAD use on design creativity. Using a survey method, data from the architecture department of a university were collected to serve as a case study on how the issues raised are being addressed. The article draws conclusions on what ensures successful ethical implementation. Millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is a treatment option for patients with hepatitis C, but these treatments have side effects. Our research focused on developing an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific for the hepatitis C virus (HCV). Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational techniques to propose a specific antiviral drug for the protein domains found in the hepatitis C virus. This study used homology modeling and ab initio modeling to generate the 3D structure of the proteins and then identify pockets in the proteins. Acceptable ligands for the pockets have been developed using the de novo drug design method. Pocket geometry is taken into account when designing ligands. Among the various ligands generated, a new ligand specific for each of the HCV protein domains has been proposed.

Keywords: drug design, anti-viral drug, in-silico drug design, hepatitis C virus (HCV), CAD (Computer Aided Design), CAD education, education improvement, small-size contractor automatic pharmacy, PLC, control system, management system, communication

Procedia PDF Downloads 83
9279 Multidisciplinary Approach for a Tsunami Reconstruction Plan in Coquimbo, Chile

Authors: Ileen Van den Berg, Reinier J. Daals, Chris E. M. Heuberger, Sven P. Hildering, Bob E. Van Maris, Carla M. Smulders, Rafael Aránguiz

Abstract:

Chile is located along the subduction zone of the Nazca plate beneath the South American plate, where large earthquakes and tsunamis have taken place throughout history. The last significant earthquake (Mw 8.2) occurred in September 2015 and generated a destructive tsunami, which mainly affected the city of Coquimbo (71.33°W, 29.96°S). The inundation area consisted of a beach, a damaged seawall, a damaged railway, a wetland, and an old neighborhood; therefore, local authorities started a reconstruction process immediately after the event. Moreover, a seismic gap has been identified in the same area, and another large event could take place in the near future. The present work proposes an integrated tsunami reconstruction plan for the city of Coquimbo that considers several variables, such as safety, nature & recreation, neighborhood welfare, visual obstruction, infrastructure, construction process, and durability & maintenance. Possible future tsunami scenarios are simulated by means of the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model with 5 nested grids and a finest grid resolution of ~10 m. Based on the scores from a multi-criteria analysis, the costs of the alternatives, and a preference for a multifunctional solution, the alternative that includes an elevated coastal road with floodgates to reduce tsunami overtopping and control the return flow of a tsunami was selected as the best solution. It was also observed that the wetlands are significantly restored to their former configuration; moreover, the dynamic behavior of the wetlands is stimulated. The numerical simulation showed that the new coastal protection decreases damage and the probability of loss of life by delaying the tsunami arrival time. In addition, new evacuation routes and a smaller inundation zone in the city increase safety for the area.

Keywords: tsunami, Coquimbo, Chile, reconstruction, numerical simulation

Procedia PDF Downloads 241
9278 Experimental Study of Energy Absorption Efficiency (EAE) of Warp-Knitted Spacer Fabric Reinforced Foam (WKSFRF) Under Low-Velocity Impact

Authors: Amirhossein Dodankeh, Hadi Dabiryan, Saeed Hamze

Abstract:

Using fabrics to reinforce composites leads to considerably improved mechanical properties, including resistance to impact loads and the energy absorption of composites. Warp-knitted spacer fabrics (WKSF) are fabrics consisting of two layers of warp-knitted fabric connected by pile yarns. These connections create a space between the layers filled by the pile yarns and give the fabric a three-dimensional shape. Today, because of their unique properties, spacer fabrics are widely used in the transportation, construction, and sports industries. Polyurethane (PU) foams are commonly used as energy absorbers, but WKSF has much better moisture transfer and compressive properties and lower heat resistance than PU foam. It seems that the use of warp-knitted spacer fabric reinforced PU foam (WKSFRF) can lead to the production and use of a composite with better energy absorption than the foam alone, enhanced mold formation, and improved mechanical properties. In this paper, the energy absorption efficiency (EAE) of WKSFRF under low-velocity impact is investigated experimentally. The contribution of each structural parameter of the WKSF to the absorption of impact energy has also been investigated. For this purpose, WKSF with different structures, such as two different thicknesses, small and large mesh sizes, and meshes positioned facing each other or not facing each other, were produced. Then, 6 types of composite samples with different structural parameters were fabricated. The physical properties of the samples, such as weight per unit area and fiber volume fraction, were measured for 3 samples of each type of composite. Low-velocity impact with an initial energy of 5 J was carried out on 3 samples of each type of composite. The output of the low-velocity impact test is an acceleration-time (A-T) graph with many deviating points; in order to achieve appropriate results, these points were removed using the FILTFILT function of MATLAB R2018a. Using Newtonian laws of physics, a force-displacement (F-D) graph was drawn from the A-T graph. The amount of energy absorbed is equal to the area under the F-D curve. The maximum energy absorption is 2.858 J, which corresponds to the samples reinforced with fabric with large mesh, high thickness, and meshes not facing each other. An index called energy absorption efficiency was defined as the absorbed energy of a composite divided by its fiber volume fraction. Using this index, the best EAE among the samples is 21.6, which occurs in the sample with large mesh, high thickness, and meshes facing each other. The EAE of this sample is also 15.6% better than the average EAE of the other composite samples. Generally, the energy absorption on average increased by 21.2% with increasing thickness, 9.5% with increasing mesh size from small to large, and 47.3% by changing the position of the meshes from facing to non-facing.
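
The post-processing chain described above (zero-phase filtering of the acceleration-time record, conversion to a force-displacement curve, and integration of its area) can be sketched as follows; the impactor mass, impact velocity, sampling rate, filter settings, and data file are assumptions, and SciPy's filtfilt stands in for MATLAB's FILTFILT:

```python
# Smooth the A-T record, derive the F-D curve, and take its area as absorbed energy.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10_000                                    # sampling rate of the A-T record, Hz (assumed)
mass = 5.0                                     # impactor mass, kg (assumed)
v0 = 1.4                                       # impact velocity, m/s, consistent with a 5 J impact (assumed)
t, accel = np.loadtxt("impact_at.csv", delimiter=",", unpack=True)   # time (s), deceleration (m/s^2)

b, a = butter(4, 500 / (fs / 2))               # 4th-order low-pass at 500 Hz
accel_smooth = filtfilt(b, a, accel)           # zero-phase filtering, the SciPy analogue of FILTFILT

force = mass * accel_smooth
velocity = v0 - np.cumsum(accel_smooth) / fs   # crude time integration of the deceleration
displacement = np.cumsum(velocity) / fs

absorbed_energy = np.trapz(force, displacement)   # area under the F-D curve, in joules
print(f"absorbed energy ~ {absorbed_energy:.3f} J")
```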

Keywords: composites, energy absorption efficiency, foam, geometrical parameters, low-velocity impact, warp-knitted spacer fabric

Procedia PDF Downloads 170
9277 Clinical Efficacy of Nivolumab and Ipilimumab Combination Therapy for the Treatment of Advanced Melanoma: A Systematic Review and Meta-Analysis of Clinical Trials

Authors: Zhipeng Yan, Janice Wing-Tung Kwong, Ching-Lung Lai

Abstract:

Background: Advanced melanoma accounts for the majority of skin cancer deaths due to its poor prognosis. Nivolumab and ipilimumab are monoclonal antibodies targeting programmed cell death protein 1 (PD-1) and cytotoxic T-lymphocyte antigen 4 (CTLA-4), respectively. Nivolumab and ipilimumab combination therapy has been proven to be effective for advanced melanoma. This systematic review and meta-analysis aims to evaluate its clinical efficacy and adverse events. Method: A systematic search was done on databases (PubMed, Embase, Medline, Cochrane) on 21 June 2020. Search keywords were nivolumab, ipilimumab, melanoma, and randomised controlled trials. Clinical trials fulfilling the inclusion criteria were selected to evaluate the efficacy of combination therapy in terms of prolongation of progression-free survival (PFS), overall survival (OS), and objective response rate (ORR). The odds ratios and distributions of grade 3 or above adverse events were documented. Subgroup analysis was performed based on PD-L1 expression status and BRAF mutation status. Results: Compared with nivolumab monotherapy, the hazard ratios of PFS and OS and the odds ratio of ORR for combination therapy were 0.64 (95% CI, 0.48-0.85; p=0.002), 0.84 (95% CI, 0.74-0.95; p=0.007), and 1.76 (95% CI, 1.51-2.06; p < 0.001), respectively. Compared with ipilimumab monotherapy, the hazard ratios of PFS and OS and the odds ratio of ORR were 0.46 (95% CI, 0.37-0.57; p < 0.001), 0.54 (95% CI, 0.48-0.61; p < 0.001), and 6.18 (95% CI, 5.19-7.36; p < 0.001), respectively. In combination therapy, the odds ratios of grade 3 or above adverse events were 4.71 (95% CI, 3.57-6.22; p < 0.001) compared with nivolumab monotherapy and 3.44 (95% CI, 2.49-4.74; p < 0.001) compared with ipilimumab monotherapy. A high PD-L1 expression level and BRAF mutation were associated with better clinical outcomes in patients receiving combination therapy. Conclusion: Combination therapy is effective for the treatment of advanced melanoma. Adverse events were common but manageable. Better clinical outcomes were observed in patients with high PD-L1 expression levels and positive BRAF mutation status.

Keywords: nivolumab, ipilimumab, advanced melanoma, systematic review, meta-analysis

Procedia PDF Downloads 136
9276 Screening of Antagonistic/Synergistic Effect between Lactic Acid Bacteria (LAB) and Yeast Strains Isolated from Kefir

Authors: Mihriban Korukluoglu, Goksen Arik, Cagla Erdogan, Selen Kocakoglu

Abstract:

Kefir is a traditional fermented refreshing beverage known for its valuable and beneficial properties for human health. Mainly yeast species, lactic acid bacteria (LAB) strains, and fewer acetic acid bacteria strains live together in a natural matrix named the “kefir grain”, which is formed from various proteins and polysaccharides. Different microbial species live together in the slimy kefir grain, and it has been thought that a synergistic effect could take place between microorganisms belonging to different genera and species. In this research, yeast and LAB were isolated from kefir samples obtained from the Uludag University Food Engineering Department. The cell morphology of the isolates was screened by microscopic examination. Gram reactions of the bacterial isolates were determined by the Gram staining method, and catalase activity was also examined. After observing the microscopic/morphological, physical, and enzymatic properties of all isolates, they were divided into LAB and yeast groups according to their physicochemical responses to the applied examinations. As part of this research, the antagonistic/synergistic efficacy of the identified five LAB and five yeast strains against each other was determined individually by the disk diffusion method. The antagonistic or synergistic effect is one of the most important properties in a co-culture system in which different microorganisms live together. The synergistic effect should be promoted, whereas the antagonistic effect should be prevented, to provide an effective culture for the fermentation of kefir. The aim of this study was to determine the microbial interactions between the identified yeast and LAB strains, and whether their effect is antagonistic or synergistic. Thus, if there is a strain that inhibits or retards the growth of other strains found in the kefir microflora, this circumstance shows the presence of an antagonistic effect in the medium. Such a negative influence should be prevented, whereas microorganisms that have a synergistic effect on each other should be promoted by combining them in the kefir grain. Standardisation is the most desired property for industrial production. Each microorganism found in the microbial flora of a kefir grain should be identified individually. The members of the microbial community found in the glue-like kefir grain may be redesigned as a starter culture, taking into account the efficacy of each microorganism relative to the others in kefir processing. The main aim of this research was to shed light on more effective production of kefir grain and to contribute to the standardisation of kefir processing in the food industry.

Keywords: antagonistic effect, kefir, lactic acid bacteria (LAB), synergistic, yeast

Procedia PDF Downloads 280
9275 Leveraging Unannotated Data to Improve Question Answering for French Contract Analysis

Authors: Touila Ahmed, Elie Louis, Hamza Gharbi

Abstract:

State-of-the-art question answering models have recently shown impressive performance, especially in a zero-shot setting. This approach is particularly useful when confronted with a highly diverse domain such as the legal field, in which it is increasingly difficult to build a dataset covering every notion and concept. In this work, we propose a flexible generative question answering approach to contract analysis, as well as a weakly supervised procedure to leverage unannotated data and boost our models’ performance in general, and their zero-shot performance in particular.
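
As a small illustration of the question answering interface (extractive rather than the generative approach proposed here, using a publicly available French QA model as a stand-in; the contract clause is invented):

```python
# Question answering over a French contract clause with the Hugging Face pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="etalab-ia/camembert-base-squadFR-fquad-piaf")

clause = ("Le présent contrat est conclu pour une durée de douze mois "
          "à compter du 1er janvier 2023.")
result = qa(question="Quelle est la durée du contrat ?", context=clause)
print(result["answer"], result["score"])
```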

Keywords: question answering, contract analysis, zero-shot, natural language processing, generative models, self-supervision

Procedia PDF Downloads 194
9274 Deep Brain Stimulation and Motor Cortex Stimulation for Post-Stroke Pain: A Systematic Review and Meta-Analysis

Authors: Siddarth Kannan

Abstract:

Objectives: Deep Brain Stimulation (DBS) and Motor Cortex Stimulation (MCS) are innovative interventions to treat various neuropathic pain disorders such as post-stroke pain. While each treatment has a varying degree of success in managing pain, comparative analysis has not yet been performed, and the success rates of these techniques using validated, objective pain scores have not been synthesised. The aim of this study was to compare the pain relief offered by MCS and DBS in patients with post-stroke pain and to assess whether either of these procedures offers better results. Methods: A systematic review and meta-analysis were conducted in accordance with PRISMA guidelines (PROSPERO ID CRD42021277542). Three databases were searched, and articles published from 2000 to June 2023 were included (last search date 25 June 2023). Meta-analysis was performed using random effects models. We evaluated the performance of DBS or MCS by assessing studies that reported pain relief using the Visual Analogue Scale (VAS). Data analysis of descriptive statistics was performed using SPSS (Version 27; IBM; Armonk, NY, USA). R statistics (RStudio, Version 4.0.1) was used to perform the meta-analysis. Results: Of the 478 articles identified, 27 were included in the analysis (232 patients: 117 DBS and 115 MCS). The pooled proportion of patients who improved after DBS was 0.68 (95% CI, 0.57-0.77, I2=36%). The pooled proportion of patients who improved after MCS was 0.72 (95% CI, 0.62-0.80, I2=59%). A further sensitivity analysis was done including only studies with a minimum of 5 patients in order to assess whether there was any impact on the overall results. Nine studies each for DBS and MCS met these criteria. There was no significant difference in the results. Conclusions: The use of surgical interventions such as DBS and MCS is an emerging field for the treatment of post-stroke pain, with limited studies exploring and comparing these two techniques. While our study shows that MCS might be a slightly better treatment option, further research is needed to determine the appropriate surgical intervention for post-stroke pain.

Keywords: post-stroke pain, deep brain stimulation, motor cortex stimulation, pain relief

Procedia PDF Downloads 139
9273 Non Performing Asset Variations across Indian Commercial Banks: Some Findings

Authors: Sanskriti Singh, Ankit Tomar

Abstract:

Banks are an instrument of a country's growth. Banks mobilize the savings of the public in the form of deposits and channel them as advances for the various activities required for the development of society at large. An advance that remains unpaid for a certain period is called a Non Performing Asset (NPA) of the bank. The study makes an attempt to bring out the magnitude of NPAs and their impact on profits and advances. An attempt is also made to bring out the challenges NPAs pose to banks and to offer suggestions for overcoming and managing NPAs effectively.

Keywords: India, NPAs, private banks, public banks

Procedia PDF Downloads 283
9272 The Correspondence between Self-regulated Learning, Learning Efficiency and Frequency of ICT Use

Authors: Maria David, Tunde A. Tasko, Katalin Hejja-Nagy, Laszlo Dorner

Abstract:

The authors have been engaged in research on learning since 1998. Recently, the focus of our interest is how the prevalent use of information and communication technology (ICT) influences students' learning abilities, skills of self-regulated learning, and learning efficiency. Nowadays, there are three dominant theories about the psychological effects of ICT use: according to social optimists, modern ICT devices have a positive effect on thinking; according to social pessimists, this effect is rather negative; and, in the view of biological optimists, the change is obvious, but these changes can fit into mankind's evolved neurological system, as writing did long ago. The mentality of 'digital natives' differs from that of older people. They process information coming from the outside world in another way, and different experiences result in different cerebral conformations. In this regard, researchers report both positive and negative effects of ICT use. According to several studies, it has a positive effect on cognitive skills, intelligence, school efficiency, the development of self-regulated learning, and self-esteem regarding learning. It has also been shown that computers improve visual intelligence skills such as spatial orientation, iconic skills, and visual attention. Among the negative effects of frequent ICT use, researchers mention the decrease of critical thinking, as a permanent flow of information does not give scope for deeper cognitive processing. The aims of our present study were to uncover the developmental characteristics of self-regulated learning in different age groups and to study the correlations of learning efficiency, the level of self-regulated learning, and the frequency of computer use. Our subjects (N=1600) were primary and secondary school students and university students. We studied four age groups (ages 10, 14, 18, and 22), with 400 subjects in each. We used the following methods: the research team developed a questionnaire for measuring the level of self-regulated learning and a questionnaire for measuring ICT use, and we used documentary analysis to gain information about grade point average (GPA) and the results of competence measures. Finally, we used computer tasks to measure cognitive abilities. The data are currently under analysis, but according to our preliminary results, frequent use of computers results in shorter response times in every age group. Our results show that an ordinary extent of ICT use tends to increase reading competence and has a positive effect on students' abilities, though it showed no relationship with school marks (GPA). As time passes, GPA worsens as the learning material becomes more and more difficult. This phenomenon draws attention to the fact that students are unable to switch from guided to independent learning, so it is important to consciously develop the skills of self-regulated learning.

Keywords: digital natives, ICT, learning efficiency, reading competence, self-regulated learning

Procedia PDF Downloads 361
9271 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study

Authors: Insiya Bhalloo

Abstract:

It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as via enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have a comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study seeks to investigate whether mere phonological experience (as indicated by the Arabic readers’ experience with Arabic phonology and the sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants’ non-native status. Both native speakers of Arabic and non-native speakers of Arabic, i.e., those individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n=40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words commonly found in Modern Standard Arabic (MSA) but either non-existent in, or prevalent at a significantly lower frequency within, the Quran. During the test phase, participants will then be presented with both familiar Arabic words (n = 20; i.e., those presented during the exposure phase) and novel Arabic words (n = 20; i.e., words not presented during the exposure phase). Half of these presented words will be common Quranic Arabic words and the other half will be common MSA words that are not Quranic words. Moreover, half of the Quranic Arabic and MSA words presented will be nouns, while half will be verbs, thereby eliminating word-processing issues affected by lexical category. Participants will then determine whether they saw each word during the exposure phase. This study seeks to investigate whether long-term phonological memory, such as via childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we seek to compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researchers’ hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that were not presented during the exposure phase. Moreover, it is anticipated that the non-native Arabic readers will also report more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby causing false phonological facilitatory effects.

Keywords: Modern Standard Arabic, phonological facilitation, phonological memory, Quranic Arabic, word recognition

Procedia PDF Downloads 358
9270 Enhancement of Critical Current Density of Liquid Infiltration Processed Y-Ba-Cu-O Bulk Superconductors Used for Flywheel Energy Storage System

Authors: Asif Mahmood, Yousef Alzeghayer

Abstract:

The size effects of a precursor Y2BaCuO5 (Y211) powder on the microstructure and critical current density (Jc) of liquid infiltration growth (LIG)-processed YBa2Cu3O7-y (Y123) bulk superconductors were investigated in terms of milling time (t). YBCO bulk samples with high Jc values were selected for the flywheel energy storage system. Y211 powders were attrition-milled for 0-10 h in 2 h increments at a fixed rotation speed of 400 RPM. Y211 pre-forms were made by pelletizing the milled Y211 powders followed by sintering, after which an LIG process with top seeding was applied to the Y211/Ba3Cu5O8 (Y035) pre-forms. Spherical pores were observed in all LIG-processed Y123 samples, and the pore density gradually decreased as t increased from 0 h to 8 h. In addition to the reduced pore density, the Y211 particle size in the final Y123 products also decreased with increasing t. As t increased further to 10 h, unexpected Y211 coarsening and the evolution of large pores were observed. The magnetic susceptibility-temperature curves showed that the onset superconducting transition temperature (Tc,onset) of all samples was the same (91.5 K), but the transition width became greater as t increased. The Jc of the Y123 bulk superconductors fabricated in this study was observed to correlate well with the t of the Y211 precursor powder. The maximum Jc of 1.0×105 A cm-2 (at 77 K, 0 T) was achieved at t = 8 h, which is attributed to the reduction in pore density and Y211 particle size. The prolonged milling time of t = 10 h decreased the Jc of the LIG-processed Y123 superconductor owing to the evolution of large pores and exaggerated Y211 growth. The YBCO bulk samples with high Jc (those prepared using powders milled for 8 h) have been used for energy storage in the flywheel energy storage system.

Keywords: critical current, bulk superconductor, liquid infiltration, bioinformatics

Procedia PDF Downloads 212
9269 Coronary Artery Calcium Score and Statin Treatment Effect on Myocardial Infarction and Major Adverse Cardiovascular Event of Atherosclerotic Cardiovascular Disease: A Systematic Review and Meta-Analysis

Authors: Yusra Pintaningrum, Ilma Fahira Basyir, Sony Hilal Wicaksono, Vito A. Damay

Abstract:

Background: Coronary artery calcium (CAC) scores play an important role in improving prognostic accuracy and can be used selectively to guide the allocation of statin therapy for atherosclerotic cardiovascular disease; they are potentially associated with the occurrence of MACE (major adverse cardiovascular events) and MI (myocardial infarction). Objective: This systematic review and meta-analysis aims to analyze the findings of studies on the CAC score and the effect of statin treatment on MI and MACE risk. Methods: A search for published scientific articles following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method was conducted on the PubMed, Cochrane Library, and Medline databases for articles published in the last 20 years on “coronary artery calcium” AND “statin” AND “cardiovascular disease”. A systematic review and meta-analysis using RevMan version 5.4 were then performed on the included articles. Results: Based on 11 included studies with a total of 1055 participants, the meta-analysis found that individuals with a CAC score > 0 had a higher risk of MI (RR = 9.48; 95% CI: 6.22-14.45) and of MACE (RR = 3.48; 95% CI: 2.98-4.05) than individuals with a CAC score of 0. Statin compared against non-statin treatment showed no statistically significant overall effect on the risk of MI (P = 0.81) or MACE (P = 0.89) in individuals with an elevated CAC score of 1-100 (P = 0.65) or > 100 (P = 0.11). Conclusions: This study found that individuals with an elevated CAC score have a higher risk of MI and MACE than individuals with a non-elevated CAC score. There is no significant effect of statin compared against non-statin treatment in reducing MI and MACE in individuals with an elevated CAC score of 1-100 or > 100.
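
As an illustration of the pooling step behind risk ratios like those reported above, the following is a minimal sketch of an inverse-variance fixed-effect meta-analysis of log risk ratios. The per-study numbers are placeholders, not data from the included studies; the actual analysis was performed in RevMan 5.4.

```python
# Hypothetical sketch: inverse-variance fixed-effect pooling of risk ratios.
# The study values below are placeholders, not the data analyzed in RevMan.
import math

# (risk ratio, lower 95% CI, upper 95% CI) for each hypothetical study
studies = [(8.1, 4.0, 16.4), (11.2, 5.5, 22.8), (9.0, 4.7, 17.2)]

weights, weighted_log_rr = [], []
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI width
    w = 1.0 / se**2                                  # inverse-variance weight
    weights.append(w)
    weighted_log_rr.append(w * log_rr)

pooled_log_rr = sum(weighted_log_rr) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"pooled RR = {math.exp(pooled_log_rr):.2f} "
      f"(95% CI: {math.exp(pooled_log_rr - 1.96 * pooled_se):.2f}"
      f"-{math.exp(pooled_log_rr + 1.96 * pooled_se):.2f})")
```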

Keywords: coronary artery calcium, statin, cardiovascular disease, myocardial infarction, MACE

Procedia PDF Downloads 101
9268 Electronic Mentoring: How Can It Be Used with Teachers?

Authors: Roberta Gentry

Abstract:

Electronic mentoring is defined as a relationship between a mentor and a mentee using computer-mediated communication (CMC) that is intended to develop and improve the mentee’s skills, confidence, and cultural understanding. This session will increase knowledge about electronic mentoring, its uses, and its outcomes. The research behind electronic mentoring and descriptions of existing programs will also be shared.

Keywords: electronic mentoring, mentoring, beginning special educators, education

Procedia PDF Downloads 253
9267 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed in a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) it is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents; b) it provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work; c) it provides a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
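
As an illustration of performance measures and a statistically sound comparison of the kind discussed above, the following is a minimal sketch that scores two extractors with F1 against gold annotations and compares them with a Wilcoxon signed-rank test across documents. The data structures, placeholder results, and choice of test are illustrative assumptions, not the evaluation method proposed in the paper.

```python
# Hypothetical sketch: per-document F1 scores for two extractors plus a paired test.
# Extraction results are modeled as sets of (attribute, value) pairs per document.
from scipy.stats import wilcoxon

def f1(extracted: set, gold: set) -> float:
    if not extracted or not gold:
        return 0.0
    tp = len(extracted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(extracted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Placeholder per-document gold annotations and results for two extractors.
gold_docs = [{("title", "A"), ("price", "10")},
             {("title", "B"), ("price", "20")},
             {("title", "C"), ("price", "30")}]
extractor_a = [{("title", "A"), ("price", "10")}, {("title", "B")},
               {("title", "C"), ("price", "30")}]
extractor_b = [{("title", "A")}, {("title", "B"), ("price", "99")},
               {("price", "30")}]

scores_a = [f1(e, g) for e, g in zip(extractor_a, gold_docs)]
scores_b = [f1(e, g) for e, g in zip(extractor_b, gold_docs)]

# Wilcoxon signed-rank test: a non-parametric paired comparison across documents.
stat, p = wilcoxon(scores_a, scores_b)
print(f"mean F1 A={sum(scores_a)/len(scores_a):.2f}, "
      f"B={sum(scores_b)/len(scores_b):.2f}, Wilcoxon p={p:.3f}")
```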

Keywords: web information extractors, information extraction evaluation method, Google Scholar, web

Procedia PDF Downloads 248
9266 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis

Authors: Alexander A. Tokmakov

Abstract:

Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled, uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the protein synthesis systems that are currently used, and the factors determining expression success are poorly understood. At present, a vast volume of data has accumulated in cell-free expression databases, which makes comprehensive bioinformatics analysis and the identification of multiple features associated with successful cell-free expression possible. Here, we describe an approach aimed at identifying multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation, and we highlight the major correlations obtained using this approach. The developed method includes categorical assessment of the protein expression data, calculation and prediction of multiple properties of the expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of the statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between calculated and predicted features of protein sequences and their amenability to cell-free expression. It was found that some of the features, such as protein pI, hydrophobicity, and the presence of signal sequences, are mostly related to protein solubility, whereas others, such as protein length, the number of disulfide bonds, and the content of secondary structure, affect mainly the expression propensity. We also demonstrated that the amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modification. The correlations revealed in this study provide important insights into protein folding and the rationalization of protein production. The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.
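
To make the feature-correlation step concrete, the following is a minimal sketch that computes a few sequence properties of the kind mentioned above (length, pI, hydrophobicity, secondary-structure content) with Biopython and tests whether they differ between successfully and unsuccessfully expressed proteins. The placeholder sequences, the feature set, and the Mann-Whitney test are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: sequence features vs. cell-free expression success.
# The sequences, labels, feature set, and statistical test are placeholders for illustration.
from statistics import median

from Bio.SeqUtils.ProtParam import ProteinAnalysis
from scipy.stats import mannwhitneyu

# (amino acid sequence, expressed successfully?) -- placeholder records
records = [
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", True),
    ("MLCCMRRTKQVEKNDEDQKIEQDGIKPEDKAHK", False),
    ("MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERS", True),
    ("MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFT", False),
]

def features(seq: str) -> dict:
    pa = ProteinAnalysis(seq)
    helix, turn, sheet = pa.secondary_structure_fraction()
    return {
        "length": len(seq),
        "pI": pa.isoelectric_point(),
        "gravy": pa.gravy(),  # mean hydrophobicity (GRAVY index)
        "helix_fraction": helix,
    }

table = [(features(seq), ok) for seq, ok in records]
for name in ["length", "pI", "gravy", "helix_fraction"]:
    expressed = [f[name] for f, ok in table if ok]
    failed = [f[name] for f, ok in table if not ok]
    stat, p = mannwhitneyu(expressed, failed)
    print(f"{name}: expressed median={median(expressed):.2f}, "
          f"failed median={median(failed):.2f}, p={p:.3f}")
```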

Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins

Procedia PDF Downloads 419
9265 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of the stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first determines the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aiming to achieve maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard min-max criteria, in which the workload maximum is calculated across all workstations in the center and the outer minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
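
To illustrate the first-echelon formulation, the following is a minimal sketch of a first-fit-decreasing heuristic that packs product workloads into the smallest number of workstations subject to a per-station (picker) capacity. The workloads, capacity, and the specific heuristic are illustrative assumptions; the paper's own algorithms handle further constraints and the second-echelon assignment.

```python
# Hypothetical sketch: first-fit-decreasing heuristic for the first echelon,
# i.e., estimating how many capacity-constrained workstations are needed.
# Workloads and capacity are placeholders, not data from a real logistic center.

def first_fit_decreasing(workloads, capacity):
    """Pack workloads (e.g., picks/hour per product) into stations of fixed capacity."""
    stations = []  # each station is a list of assigned workloads
    for load in sorted(workloads, reverse=True):
        for station in stations:
            if sum(station) + load <= capacity:
                station.append(load)
                break
        else:
            stations.append([load])  # open a new workstation
    return stations

product_workloads = [120, 80, 75, 60, 45, 45, 30, 25, 20, 10]  # placeholder picks/hour
picker_capacity = 150                                          # placeholder capacity

stations = first_fit_decreasing(product_workloads, picker_capacity)
print(f"workstations needed: {len(stations)}")
for i, s in enumerate(stations, 1):
    print(f"  station {i}: loads={s}, utilization={sum(s)/picker_capacity:.0%}")
```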

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 228