Search results for: constant modulus algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6093

873 Increasing Access to Upper Limb Reconstruction in Cervical Spinal Cord Injury

Authors: Michelle Jennett, Jana Dengler, Maytal Perlman

Abstract:

Background: Cervical spinal cord injury (SCI) is a devastating event that results in upper limb paralysis, loss of independence, and disability. People living with cervical SCI have identified improvement of upper limb function as a top priority. Nerve and tendon transfer surgery has successfully restored upper limb function in cervical SCI but is not universally used or available to all eligible individuals. This exploratory mixed-methods study used an implementation science approach to better understand these factors that influence access to upper limb reconstruction in the Canadian context and design an intervention to increase access to care. Methods: Data from the Canadian Institute for Health Information’s Discharge Abstracts Database (CIHI-DAD) and the National Ambulatory Care Reporting System (NACRS) were used to determine the annual rate of nerve transfer and tendon transfer surgeries performed in cervical SCI in Canada over the last 15 years. Semi-structured interviews informed by the consolidated framework for implementation research (CFIR) were used to explore Ontario healthcare provider knowledge and practices around upper limb reconstruction. An inductive, iterative constant comparative process involving descriptive and interpretive analyses was used to identify themes that emerged from the data. Results: Healthcare providers (n = 10 upper extremity surgeons, n = 10 SCI physiatrists, n = 12 physical and occupational therapists working with individuals with SCI) were interviewed about their knowledge and perceptions of upper limb reconstruction and their current practices and discussions around upper limb reconstruction. Data analysis is currently underway and will be presented. Regional variation in rates of upper limb reconstruction and trends over time are also currently being analyzed. Conclusions: Utilization of nerve and tendon transfer surgery to improve upper limb reconstruction in Canada remains low. 
There is a complex array of interrelated individual-, provider-, and system-level barriers that prevents individuals with cervical SCI from accessing upper limb reconstruction. To offer equitable access to care, a multi-modal approach addressing these barriers is required.

Keywords: cervical spinal cord injury, nerve and tendon transfer surgery, spinal cord injury, upper extremity reconstruction

Procedia PDF Downloads 80
872 Case Study: Optimization of Contractor’s Financing through Allocation of Subcontractors

Authors: Helen S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

In many countries, the construction industry relies heavily on outsourcing models to execute projects and expand businesses in a diverse market. Such extensive integration of subcontractors has become an influential factor in the contractor’s cash flow management. Accordingly, subcontractors’ financial terms are a pivotal component of the health of the contractor’s cash flow. The aim of this research is to study the contractor’s cash flow with respect to the owner’s and subcontractors’ payment management plans, considering variable advance payment, payment frequency, and lag and retention policies. The model is developed to provide contractors with a decision support tool that can assist in selecting the optimum subcontracting plan to minimize the contractor’s financing limits and optimize profit values. The model is built using Microsoft Excel VBA coding, and a genetic algorithm is utilized as the optimization tool. Three objective functions are investigated: minimizing the highest negative overdraft value, minimizing the net present worth of the overdraft, and maximizing the project’s net profit. The model is validated on a full-scale project that includes both self-performed and subcontracted work packages. The results show the model’s potential in optimizing the contractor’s negative cash flow values while assisting contractors in selecting suitable subcontractors to achieve the chosen objective.
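The abstract's original model uses Excel VBA; purely as an illustration of the first objective function (minimizing the highest negative overdraft), a toy genetic algorithm in Python might look like the sketch below. All figures, the four-subcontractor setup, and the six-month horizon are invented for the example; the real model's chromosome would also encode advance payment, payment frequency, and retention terms.

```python
import random

# Hypothetical illustration: choose a payment lag (in months) for each of
# 4 subcontractors so that the contractor's peak negative cash balance is
# minimised. All figures are invented for the sketch.
COSTS = [120, 80, 150, 60]                    # subcontractor invoices (k$)
INCOME = [100, 100, 100, 100, 100, 100]       # owner payments per month (k$)
HORIZON = 6

def peak_overdraft(lags):
    """Worst (most negative) cumulative balance over the horizon."""
    balance, worst = 0.0, 0.0
    for month in range(HORIZON):
        balance += INCOME[month]
        for cost, lag in zip(COSTS, lags):
            if month == lag:                  # invoice falls due in month `lag`
                balance -= cost
        worst = min(worst, balance)
    return worst                              # <= 0; closer to 0 is better

def genetic_search(generations=60, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(HORIZON) for _ in COSTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=peak_overdraft, reverse=True)   # least-negative first
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(COSTS))       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # mutation
                child[rng.randrange(len(COSTS))] = rng.randrange(HORIZON)
            children.append(child)
        pop = parents + children
    return max(pop, key=peak_overdraft)

best = genetic_search()
```

Fitness here is simply the peak overdraft; the paper's other objectives (net present worth of overdraft, project net profit) would swap in different fitness functions over the same chromosome.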

Keywords: cash flow optimization, payment plan, procurement management, subcontracting plan

Procedia PDF Downloads 99
871 Enhanced Model for Risk-Based Assessment of Employee Security with Bring Your Own Device Using Cyber Hygiene

Authors: Saidu I. R., Shittu S. S.

Abstract:

As the trend of personal devices accessing corporate data continues to rise through Bring Your Own Device (BYOD) practices, organizations recognize the potential cost reduction and productivity gains. However, the associated security risks pose a significant threat to these benefits. Often, organizations adopt BYOD environments without fully considering the vulnerabilities introduced by human factors in this context. This study presents an enhanced assessment model that evaluates the security posture of employees in BYOD environments using cyber hygiene principles. The framework assesses users' adherence to best practices and guidelines for maintaining a secure computing environment, employing scales and the Euclidean distance formula. By utilizing this algorithm, the study measures the distance between users' security practices and the organization's optimal security policies. To facilitate user evaluation, a simple and intuitive interface for automated assessment is developed. To validate the effectiveness of the proposed framework, design science research methods are employed, and empirical assessments are conducted using five artifacts to analyze user suitability in BYOD environments. By addressing the human factor vulnerabilities through the assessment of cyber hygiene practices, this study aims to enhance the overall security of BYOD environments and enable organizations to leverage the advantages of this evolving trend while mitigating potential risks.
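The Euclidean distance idea described above can be sketched in a few lines. This is not the authors' artifact; the hygiene dimensions, the 1–5 scale, and the example users are invented for illustration.

```python
import math

# Hypothetical sketch: score a BYOD user's cyber hygiene as the Euclidean
# distance between their practice ratings and the organisation's ideal
# policy ratings (both on a 1-5 scale). Dimension names are invented.
IDEAL = {"patching": 5, "screen_lock": 5, "public_wifi_avoidance": 5,
         "backup": 5, "password_strength": 5}

def hygiene_distance(user_scores):
    """Smaller distance = closer to the optimal security policy."""
    return math.sqrt(sum((IDEAL[k] - user_scores[k]) ** 2 for k in IDEAL))

careful_user = {"patching": 5, "screen_lock": 4, "public_wifi_avoidance": 5,
                "backup": 4, "password_strength": 5}
risky_user = {"patching": 2, "screen_lock": 1, "public_wifi_avoidance": 2,
              "backup": 1, "password_strength": 2}
```

A user evaluation interface like the one the study describes would then rank users by this distance and flag those beyond some policy-defined cutoff.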

Keywords: security, BYOD, vulnerability, risk, cyber hygiene

Procedia PDF Downloads 51
870 Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System

Authors: Zhang Kehan, Du Luona

Abstract:

Wireless power transfer (WPT) systems have been widely investigated for their convenience and safety compared to traditional plug-in charging systems. Research on improving the efficiency of WPT systems, a decisive factor in practical applications, covers impedance matching, circuit topology, transfer distance, etc. Moreover, coil structures such as the spiral circular coil and the helical coil, with a variable distance s between adjacent turns, also have an indispensable effect on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral or helical coils with variable turn spacing, and experimental results show that the efficiency of a spiral circular coil with an optimum turn spacing is the highest. Following the efficiency formula of a resonant WPT system with series-series topology, we introduce M²/R₁ as a figure of merit for spiral circular coil and helical coil WPT systems. If the distance between two turns s is too small, proximity effect theory shows that the current induced in a conductor by the variable flux created by the current flowing in the skin of a neighboring conductor opposes the source current and has a non-negligible impact on coil resistance. Thus, in both coil structures, s affects the coil resistance. At the same time, for a fixed distance between the primary and secondary coils, s also influences M to some degree. The aforementioned analysis shows that s plays an indispensable role in changing M²/R₁ and can therefore be adjusted to find the optimum value at which the WPT system achieves its highest efficiency. In practical applications of WPT systems, especially in underwater vehicles, miniaturization is a vital issue in designing the system structure. Limited by system size, the largest external radius of the spiral circular coil is 100 mm, and the largest height of the helical coil is 40 mm. 
In other words, the number of turns N changes with s. In both structures, the turn spacing in the secondary coil is set to a constant 1 mm to guarantee that R₂ does not vary. Based on the above analysis, we set up spiral circular coil and helical coil models in COMSOL to analyze the value of M²/R₁ as the turn spacing in the primary coil, sₚ, varies from 0 mm to 10 mm. In both models, the distance between the primary and secondary coils is 50 mm, and the wire diameter is 1.5 mm. The number of turns in the secondary coil is 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s are 1 mm for the helical coil structure and 2 mm for the spiral circular coil structure, at which M²/R₁ is largest. The spiral circular coil is clearly the first choice for the WPT system design, since its M²/R₁ is larger than that of the helical coil under the same conditions.
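As a minimal numeric illustration of how the paper's figure of merit drives the coil choice, the snippet below compares two candidate coils. The mutual inductance and primary-coil resistance values are invented, not taken from the COMSOL models.

```python
# Hypothetical sketch of the figure of merit M^2 / R1 for a series-series
# resonant WPT link: mutual inductance M (in uH) and primary coil AC
# resistance R1 (in ohm) are invented values for two candidate coils.
def figure_of_merit(M_uH, R1_ohm):
    return (M_uH ** 2) / R1_ohm

candidates = {
    "spiral, s = 2 mm": figure_of_merit(M_uH=3.1, R1_ohm=0.42),
    "helical, s = 1 mm": figure_of_merit(M_uH=2.8, R1_ohm=0.47),
}
best = max(candidates, key=candidates.get)   # coil with the largest M^2/R1
```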

Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer

Procedia PDF Downloads 318
869 Effects of Roasting as Preservative Method on Food Value of the Runner Groundnuts, Arachis hypogaea

Authors: M. Y. Maila, H. P. Makhubele

Abstract:

Roasting is one of the oldest preservation methods used for foods such as nuts and seeds. It is a process by which heat is applied to dry foodstuffs without the use of oil or water as a carrier. Groundnut seeds, also known as peanuts when sun-dried or roasted, are among the oldest oil crops and are mostly consumed as a snack, after roasting, in many parts of South Africa. However, roasting can denature proteins, destroy amino acids, decrease nutritive value and induce undesirable chemical changes in the final product. The aim of this study, therefore, was to evaluate the effect of various roasting times on the food value of runner groundnut seeds. A constant temperature of 160 °C and various time intervals (20, 30, 40, 50 and 60 min) were used for roasting groundnut seeds in an oven. Roasted groundnut seeds were then cooled and milled to flour. The milled sun-dried, raw groundnuts served as the reference. Proximate analysis (moisture, energy and crude fats) was performed, and the results were determined using standard methods. The antioxidant content was determined using HPLC. Mineral (cobalt, chromium, silicon and iron) contents were determined by first digesting the ash of sun-dried and roasted seed samples in 3 M hydrochloric acid and then measuring by atomic absorption spectrometry. All results were subjected to ANOVA using SAS software. Relative to the reference, roasting time significantly (p ≤ 0.05) reduced the moisture (71%–88%), energy (74%) and crude fat (5%–64%) of the runner groundnut seeds, whereas the antioxidant content was significantly (p ≤ 0.05) increased (35%–72%) with increasing roasting time. Similarly, the tested mineral contents of the roasted runner groundnut seeds were significantly (p ≤ 0.05) reduced at all roasting times: cobalt (21%–83%), chromium (48%–106%) and silicon (58%–77%). However, the iron content was not significantly (p ≤ 0.05) affected. 
Generally, the tested runner groundnut seeds had a higher food value in the raw state than in the roasted state, except for the antioxidant content. Moisture is a critical factor affecting the shelf life, texture and flavor of the final product. Loss of moisture ensures a prolonged shelf life, which contributes to the stability of the roasted peanuts. Also, the increased antioxidant content of roasted groundnuts is desirable, as antioxidants are health-promoting compounds. In conclusion, the overall reduction in the proximate and mineral contents of the runner groundnut seeds due to roasting suggests that roasting time influences the food value and shelf life of the final product.

Keywords: dry roasting, legume, oil source, peanuts

Procedia PDF Downloads 259
868 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale

Authors: Marco Sewald

Abstract:

Compared to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. While there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activity. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given during the expatriation process, the chosen methodology is structural equation modeling (SEM), in the first step by re-using recent surveys conducted by different researchers among U.S. citizens residing abroad: surveys questioning the personal situation in the context of tax, compliance, citizenship and likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error: the degree to which observable variables describe latent variables. 
Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered a high correlation, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) the measurement model (outer model) and (2) the structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.

Keywords: expatriation of U. S. citizens, SEM, structural equation modeling, validating

Procedia PDF Downloads 190
867 Non-Invasive Pre-Implantation Genetic Assessment Using NGS in IVF Clinical Routine

Authors: Katalin Gombos, Bence Gálik, Krisztina Ildikó Kalács, Krisztina Gödöny, Ákos Várnagy, József Bódis, Attila Gyenesei, Gábor L. Kovács

Abstract:

Although non-invasive pre-implantation genetic testing for aneuploidy (NIPGT-A) is potentially appropriate for assessing the chromosomal ploidy of the embryo, its practical application in routine IVF centers has not begun in the absence of a recommendation. We developed a comprehensive workflow for a clinically applicable NIPGT-A strategy based on next-generation sequencing (NGS) technology. We performed MALBAC whole genome amplification and NGS on spent blastocyst culture media of Day 3 embryos fertilized with intra-cytoplasmic sperm injection (ICSI). Spent embryonic culture media of embryos with good morphological quality scores were enrolled in further analysis, with blank culture media as background controls. Chromosomal abnormalities were identified by an optimized bioinformatics pipeline applying a copy number variation (CNV) detection algorithm. We demonstrate a comprehensive workflow covering both wet- and dry-lab procedures supporting a clinically applicable NIPGT-A strategy. It can be carried out within 48 h, which is critical for same-cycle blastocyst transfer, but it is also suitable for “freeze all” and “elective frozen embryo” strategies. The described integrated approach of non-invasively evaluating the embryonic DNA content of the culture media can potentially supplement existing pre-implantation genetic screening methods.
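The abstract does not disclose the authors' CNV algorithm; as a hedged illustration of the general idea behind read-count-based aneuploidy calling, the sketch below flags whole-chromosome gains and losses from normalised per-chromosome read counts. The counts, chromosomes, and thresholds are all invented for the example.

```python
import math

# Illustrative sketch (not the authors' pipeline): flag whole-chromosome
# copy-number changes by comparing normalised read counts per chromosome
# against a euploid reference sample. All counts below are invented.
def call_cnv(sample_counts, reference_counts, gain=0.32, loss=-0.41):
    """Return {chrom: 'gain'|'loss'|'neutral'} from log2 count ratios.
    Thresholds sit between log2(1) and log2(3/2)=0.58 / log2(1/2)=-1.0,
    leaving margin for noise in a trisomy/monosomy call."""
    s_total = sum(sample_counts.values())
    r_total = sum(reference_counts.values())
    calls = {}
    for chrom in sample_counts:
        ratio = (sample_counts[chrom] / s_total) / (reference_counts[chrom] / r_total)
        log2r = math.log2(ratio)
        calls[chrom] = ("gain" if log2r >= gain
                        else "loss" if log2r <= loss
                        else "neutral")
    return calls

reference = {"chr13": 1000, "chr18": 900, "chr21": 500}
sample = {"chr13": 1000, "chr18": 900, "chr21": 750}   # ~1.5x on chr21
```

A production pipeline would of course work on genome-wide bins with GC correction and segmentation rather than whole-chromosome totals.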

Keywords: next generation sequencing, in vitro fertilization, embryo assessment, non-invasive pre-implantation genetic testing

Procedia PDF Downloads 136
866 Singular Perturbed Vector Field Method Applied to the Problem of Thermal Explosion of Polydisperse Fuel Spray

Authors: Ophir Nave

Abstract:

In our research, we present the concept of the singularly perturbed vector field (SPVF) method and its application to the thermal explosion of diesel spray combustion. Given a system of governing equations containing hidden multi-scale variables, the SPVF method transfers and decomposes such a system into fast and slow singularly perturbed subsystems (SPS). The SPVF method enables us to understand the complex system and simplify the calculations. Powerful analytical, numerical and asymptotic methods (e.g., the method of integral (invariant) manifold (MIM), the homotopy analysis method (HAM), etc.) can then be applied to each subsystem. We compare the results obtained by the method of integral invariant manifold and by SPVF applied to a spray droplet combustion model. The research deals with the development of an innovative method for extracting fast and slow variables in physical-mathematical models. The method we developed, called the singularly perturbed vector field method, is based on a numerical algorithm for global quasi-linearization applied to a given physical model. The SPVF method has been applied successfully to combustion processes, and our results were compared to experimental results. The SPVF is a general numerical and asymptotic method that reveals the hierarchy (multi-scale structure) of a given system.

Keywords: polydisperse spray, model reduction, asymptotic analysis, multi-scale systems

Procedia PDF Downloads 199
865 An Experimental Study of Scalar Implicature Processing in Chinese

Authors: Liu Si, Wang Chunmei, Liu Huangmei

Abstract:

A prominent component of the semantics versus pragmatics debate, scalar implicature (SI) has been gaining great attention ever since it was proposed by Horn. The constant debate is between the structural and the pragmatic approach. The former claims that the generation of SI is costless, automatic, and dependent mostly on the structural properties of sentences, whereas the latter advocates both that such generation is largely dependent upon context and that the process is costly. Many experiments, among which Katsos’s text comprehension experiments are influential, have been designed and conducted in order to verify these views, but the results are not conclusive. Besides, most of the experiments were conducted with English language materials. Katsos conducted one off-line and three on-line text comprehension experiments, in which the previous shortcomings were addressed to a certain extent and the conclusion was in favor of the pragmatic approach. We intend to test the results of Katsos’s experiments on Chinese scalar implicature. Four experiments, in both off-line and on-line conditions, will be conducted to examine the generation and response time of SI for the Chinese terms "yixie" (some) and "quanbu (dou)" (all), in order to find out whether the structural or the pragmatic approach can be sustained. The study mainly aims to answer the following questions: (1) Can SI be generated in the upper- and lower-bound contexts, as Katsos confirmed, when Chinese language materials are used in the experiment? (2) Can SI first be generated and then cancelled, as the default view claims, or is it not generated at all in a neutral context when Chinese language materials are used? (3) Is SI generation costless or costly in terms of processing resources? (4) In line with the SI generation process, what conclusion can be drawn about the cognitive processing model of language meaning? Is it a parallel model or a linear model? Or is it a dynamic and hierarchical model? 
Based on previous theoretical debates and experimental conflicts, we presume that SI in Chinese might be generated in upper-bound contexts. Besides, the response time might be faster in the upper-bound context than in the lower-bound context, with SI generation in the neutral context being the slowest. Finally, we expect to conclude that the processing model of SI cannot be verified by either a purely structural or a purely pragmatic approach. It is, rather, a dynamic and complex processing mechanism involving the interaction of language forms, ad hoc context, mental context, background knowledge, speakers’ interaction, etc.

Keywords: cognitive linguistics, pragmatics, scalar implicature, experimental study, Chinese language

Procedia PDF Downloads 341
864 The Impact of Task Type and Group Size on Dialogue Argumentation between Students

Authors: Nadia Soledad Peralta

Abstract:

Within the framework of socio-cognitive interaction, argumentation is understood as a psychological process that supports and induces reasoning and learning. Most authors emphasize the great potential of argumentation to negotiate with contradictions and complex decisions. So argumentation is a target for researchers who highlight the importance of social and cognitive processes in learning. In the context of social interaction among university students, different types of arguments are analyzed according to group size (dyads and triads) and the type of task (reading of frequency tables, causal explanation of physical phenomena, the decision regarding moral dilemma situations, and causal explanation of social phenomena). Eighty-nine first-year social sciences students of the National University of Rosario participated. Two groups were formed from the results of a pre-test that ensured the heterogeneity of points of view between participants. Group 1 consisted of 56 participants (performance in dyads, total: 28), and group 2 was formed of 33 participants (performance in triads, total: 11). A quasi-experimental design was performed in which effects of the two variables (group size and type of task) on the argumentation were analyzed. Three types of argumentation are described: authentic dialogical argumentative resolutions, individualistic argumentative resolutions, and non-argumentative resolutions. The results indicate that individualistic arguments prevail in dyads. That is, although people express their own arguments, there is no authentic argumentative interaction. Given that, there are few reciprocal evaluations and counter-arguments in dyads. By contrast, the authentically dialogical argument prevails in triads, showing constant feedback between participants’ points of view. It was observed that, in general, the type of task generates specific types of argumentative interactions. 
However, it is possible to emphasize that authentically dialogical arguments predominate in logical tasks, whereas individualistic or pseudo-dialogical ones are more frequent in opinion tasks. Nevertheless, these relationships between task type and argumentative mode are best clarified by an interactive analysis based on group size. Finally, it is important to stress the value of dialogical argumentation in educational domains. The argumentative function not only allows metacognitive reflection on one’s own point of view but also allows people to benefit from exchanging points of view in interactive contexts.

Keywords: socio-cognitive interaction, argumentation, university students, group size

Procedia PDF Downloads 59
863 Synthesis of Carbonyl Iron Particles Modified with Poly (Trimethylsilyloxyethyl Methacrylate) Nano-Grafts

Authors: Martin Cvek, Miroslav Mrlik, Michal Sedlacik, Tomas Plachy

Abstract:

Magnetorheological elastomers (MREs) are multi-phase composite materials containing micron-sized ferromagnetic particles dispersed in an elastomeric matrix. Their properties, such as modulus, damping, magnetostriction, and electrical conductivity, can be controlled by an external magnetic field and/or pressure. These features of MREs are used in the development of damping devices, shock attenuators, artificial muscles, sensors and active elements of electric circuits. However, imperfections at the particle/matrix interfaces result in lower performance of the MREs when compared with theoretical values. Moreover, magnetic particles are susceptible to corrosion agents such as acid rain or sea humidity. Therefore, modification of the particles is an effective tool for improving MRE performance due to enhanced compatibility between the particles and the matrix, as well as improvements in their thermo-oxidative and chemical stability. In this study, carbonyl iron (CI) particles were controllably modified with poly(trimethylsilyloxyethyl methacrylate) (PHEMATMS) nano-grafts to develop magnetic core–shell structures exhibiting proper wetting with various elastomeric matrices, resulting in improved performance in terms of rheological, magneto-piezoresistance, pressure-piezoresistance, or radio-absorbing properties. The desired molecular weight of the PHEMATMS nano-grafts was precisely tailored using surface-initiated atom transfer radical polymerization (ATRP). The CI particles were first functionalized using a 3-aminopropyltriethoxysilane agent, followed by an esterification reaction with α-bromoisobutyryl bromide. The ATRP was performed in anisole medium using ethyl α-bromoisobutyrate as a sacrificial initiator, N,N,N′,N′′,N′′-pentamethyldiethylenetriamine as the ligand, and copper bromide as the catalyst. 
To explore the effect of PHEMATMS molecular weight on the final properties, two variants of core–shell structures with different nano-graft lengths were synthesized, with the reaction kinetics designed through proper reactant feed ratios and polymerization times. The PHEMATMS nano-grafts were characterized by nuclear magnetic resonance and gel permeation chromatography, providing information on their monomer conversions, molecular chain lengths, and low polydispersity indexes (1.28 and 1.35), the results of the executed ATRP. The successful modifications were confirmed via Fourier transform infrared and energy-dispersive spectroscopies, as the expected wavenumbers and elemental signatures, respectively, of the constituted PHEMATMS nano-grafts appeared in the spectra. The surface morphology of the bare CI particles and their PHEMATMS-grafted analogues was further studied by scanning electron microscopy, and the thicknesses of the grafted polymeric layers were directly observed by transmission electron microscopy. The contact angles, as a measure of particle/matrix compatibility, were investigated employing the static sessile drop method. The PHEMATMS nano-grafts enhanced the compatibility of the hydrophilic CI with the low-surface-energy hydrophobic polymer matrix in terms of wettability and dispersibility in an elastomeric matrix. Thus, the presence of possible defects at the particle/matrix interface is reduced, and higher performance of the modified MREs is expected.

Keywords: atom transfer radical polymerization, core-shell, particle modification, wettability

Procedia PDF Downloads 179
862 Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana

Authors: Jemima C. A. Sumboh

Abstract:

Given the urgency of and consensus for countries to move towards Universal Health Coverage (UHC), health financing systems need to be accurately and consistently monitored to provide valuable data to inform policy and practice. Most of the indicators for monitoring UHC, particularly catastrophic spending and impoverishment, are established based on the impact of out-of-pocket health payments (OOPHP) on households’ living standards, collected through varied household surveys. These surveys, however, vary substantially in survey methods, such as the length of the recall period, the number of items included in the survey questionnaire, or the framing of questions, potentially influencing the reported level of OOPHP. Using different survey instruments can produce inaccurate, inconsistent, erroneous and misleading estimates of UHC, subsequently leading to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study intends to explore the potential implications of using surveys with varied levels of disaggregation of OOPHP data on estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measuring instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed by benchmarking the existing Classification of Individual Consumption by Purpose (COICOP) OOPHP questions in household surveys) and Version III (existing questions used to measure OOPHP in health surveys, integrated into household budget surveys; for this, the Demographic and Health Survey (DHS) health survey was used). Versions I, II and III contained 11, 44, and 56 health items, respectively. However, the choice of recall periods was held constant across versions. The sample sizes for Versions I, II and III were 930, 1032 and 1068 households, respectively. 
Financial risk protection will be measured based on the catastrophic and impoverishment methodologies using STATA 15 and Adept Software for each version. It is expected that findings from this study will present valuable contributions to the repository of knowledge on standardizing survey instruments to obtain estimates of financial risk protection that are valid and consistent.
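The catastrophic-payment indicator mentioned above reduces to a simple headcount: the share of households whose OOPHP exceeds a threshold fraction of total consumption. A minimal sketch, with an invented four-household survey and the commonly used 10% threshold:

```python
# Illustrative sketch of the catastrophic-payment headcount used in UHC
# monitoring: a household is flagged "catastrophic" when out-of-pocket
# health payments (OOPHP) exceed a threshold share of total household
# expenditure. The households and the 10% threshold are example values.
def catastrophic_headcount(households, threshold=0.10):
    flags = [h["oop"] / h["total_expenditure"] > threshold for h in households]
    return sum(flags) / len(flags)   # share of households above threshold

survey = [
    {"oop": 30,  "total_expenditure": 1000},   # 3.0% of spending
    {"oop": 150, "total_expenditure": 1000},   # 15.0%
    {"oop": 90,  "total_expenditure": 600},    # 15.0%
    {"oop": 20,  "total_expenditure": 800},    # 2.5%
]
```

This makes the study's concern concrete: since Versions I, II and III capture OOPHP with 11, 44 and 56 items respectively, the `oop` numerator itself depends on the instrument, so the same households can yield different headcounts under different versions.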

Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage

Procedia PDF Downloads 111
861 An Investigation of the Relationship Between Privacy Crisis, Public Discourse on Privacy, and Key Performance Indicators at Facebook (2004–2021)

Authors: Prajwal Eachempati, Laurent Muzellec, Ashish Kumar Jha

Abstract:

We use Facebook as a case study to investigate the complex relationship between the firm’s public discourse (and actions) surrounding data privacy and the performance of a business model based on monetizing users’ data. We do so by examining the evolution of public discourse over time (2004–2021) and relating topics to revenue and stock market evolution, drawing from archival sources such as Zuckerberg’s public statements. We use the LDA topic modelling algorithm to reveal 19 topics grouped into 6 major themes. We first show how, by using persuasive and convincing language that promises better protection of consumer data usage but also emphasizes greater user control over their own data, the privacy issue is being reframed as one of greater user control and responsibility. Second, we aim to understand and put a value on the extent to which privacy disclosures have a potential impact on the financial performance of social media firms. We found a significant relationship between the topics pertaining to privacy and social media/technology, sentiment score, and stock market prices. Revenue is impacted by topics pertaining to politics and new product and service innovations, while the number of active users is not impacted by the topics unless moderated by external control variables such as Return on Assets and Brand Equity.
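The LDA step described above can be sketched with scikit-learn. This is not the study's pipeline or corpus; the four post-like snippets and the two-topic setting are invented to show the mechanics (the study itself extracts 19 topics).

```python
# Minimal LDA sketch with scikit-learn on an invented mini-corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "users control their data privacy settings",
    "new advertising product drives revenue growth",
    "data protection and user privacy controls",
    "quarterly revenue and active user growth",
]

vec = CountVectorizer(stop_words="english")   # bag-of-words term counts
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)             # one topic mixture per document
```

Each row of `doc_topics` is a probability distribution over topics; the study would then aggregate such distributions per year and relate them to revenue and stock prices.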

Keywords: public discourses, data protection, social media, privacy, topic modeling, business models, financial performance

Procedia PDF Downloads 73
860 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis

Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork

Abstract:

The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB), and for evaluation in shortening the duration of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring success in treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of levofloxacin (LFX) and moxifloxacin (MXF) and assess the percent probability of target attainment (PTA) as defined by the ratio of the area under the plasma concentration-time curve over 24-h (AUC0-24) and the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC) in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course and the drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MIC of the patients' pretreatment clinical isolates was determined. PopPK and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters and the PTA for maximum mycobacterial kill and resistance prevention was also investigated. LFX and MXF both fit in a one-compartment model with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively. 
The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum bacterial kill among the simulated doses of MXF (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. LFX PopPK is more predictable for maximum mycobacterial kill, whereas MXF's resistance prevention target increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
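A PTA of this kind can be estimated by Monte Carlo simulation: draw AUC0-24 values from the population model's predicted distribution and count the fraction of simulated patients meeting the AUC0-24/MIC target. A minimal sketch, with all exposure numbers and the target ratio assumed for illustration (not the study's values):

```python
# Illustrative Monte Carlo PTA estimate: fraction of simulated patients
# whose AUC0-24/MIC meets a target ratio. All numbers are assumptions
# made for the sketch, not the reported PopPK results.
import numpy as np

rng = np.random.default_rng(0)

def pta(auc_median, auc_cv, mic, target_ratio, n=100_000):
    # log-normal spread of AUC0-24 across simulated patients
    sigma = np.sqrt(np.log(1 + auc_cv**2))
    auc = rng.lognormal(np.log(auc_median), sigma, n)
    return float(np.mean(auc / mic >= target_ratio))

# hypothetical exposures at a lower and a higher dose, MIC 0.5 mg/L
p1 = pta(auc_median=80, auc_cv=0.4, mic=0.5, target_ratio=146)
p2 = pta(auc_median=160, auc_cv=0.4, mic=0.5, target_ratio=146)
print(p1, p2)
```

Doubling the median exposure raises the attainment fraction, mirroring the dose-dependent PTA pattern reported above.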

Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia

Procedia PDF Downloads 92
859 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters is involved, and sometimes the chemical and physical phenomena for mixtures involving polymers are poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number-average molecular weight and weight-average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
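The adaptive-sampling idea, selecting more samples where the output varies most, can be sketched as follows; the target function is an illustrative stand-in for the polymerization model, not the authors' implementation:

```python
# Minimal sketch of adaptive sampling: repeatedly bisect the interval
# whose endpoint outputs differ the most, so samples concentrate around
# steep regions. The target function and counts are illustrative.
import numpy as np

def f(x):                              # stand-in for the process output
    return np.tanh(10 * (x - 0.5))    # steep, "gel effect"-like transition

x = list(np.linspace(0.0, 1.0, 5))
for _ in range(20):
    x.sort()
    y = [f(v) for v in x]
    # find the adjacent pair with the largest output jump and bisect it
    jumps = [abs(y[i + 1] - y[i]) for i in range(len(x) - 1)]
    i = int(np.argmax(jumps))
    x.append(0.5 * (x[i] + x[i + 1]))

x.sort()
# samples should cluster around the steep region near x = 0.5
near = sum(1 for v in x if 0.4 < v < 0.6)
print(near, len(x))
```

Most of the added samples land inside the high-variation region, which is exactly the behavior the technique exploits to train accurate regressors with few samples.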

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 282
858 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field

Authors: Buruk Kitachew Wossenyeleh

Abstract:

Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling they are considered as one layer due to their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment is determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This groundwater recharge is used as the top boundary condition for the groundwater modeling of the study area. In the groundwater modeling using Processing MODFLOW, constant-head boundary conditions are used at the north and south boundaries of the study area, and head-dependent flow boundary conditions at the east and west boundaries. The groundwater model is calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation showed that the root mean square error is 1.89 m and the NSE is 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as a land-use type. This calibrated model was run for a climate change scenario and a well operation scenario.
Climate change may cause the groundwater recharge to increase by 43% or decrease by 30% in 2100 relative to current conditions, for the high and low climate change scenarios, respectively. The groundwater head varies for the high climate change scenario from 13 m to 82 m, whereas for the low climate change scenario it varies from 13 m to 76 m. If a doubling of the pumping discharge is assumed, the groundwater head varies from 13 m to 76.5 m. However, if a shutdown of the pumps is assumed, the head varies in the range of 13 m to 79 m. It is concluded that the groundwater model was constructed satisfactorily, with some limitations, and the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
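The steady-state head simulation described above can be illustrated with a toy one-dimensional finite-difference model between two constant-head boundaries; transmissivity, recharge, and boundary heads are illustrative stand-ins, not the calibrated Ter Kamerenbos values:

```python
# Toy steady-state 1-D groundwater model in the spirit of a MODFLOW
# simulation: solve T*h'' + R = 0 between two constant-head boundaries
# by finite differences. All parameter values are illustrative.
import numpy as np

n = 51
L = 1000.0                      # m, domain length
dx = L / (n - 1)
T = 50.0                        # m2/day, transmissivity (assumed)
R = 0.0005                      # m/day, recharge (assumed)
h_left, h_right = 13.0, 78.0    # m, constant-head boundaries

# assemble the finite-difference system A h = b
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0       # Dirichlet boundary rows
b[0], b[-1] = h_left, h_right
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0
    A[i, i] = -2.0
    b[i] = -R * dx**2 / T       # recharge source term

h = np.linalg.solve(A, b)
print(h.min(), h.max())
```

The recharge adds a small mound on top of the linear head gradient between the two boundaries, the same mechanism that shapes the simulated head range reported above.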

Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation

Procedia PDF Downloads 133
857 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorism and extremism activities on the internet are becoming among the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. To achieve these measures, there is a need for intelligence gathering via the internet. This includes real-time monitoring of potential websites that are used for recruitment and information dissemination, among other operations by extremist groups. However, with billions of active webpages, real-time monitoring of all webpages becomes almost impossible. To narrow down the search domain, efficient webpage classification techniques are needed. This research proposes a new approach, tagged the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7,500 webpages obtained through the TENE web crawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, ‘pro-extremist’, ‘anti-extremist’ and ‘neutral’, with 2,500 webpages in each category. A supervised learning algorithm was then applied to the classified dataset in order to build the model. The results obtained were compared with an existing classification method using prediction accuracy and runtime. Our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 254
856 A Fine String between Weaving the Text and Patching It: Reading beyond the Hidden Symbols and Antithetical Relationships in the Classical and Modern Arabic Poetry

Authors: Rima Abu Jaber-Bransi, Rawya Jarjoura Burbara

Abstract:

This study reveals the extension and continuity between classical and modern Arabic poetry through an investigation of their ambiguity, symbolism, and antithetical relationships. The significance of this study lies in its exploration and discovery of a new method of reading classical and modern Arabic poetry. The study deals with Fatimid poetry and develops a new method of reading it. It also deals with the relationship between the apparent and the hidden meanings of words, focusing on how paradoxical antithetical relationships change the meaning of the whole poem and give it a different dimension through the use of oxymora. In our unprecedented research on the oxymoron, we found that words in modern Arabic poetry are used in unusual combinations that convey apparent and hidden meanings. In some cases, the poet introduces an image with a symbol of a certain thing, but the reader soon discovers that the symbol includes its opposite, too. The question is: how does the reader find that hidden harmony in that apparent disharmony? The first and most important conclusion of this study is that Fatimid poetry was written for two types of readers: religious readers who know the religious symbols and the hidden secret meanings behind the words, and ordinary readers who understand the apparent literal meaning of the words. Consequently, the interpretation of a poem is subject to the type of reading. In Fatimid poetry we found that the hunting journey is a journey of hidden esoteric knowledge; the horse is al-Naqib, a religious rank of the investigator and missionary; the lion is Ali Ibn Abi Talib. The words black and white, day and night, bird, death and murder have different meanings and indications.
Our study points out the importance of reading certain poems in certain periods in two different ways: the first depends on a doctrinal interpretation that transforms the external apparent (ẓāher) meanings into internal inner hidden esoteric (bāṭen) ones; the second depends on the interpretation of antithetical relationships between the words in order to reveal meanings that the poet hid for a reader who participates in the processes of creativity. The second conclusion is that the classical poem employed symbols, oxymora and antonymous and antithetical forms to create two poetic texts in one mold and form. We can conclude that this study is pioneering in showing the constant paradoxical relationship between the apparent and the hidden meanings in classical and modern Arabic poetry.

Keywords: apparent, symbol, hidden, antithetical, oxymoron, Sophism, Fatimid poetry

Procedia PDF Downloads 238
855 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effect of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: We applied the solid statistical principles of Design of Experiments (DoE), particularly a randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results: the small influence of cores, the neutrality of memory and disks on total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
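The screening-and-ranking step can be sketched as an ordinary least-squares fit over a coded two-level design; a full 2³ design with synthetic responses stands in for the paper's 48-cluster fractional design:

```python
# Hedged sketch of a DoE-style linear effect analysis: fit execution time
# on coded (+1/-1) factor levels and read off each factor's effect.
# Responses are synthetic stand-ins, not the paper's measurements.
import numpy as np
from itertools import product

# full 2^3 design over three illustrative factors
X = np.array(list(product([-1, 1], repeat=3)), dtype=float)

# synthetic response: strong data-size effect, moderate node effect,
# weak core effect (noise-free for clarity)
time = 100 + 30 * X[:, 0] - 10 * X[:, 1] + 2 * X[:, 2]

# least-squares fit of the linear predictor model
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, time, rcond=None)

effects = dict(zip(["intercept", "data_size", "nodes", "cores"], coef))
print(effects)
```

Because the coded design matrix is orthogonal, the fitted coefficients recover each factor's effect exactly, which is what makes two-level designs efficient for screening and ranking.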

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 92
854 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign Languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, closely tied to their lexical and syntactic organization levels. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of eigendecomposition to non-square matrices, used here to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators by adequate Type-2 fuzzy operators that permit us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both the random and the fuzzy uncertainties existing universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, besides giving better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
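The SVD feature-extraction step described above can be sketched in a few lines; the image is a random stand-in and the choice of 10 singular values is illustrative:

```python
# Minimal illustration of using SVD singular values as a fixed-length
# feature vector for a (non-square) hand-image matrix. The image here
# is random noise standing in for a segmented hand image.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((60, 40))        # stand-in for a 60x40 hand image

# singular values expose the geometric structure of the matrix,
# independent of its orientation in row/column space
s = np.linalg.svd(image, compute_uv=False)
features = s[:10]                   # keep the 10 largest as observables

print(features.shape, features[0] > features[-1])
```

Because singular values come out sorted in descending order, truncating to the first few gives a compact, stable feature vector suitable as an HMM observable.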

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 438
853 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process

Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian

Abstract:

The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, resulting in many cases in a negative impact on limited mineral resources. To meet the algal bioenergy opportunity, it appears crucial to promote processes that apply nutrient recovery and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising and suitable technology for generating biofuels from microalgae. Besides the fact that water is used as a "green" reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the aqueous HT effluent by growing microalgae while producing renewable algal biomass. As already demonstrated in previous works by the authors, the HT aqueous product, besides containing N, P and other important nutrients, presents a small fraction of organic compounds that has rarely been studied. Therefore, heteroaromatic compounds extracted from the HT effluent were the target of the present research; they were profiled using GC-MS and LC-MS-MS. The results indicate the presence of cyclic amides, piperazinediones, amines and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, namely 2-pyrrolidinone and β-phenylethylamine (β-PEA), were carefully examined for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50 and 150 ppm). This toxicity bioassay used three different microalgae strains: Phaeodactylum tricornutum, Chlorella sorokiniana and Scenedesmus vacuolatus. The confirmed IC50 was ca. 75 ppm in all cases.
Experimental conditions were set up for the growth of microalgae in the aqueous phase by adjusting the nitrogen concentration (the key nutrient for algae) to match that of a known commercial medium. The concentrations of specific NOCs were lowered to 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam and 0.5 mg/L β-PEA. Growth with the diluted HT solution remained constant, with no evidence of inhibition. An additional ongoing test is addressing the possibility of applying an integrated water clean-up step making use of the existing hydrothermal catalytic facility.

Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass

Procedia PDF Downloads 386
852 Fabrication of Al/Al2O3 Functionally Graded Composites via Centrifugal Method by Using a Polymeric Suspension

Authors: Majid Eslami

Abstract:

Functionally graded materials (FGMs) exhibit heterogeneous microstructures in which the composition and properties change gradually in specified directions. The common type of FGM consists of a metal in which ceramic particles are distributed with a graded concentration. There are many processing routes for FGMs. An important group of these methods is casting techniques (gravity or centrifugal). However, the main problem of casting molten metal slurry with dispersed ceramic particles is a destructive chemical reaction between these two phases, which deteriorates the properties of the materials. In order to overcome this problem, in the present investigation a suspension of 6061 aluminum and alumina powders in a liquid polymer was used as the starting material and subjected to centrifugal force for making FGMs. The size ranges of these powders were 45-63 and 106-125 μm. The volume percent of alumina in the Al/Al2O3 powder mixture was in the range of 5 to 20%. PMMA (Plexiglas) in different concentrations (20-50 g/L) was dissolved in toluene and used as the suspension liquid. The glass mold containing the suspension of Al/Al2O3 powders in the mentioned liquid was rotated at 1700 rpm for different times (4-40 min) while the arm length was kept constant (10 cm) for all the experiments. After curing the polymer, burning out the binder, cold pressing and sintering, cylindrical samples (φ = 22 mm, h = 20 mm) were produced. The density of samples before and after sintering was quantified by the Archimedes method. The results indicated that, by using alumina and aluminum powders of the same size, an FGM sample can be produced at rotation times exceeding 7 min. However, using coarse alumina and fine aluminum powders, the sample exhibits a stepped concentration profile. On the other hand, using fine alumina and coarse aluminum powders results in a relatively uniform concentration of Al2O3 along the sample height.
These results are attributed to the effects of the size and density of the different powders on the centrifugal force induced on the powders during rotation. The PMMA concentration and the vol.% of alumina in the suspension did not have any considerable effect on the distribution of alumina particles in the samples. The hardness profiles along the height of the samples were affected by both the alumina vol.% and the porosity content. The presence of alumina particles increased the hardness, while increased porosity reduced it. Therefore, the hardness values did not show the expected gradient in the same sample. Sintering decreased the porosity of all the samples investigated.

Keywords: FGM, powder metallurgy, centrifugal method, polymeric suspension

Procedia PDF Downloads 192
851 Designing and Prototyping Permanent Magnet Generators for Wind Energy

Authors: T. Asefi, J. Faiz, M. A. Khan

Abstract:

This paper introduces dual rotor axial flux machines with surface mounted and spoke type ferrite permanent magnets with concentrated windings; they are introduced as alternatives to a generator with surface mounted Nd-Fe-B magnets. The output power, voltage, speed and air gap clearance for all the generators are identical. The machine designs are optimized for minimum mass using a population-based algorithm, assuming the same efficiency as the Nd-Fe-B machine. A finite element analysis (FEA) is applied to predict the performance, emf, developed torque, cogging torque, no load losses, leakage flux and efficiency of both ferrite generators and that of the Nd-Fe-B generator. To minimize cogging torque, different rotor pole topologies and different pole arc to pole pitch ratios are investigated by means of 3D FEA. It was found that the surface mounted ferrite generator topology is unable to develop the nominal electromagnetic torque, and has higher torque ripple and is heavier than the spoke type machine. Furthermore, it was shown that the spoke type ferrite permanent magnet generator has favorable performance and could be an alternative to rare-earth permanent magnet generators, particularly in wind energy applications. Finally, the analytical and numerical results are verified using experimental results.

Keywords: axial flux, permanent magnet generator, dual rotor, ferrite permanent magnet generator, finite element analysis, wind turbines, cogging torque, population-based algorithms

Procedia PDF Downloads 123
850 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing amounts of attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field has created a need for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore," as the evaluation criterion. This score is created by adding labeled source information into the judging space of the uncertainty entropy score using a harmonic mean. Furthermore, prediction bias is explored through an equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
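One plausible reading of the "CombinedScore" construction (a harmonic mean of an OOD-entropy-derived confidence and a labeled-source term) can be sketched as follows; the exact mapping from entropy to a combinable score is an assumption, not the paper's formula:

```python
# Sketch of a "CombinedScore"-style criterion: harmonic mean of a
# predictive-entropy term and a labeled-source accuracy term.
# The entropy-to-confidence mapping is an assumption for illustration.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def combined_score(probs_ood, source_accuracy):
    # lower mean entropy on OOD data and higher labeled-source accuracy
    # are both better; map entropy into (0, 1] before combining
    conf = np.exp(-entropy(probs_ood)).mean()
    return 2 * conf * source_accuracy / (conf + source_accuracy)

# a model with sharp OOD predictions vs. one with near-uniform ones
sharp = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]])
flat = np.array([[0.34, 0.33, 0.33], [0.4, 0.3, 0.3]])
c_sharp = combined_score(sharp, 0.9)
c_flat = combined_score(flat, 0.9)
print(c_sharp, c_flat)
```

The harmonic mean penalizes a model that does well on only one of the two components, so a confident-on-OOD but poorly-labeled-source model (or vice versa) cannot dominate the ranking.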

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 103
849 Correlation Between the Toxicity Grade of the Adverse Effects in the Course of the Immunotherapy of Lung Cancer and Efficiency of the Treatment in Anti-PD-L1 and Anti-PD-1 Drugs - Own Clinical Experience

Authors: Anna Rudzińska, Katarzyna Szklener, Pola Juchaniuk, Anna Rodzajewska, Katarzyna Machulska-Ciuraj, Monika Rychlik-Grabowska, Michał Łoziński, Agnieszka Kolak-Bruks, Sławomir Mańdziuk

Abstract:

Introduction: Immune checkpoint inhibition (ICI) belongs to the modern forms of anti-cancer treatment. Due to the constant development and continuous research in the field of ICI, many aspects of the treatment are yet to be discovered. One of the less researched aspects of ICI treatment is the influence of adverse effects on the treatment success rate. It is suspected that adverse events in the course of ICI treatment indicate a better response rate and correlate with longer progression-free survival. Methodology: The research was conducted using the documentation of the Department of Clinical Oncology and Chemotherapy. Data of patients diagnosed with lung cancer who were treated between 2019-2022 and received ICI treatment were analyzed. Results: Of the 133 patients whose data were analyzed, the vast majority were diagnosed with non-small cell lung cancer. The majority of the patients did not experience adverse effects. Most adverse effects reported were classified as grade 1 or grade 2 according to the CTCAE classification. Most adverse effects involved skin, thyroid and liver toxicity. Statistical significance was found for adverse effect incidence versus overall survival (OS) and progression-free survival (PFS) (p = 0.0263) and for the time of toxicity onset versus OS and PFS (p < 0.001). The number of toxicity sites was statistically significant for prolonged PFS (p = 0.0315). The highest OS was noted in the group presenting grade 1 and grade 2 adverse effects. Conclusions: The obtained results confirm prolonged OS and PFS in patients burdened with adverse effects, mostly in the group presenting mild to intermediate (grade 1 and grade 2) adverse effects and late toxicity onset. Simultaneously, our results suggest a correlation between the treatment response rate and both the toxicity grade of the adverse effects and the time of toxicity onset.
Similar results were obtained in several comparable studies, with a proven tendency toward better survival in mild and moderate toxicity; meanwhile, other studies in the area suggested an advantage in patients with any toxicity, regardless of grade. These contradictory results strongly suggest the need for further research on this topic, with a focus on additional factors influencing the course of treatment.

Keywords: adverse effects, immunotherapy, lung cancer, PD-1/PD-L1 inhibitors

Procedia PDF Downloads 66
848 Production of Rhamnolipids from Different Resources and Estimating the Kinetic Parameters for Bioreactor Design

Authors: Olfat A. Mohamed

Abstract:

Rhamnolipid biosurfactants have distinct properties that give them importance in many industrial applications, especially their great new future applications in the cosmetic and pharmaceutical industries. These applications have encouraged the search for diverse and renewable resources to control the cost of production. The experimental results were then applied to find a suitable mathematical model for obtaining the design criteria of a batch bioreactor. This research aims to produce rhamnolipids from different oily wastewater sources, such as petroleum crude oil (PO) and vegetable oil (VO), using Pseudomonas aeruginosa ATCC 9027. Different concentrations of PO and VO were added to the media broth separately, in the ranges (0.5, 1, 1.5, 2, 2.5% v/v) and (2, 4, 6, 8, 10% v/v), respectively. The effect of the initial concentration of oil residues and of the addition of glycerol and palmitic acid as inducers was investigated with respect to rhamnolipid production and the surface tension of the broth. It was found that 2% PO and 6% VO were the best initial substrate concentrations for the production of rhamnolipids (2.71 and 5.01 g rhamnolipid/L, respectively). Addition of glycerol (10-20% v glycerol/v PO) to the 2% PO fermentation broth increased rhamnolipid production about 1.8-2-fold. However, the addition of palmitic acid (5 and 10 g/L) to fermentation broth containing 6% VO barely enhanced the production rate. The experimental data for 2% initial PO were used to estimate the various kinetic parameters.
The following results were obtained: maximum reaction velocity (Vmax = 0.06417 g/L·h), yield of cell weight per unit weight of substrate utilized (Yx/s = 0.324 g Cx/g Cs), maximum specific growth rate (μmax = 0.05791 hr⁻¹), yield of rhamnolipid weight per unit weight of substrate utilized (Yp/s = 0.2571 g Cp/g Cs), maintenance coefficient (Ms = 0.002419), Michaelis-Menten constant (Km = 6.1237 gmol/l), and endogenous decay coefficient (Kd = 0.002375 hr⁻¹). Predictive parameters and advanced mathematical models were applied to evaluate the batch bioreactor time. The results were 123.37, 129 and 139.3 hours with respect to microbial biomass, substrate and product concentration, respectively, compared with an experimental batch time of 120 hours in all cases. The derived mathematical models are compatible with the laboratory results and can therefore be considered tools for describing the actual system.
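As an illustration, the reported Monod-type parameters above can be used to integrate a simple batch growth and substrate balance and recover a batch time of the same order; the initial biomass and substrate concentrations here are assumed, and the saturation constant is treated as if in g/L:

```python
# Hedged numeric sketch of batch-bioreactor time estimation from
# Monod-type kinetics, reusing the reported mu_max, Yx/s, Kd and
# saturation constant. Initial X and S are assumptions for illustration.
mu_max = 0.05791   # 1/h, maximum specific growth rate (reported)
Ks = 6.1237        # saturation constant, treated here as g/L (assumption)
Yxs = 0.324        # g cells / g substrate (reported)
Kd = 0.002375      # 1/h, endogenous decay coefficient (reported)

X, S = 0.1, 20.0   # assumed initial biomass and substrate, g/L
dt, t = 0.1, 0.0
while S > 0.5 and t < 1000:          # Euler steps until substrate is spent
    mu = mu_max * S / (Ks + S)       # Monod specific growth rate
    dX = (mu - Kd) * X               # growth minus endogenous decay
    dS = -mu * X / Yxs               # substrate consumption
    X += dX * dt
    S += dS * dt
    t += dt

print(round(t, 1), round(X, 2))
```

With these assumed initial conditions the integration terminates after on the order of a hundred hours, consistent in magnitude with the 120-139 h batch times reported above.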

Keywords: batch bioreactor design, glycerol, kinetic parameters, petroleum crude oil, Pseudomonas aeruginosa, rhamnolipids biosurfactants, vegetable oil

Procedia PDF Downloads 110
847 Investigating Non-suicidal Self-Injury Discussions on Twitter

Authors: Muhammad Abubakar Alhassan, Diane Pennington

Abstract:

Social networking sites have become a space for people to discuss public health issues such as non-suicidal self-injury (NSSI). There are thousands of tweets containing self-harm and self-injury hashtags on Twitter. It is difficult to distinguish between the different users who participate in self-injury discussions on Twitter and how their opinions change over time. It is also challenging to understand the topics surrounding NSSI discussions on Twitter. We retrieved tweets using the #selfharm and #selfinjury hashtags and investigated those from the United Kingdom. We applied inductive coding and grouped tweeters into different categories. This study used the Latent Dirichlet Allocation (LDA) algorithm to infer the optimum number of topics that describes our corpus. Our findings revealed that many of those participating in NSSI discussions are non-professional users, as opposed to medical experts and academics. Support organisations, medical teams, and academics were campaigning positively on raising self-injury awareness and recovery. Using the LDAvis visualisation technique, we selected the top 20 most relevant terms from each topic and interpreted the topics as: children and youth well-being; self-harm misjudgement; mental health awareness; school and mental health support; and suicide and mental-health issues. More than 50% of these topics were discussed in England compared to Scotland, Wales, Ireland and Northern Ireland. Our findings highlight the advantages of using the Twitter social network in tackling the problem of self-injury through awareness. There is a need to study the potential risks associated with the use of social networks among self-injurers.

Keywords: self-harm, non-suicidal self-injury, Twitter, social networks

Procedia PDF Downloads 110
846 Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation

Authors: Lae-Jeong Park

Abstract:

The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for reducing false positives in head-shoulder detection. Conventional detectors that rely on local features such as HOG for real-time operation suffer from false positives. The color cue in an input image provides salient information on a global characteristic that is necessary to alleviate the false positives of local-feature-based detectors. An effective approach using figure-ground color segmentation has previously been presented to reduce false positives in object detection. In this paper, an extended version of that approach is presented which adopts separate multi-part foregrounds instead of a single prior foreground and performs figure-ground color segmentation with each of the foregrounds. The multi-part foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature that consists of the set of resulting segmentations. Experimental results show that the presented method can reject more false positives than the single-prior-shape-based classifier as well as detectors based on local features. The improvement is possible because the presented approach can reduce false positives that have the same colors in the head and shoulder foregrounds.
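One way to picture the multi-part figure-ground colour idea is a per-part contrast between foreground and background colour histograms. The sketch below is a simplified stand-in, not the paper's method: the part masks and the histogram-distance feature are illustrative assumptions.

```python
import numpy as np

def color_hist(pixels, bins=8):
    """Joint RGB histogram over (N, 3) pixels, L1-normalised."""
    h, _ = np.histogramdd(pixels.reshape(-1, 3),
                          bins=(bins,) * 3, range=[(0, 256)] * 3)
    h = h.ravel()
    return h / max(h.sum(), 1)

def multipart_feature(img, part_masks):
    """One figure-vs-ground histogram distance per foreground part."""
    feats = []
    for mask in part_masks:
        fg = color_hist(img[mask])          # pixels inside the part
        bg = color_hist(img[~mask])         # everything else
        feats.append(0.5 * np.abs(fg - bg).sum())  # total-variation distance
    return np.array(feats)

# Toy 16x16 image: red "head" patch on a green background
img = np.zeros((16, 16, 3), dtype=np.uint8)
img[..., 1] = 200                   # green background
img[2:6, 6:10] = (220, 30, 30)      # red head patch

head_mask = np.zeros((16, 16), dtype=bool)
head_mask[2:6, 6:10] = True
feat = multipart_feature(img, [head_mask])
```

A true head-shoulder detection tends to show strong figure/ground colour contrast on each part, whereas a false positive with uniform colour across the prior shape does not; the paper's classifier operates on the segmentations themselves rather than this simple distance.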

Keywords: pedestrian detection, color segmentation, false positive, feature extraction

Procedia PDF Downloads 257
845 Comparative Performance Analysis for Selected Behavioral Learning Systems versus Ant Colony System Performance: Neural Network Approach

Authors: Hassan M. H. Mustafa

Abstract:

This research presents a comparative analytical study of two diverse algorithmic computational-intelligence approaches, one neural and one non-neural. The first approach draws on practical results observed in three animal learning settings: Pavlov's and Thorndike's experimental work, and a mouse's trials while moving through a figure-of-eight maze to reach an optimal solution to a reconstruction problem. The second approach originates from the observed behaviour of the non-neural Ant Colony System (ACS) while reaching an optimal solution to the Traveling Salesman Problem (TSP). Interestingly, the effect of increasing the number of agents (neurons or ants) on learning performance is shown to be similar for both systems. Finally, the performance of both intelligent learning paradigms is shown to agree with the convergence of the least-mean-square (LMS) error algorithm when applied to training Artificial Neural Network (ANN) models. Accordingly, the adopted ANN modeling is a relevant and realistic tool for investigating observations and analyzing the performance of both selected computational-intelligence (biological behavioral learning) systems.
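The LMS convergence the abstract refers to can be sketched as a single linear neuron trained toward a target mapping. The target weights and learning rate below are illustrative choices, not values from the study.

```python
import numpy as np

# LMS (least-mean-square) training of a single linear neuron:
# the squared error should fall steadily as the weights converge.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])      # target mapping to be learned (assumed)
w = np.zeros(2)                     # neuron weights
eta = 0.05                          # learning rate

errors = []
for _ in range(500):
    x = rng.normal(size=2)          # random input pattern
    d = w_true @ x                  # desired response
    y = w @ x                       # neuron output
    e = d - y                       # instantaneous error
    w += eta * e * x                # LMS weight update
    errors.append(e * e)

early, late = np.mean(errors[:20]), np.mean(errors[-20:])
```

The geometric decay of the mean squared error is the convergence behaviour against which the abstract compares both learning systems.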

Keywords: artificial neural network modeling, animal learning, ant colony system, traveling salesman problem, computational biology

Procedia PDF Downloads 447
844 Optimal Wind Based DG Placement Considering Monthly Changes Modeling in Wind Speed

Authors: Belal Mohamadi Kalesar, Raouf Hasanpour

Abstract:

Proper placement of Distributed Generation (DG) units such as wind turbine generators in a distribution system remains a challenging issue for obtaining their maximum potential benefits, because inappropriate placement may increase system losses. This paper proposes a Particle Swarm Optimization (PSO) technique for optimal placement of wind-based DG (WDG) in the primary distribution system to reduce energy losses and improve the voltage profile, with four different wind levels modeled over a year. Each wind turbine is modeled as a DFIG operated at unity power factor, and only one wind turbine tower is considered for installation at each bus of the network. The proposed method is implemented on the widely used 69-bus power distribution system in the MATLAB software environment under four scenarios (zero, one, two, and three WDG units). To test the capability of the implemented program, all buses of the standard system are treated as candidates for WDG installation (a large search space), although the program can also restrict placement to a predetermined number of candidate locations to model the financial limitations of a project. The results illustrate that increased wind speed in some months raises the generated output power but can either increase or decrease power loss depending on the wind level. The results also show that about 3 MW of WDG capacity is required across different buses, and that distributing this capacity over the whole network (a larger number of WDG units) yields a better solution in terms of power loss and voltage profile.
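The PSO search itself can be sketched independently of the power-flow details. In the toy example below, particles encode a candidate WDG size in MW and the objective is a stand-in convex proxy with its minimum near the 3 MW capacity the study reports; a real implementation would evaluate a 69-bus power-flow solver at this step instead.

```python
import numpy as np

# PSO minimising an illustrative proxy for network power loss.
rng = np.random.default_rng(1)

def proxy_loss(size_mw):
    """Assumed stand-in loss surface, minimum at 3 MW (not a power flow)."""
    return (size_mw - 3.0) ** 2 + 1.0

n_particles, iters = 20, 100
pos = rng.uniform(0.0, 6.0, n_particles)   # candidate WDG sizes (MW)
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_val = proxy_loss(pbest)
gbest = pbest[np.argmin(pbest_val)]

w_inertia, c1, c2 = 0.7, 1.5, 1.5          # standard PSO coefficients
for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = (w_inertia * vel
           + c1 * r1 * (pbest - pos)       # pull toward personal best
           + c2 * r2 * (gbest - pos))      # pull toward global best
    pos = np.clip(pos + vel, 0.0, 6.0)     # keep sizes in feasible range
    val = proxy_loss(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]
```

Extending this to siting as well as sizing means adding a bus index to each particle's position vector, which is where restricting the candidate-bus list shrinks the search space as the abstract describes.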

Keywords: wind turbine, DG placement, wind levels effect, PSO algorithm

Procedia PDF Downloads 431