Search results for: perceptual linear prediction (PLP’s)
503 Numerical Evaluation of Lateral Bearing Capacity of Piles in Cement-Treated Soils
Authors: Reza Ziaie Moayed, Saeideh Mohammadi
Abstract:
Soft soils are encountered in many civil engineering projects, such as coastal, marine and road works. Because of their low shear strength and stiffness, soft soils undergo large settlements and provide low bearing capacity under superstructure loads, which makes construction more difficult and costly. In such soils, ground improvement is a suitable way to increase shear strength and stiffness for engineering purposes. In recent years, artificial cementation with cement or lime has been used extensively for soft soil improvement; cement stabilization is a well-established technique that increases the shear strength and stiffness of natural soils. Piles are also commonly used in soft soils to transfer loads to depth, and surrounding the piles with cement-treated soil can provide high bearing capacity and low settlement. In the present study, the lateral bearing capacity of short piles in cemented soils is investigated numerically using the three-dimensional (3D) finite difference software FLAC 3D. Cement-treated soil exhibits strain hardening-softening behavior because of the breakage of bonds between the cementing agent and soil particles; a strain hardening-softening constitutive model is therefore used for the cement-treated soft soil, while the conventional elastic-plastic Mohr-Coulomb model and a linear elastic model are used for the stress-strain behavior of the natural soil and the pile, respectively. The parameters of the constitutive models are determined, and the numerical model is verified, against available triaxial laboratory tests and in-situ pile loading tests in cement-treated soft soil. A parametric study is then carried out to identify the parameters governing the lateral bearing capacity of piles in cement-treated soils.
In the present paper, the effects of the length and height of the artificially cemented area, of the pile diameter and length, and of the material properties are studied. The effect of the constitutive model chosen for the cement-treated soil on the computed pile bearing capacity is also investigated.
Keywords: bearing capacity, cement-treated soils, FLAC 3D, pile
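The FLAC 3D model itself cannot be reproduced from the abstract, but the quantity being computed can be illustrated with a classical closed-form estimate: Broms' method for a short, free-headed pile in cohesive soil. All inputs below (cu, D, L, e) are illustrative assumptions, not values from the paper.

```python
def broms_short_pile_cohesive(cu, D, L, e=0.0, tol=1e-6):
    """Ultimate lateral load (kN) of a short, free-headed pile in cohesive
    soil after Broms (1964): resistance ignored down to 1.5*D, then 9*cu*D.
    cu in kPa, pile diameter D and embedded length L in m, load height e in m."""
    a = 9.0 * cu * D                       # limiting soil resistance per metre

    def residual(h):
        f = h / a                          # depth of maximum moment below 1.5*D
        g = L - 1.5 * D - f                # length resisting below that depth
        return h * (e + 1.5 * D + 0.5 * f) - 2.25 * cu * D * g * g

    lo, hi = 0.0, a * (L - 1.5 * D)        # residual is monotonic: bisect
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: 0.6 m diameter pile, 6 m embedment, cu = 50 kPa, ground-line load
Hu = broms_short_pile_cohesive(cu=50.0, D=0.6, L=6.0)
```

Longer embedment or higher undrained strength increases the capacity, the kind of trend the numerical parametric study quantifies.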
Procedia PDF Downloads 126
502 A Research on the Effect of Soil-Structure Interaction on the Dynamic Response of Symmetrical Reinforced Concrete Buildings
Authors: Adinew Gebremeskel Tizazu
Abstract:
The effect of soil-structure interaction on the dynamic response of reinforced concrete buildings of regular, symmetrical geometry is considered in this study. The structures are presumed to be embedded in a homogeneous soil formation underlain by very stiff material or bedrock, and the structure-foundation-soil system is excited at the base by an earthquake ground motion. The superstructure is idealized as a system of lumped masses concentrated at the floor levels and coupled with the substructure. The substructure, comprising the foundation and the soil, is represented by springs and dashpots whose coefficients incorporate the frequency-dependent impedances of the foundation system. The excitation applied to the model consists of recorded ground motions from actual earthquakes. The modal superposition principle is employed to transform the equations of motion from geometric to modal coordinates; however, the modal equations remain coupled through the damping terms because the superstructure and the soil have different damping mechanisms, so proportional damping cannot be assumed for the coupled structural system. An iterative approach is therefore adopted and programmed to solve the coupled modal equations of motion and obtain the displacement response of the system. Parametric studies are performed for buildings with regular, symmetric plans of different structural properties and heights, under fixed-base and flexible-base conditions, for the different soil conditions encountered in Addis Ababa. Displacements, base shears and base overturning moments are compared across foundation embedment depths, site conditions and building heights, and against the corresponding fixed-base values.
The study shows that flexible-base structures generally exhibit responses different from those of fixed-base structures: the natural circular frequencies, base shears and inter-story displacements for the flexible base are lower than for the fixed base. This trend is particularly evident when the flexible soil layer is thick. In contrast, the trend becomes less predictable as the thickness of the flexible soil decreases; in that case the iteration undulates significantly, making prediction difficult, which is attributed to the highly jagged frequency dependence of the impedance functions for such formations. For thin soil layers it is therefore difficult to conclude whether the conventional fixed-base approach yields conservative design forces, as it does for soil formations of large thickness.
Keywords: soil-structure interaction, dynamic response, modal superposition principle, parametric studies
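The iterative treatment of the non-proportionally damped modal equations can be sketched as follows. This is an illustration of the general scheme the abstract describes, with made-up frequencies, damping matrix and loading, not the authors' program: the off-diagonal damping forces are lagged by one outer sweep and moved to the right-hand side, so each mode integrates as an uncoupled SDOF equation.

```python
import numpy as np

# Two modal equations coupled only through a full (non-proportional)
# damping matrix:  q_i'' + sum_j C[i,j] q_j' + w_i^2 q_i = p_i(t).
w2 = np.array([4.0, 25.0])            # squared modal frequencies, (rad/s)^2
C = np.array([[0.30, 0.08],           # modal damping matrix; off-diagonal
              [0.08, 0.50]])          # terms are the coupling
dt, n = 0.005, 2000
t = np.arange(n) * dt
p = np.vstack([np.sin(1.5 * t), 0.3 * np.sin(1.5 * t)])   # modal loads

q = np.zeros((2, n))                  # modal displacements
v = np.zeros((2, n))                  # modal velocities
for sweep in range(40):               # outer iteration on the coupling
    q_prev = q.copy()
    for i in (0, 1):
        j = 1 - i
        pe = p[i] - C[i, j] * v[j]    # pseudo-load with lagged coupling force
        c = C[i, i]
        for k in range(1, n - 1):     # central-difference SDOF integration
            q[i, k + 1] = (dt * dt * (pe[k] - w2[i] * q[i, k])
                           + 2.0 * q[i, k]
                           - (1.0 - 0.5 * c * dt) * q[i, k - 1]) / (1.0 + 0.5 * c * dt)
        v[i, 1:-1] = (q[i, 2:] - q[i, :-2]) / (2.0 * dt)
    if np.max(np.abs(q - q_prev)) < 1e-12:
        break
```

At convergence the lagged scheme satisfies the same difference equations as a direct solve of the fully coupled system, which is how a sketch like this can be checked.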
Procedia PDF Downloads 33
501 Algae Biofertilizers Promote Sustainable Food Production and Nutrient Efficiency: An Integrated Empirical-Modeling Study
Authors: Zeenat Rupawalla, Nicole Robinson, Susanne Schmidt, Sijie Li, Selina Carruthers, Elodie Buisset, John Roles, Ben Hankamer, Juliane Wolf
Abstract:
Agriculture has radically changed the global biogeochemical cycle of nitrogen (N). Fossil-fuel-enabled synthetic N-fertiliser is a foundation of modern agriculture, but crops use only about half of what is applied to soil. To address N-pollution from cropping and the large carbon and energy footprint of N-fertiliser synthesis, new technologies delivering enhanced energy efficiency, decarbonisation, and a circular nutrient economy are needed. We characterised algae fertiliser (AF) as an alternative to synthetic N-fertiliser (SF) using empirical and modelling approaches. We cultivated microalgae in nutrient solution and modelled up-scaled production in nutrient-rich wastewater. Over four weeks, AF released 63.5% of its N as ammonium and nitrate, and 25% of its phosphorus (P) as phosphate, to the growth substrate, while SF released 100% of its N and 20% of its P. To maximise crop N-use and minimise N-leaching, we explored AF and SF dose-response curves with spinach under glasshouse conditions. AF-grown spinach produced 36% less biomass than SF-grown plants due to AF's slower, linear N-release, while SF resulted in five times higher N-leaching loss than AF. Optimised blends of AF and SF boosted crop yield and minimised N-loss through greater synchrony of N-release and crop uptake. Additional benefits of AF included greener leaves, lower leaf nitrate concentration, and higher microbial diversity and water-holding capacity in the growth substrate. Life-cycle analysis showed that replacing the most effective SF dosage with AF lowered the carbon footprint of fertiliser production from 2.02 g CO₂ (carbon-producing) to -4.62 g CO₂ (carbon-sequestering), with a further 12% reduction when AF is produced on wastewater. Embodied energy was lowest for AF-SF blends and could be reduced by 32% when cultivating algae on wastewater.
We conclude that (i) microalgae offer a sustainable alternative to synthetic N-fertiliser in spinach production and potentially other crop systems, and (ii) microalgae biofertilisers support the circular nutrient economy and several sustainable development goals.
Keywords: bioeconomy, decarbonisation, energy footprint, microalgae
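The release-synchrony argument can be made concrete with a toy mass balance. The 63.5% four-week AF release and the immediate 100% SF release are taken from the abstract; the linear release shape, the weekly uptake capacity, and the assumption that unused mineral N leaches are illustrative simplifications, not the authors' model.

```python
# Toy weekly N budget for a blend of algae fertiliser (AF, linear release
# of 63.5 % of its N over 4 weeks) and synthetic fertiliser (SF, all N at
# application).  "Leachable N" = release in excess of crop uptake capacity.
def weekly_release(n_applied, frac_af, weeks=4):
    af, sf = n_applied * frac_af, n_applied * (1 - frac_af)
    out = []
    for wk in range(weeks):
        rel = sf if wk == 0 else 0.0       # SF: everything in week 0
        rel += af * 0.635 / weeks          # AF: slow linear release
        out.append(rel)
    return out

def leachable(releases, uptake_per_week=15.0):
    leach = 0.0
    for r in releases:
        taken = min(r, uptake_per_week)    # crop takes what it can
        leach += r - taken                 # surplus mineral N assumed lost
    return leach

pure_sf = leachable(weekly_release(100.0, frac_af=0.0))
blend   = leachable(weekly_release(100.0, frac_af=0.7))
```

With these assumed numbers the 70/30 AF-SF blend leaches far less N than pure SF, the qualitative effect the abstract reports.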
Procedia PDF Downloads 138
500 Dietary Patterns and Adherence to the Mediterranean Diet among Breast Cancer Female Patients in Lebanon: A Cross-Sectional Study
Authors: Yasmine Aridi, Lara Nasreddine, Maya Khalil, Arafat Tfayli, Anas Mugharbel, Farah Naja
Abstract:
Breast cancer is the most commonly diagnosed cancer among women worldwide and the second most common cause of cancer mortality. Breast cancer rates differ vastly between geographical areas, between countries, and within the same country. In Lebanon, breast cancer accounts for 38.2% of all tumor sites; this proportion is still lower than those observed worldwide but remains the highest among Arab countries. Studies and evidence-based reviews show a strong association between breast cancer development and prognosis and dietary habits, specifically the Mediterranean diet (MD). As such, the aim of this study was to examine dietary patterns and adherence to the MD among a sample of 182 breast cancer female patients in Beirut, Lebanon. Subjects were recruited from two major hospitals, a private medical center and a public hospital, and all were administered two questionnaires: socio-demographics and Mediterranean diet adherence. Five Mediterranean scores were calculated: MS, MSDPS, PMDI, PREDIMED and DDS. The mean age of the participants was 53.78 years. Overall adherence to the MD was low, since the sample means of 3 of the 5 calculated scores were below the scores’ medians. Four of the 5 Mediterranean scores varied significantly between the recruitment sites, with women in the private medical center adhering more closely to the MD. Our results also show that the majority of the sample population exceeded the recommendations for total and saturated fat while meeting the requirements for fiber, EPA, DHA and linolenic acid. Participants in the private medical center consumed significantly more calories, carbohydrates, fiber, sugar, lycopene, calcium, iron and folate, and less fat.
Multivariate linear regression analyses yielded the following significant results: positive associations between MD adherence (PMDI, PREDIMED) and monthly income and current state of health, and negative associations between MD adherence (MSDPS, PREDIMED) and age and employment status. Our findings indicated a low overall adherence to the MD and identified factors associated with it, suggesting a need to address dietary habits among breast cancer patients in Lebanon and, specifically, to encourage them to adhere to their traditional Mediterranean diet.
Keywords: adherence, breast cancer, dietary patterns, Mediterranean diet, nutrition
Procedia PDF Downloads 422
499 Toward Understanding the Glucocorticoid Receptor Network in Cancer
Authors: Swati Srivastava, Mattia Lauriola, Yuval Gilad, Adi Kimchi, Yosef Yarden
Abstract:
The glucocorticoid receptor (GR) has been proposed to play important but incompletely understood roles in cancer. Glucocorticoids (GCs) are widely used as co-medication in various carcinomas, due to their ability to reduce the toxicity of chemotherapy. Furthermore, GR antagonism has proven to be a strategy to treat triple-negative breast cancer and castration-resistant prostate cancer. These observations suggest differential GR involvement in cancer subtypes. The goal of our study has been to extend the current understanding of GR signaling in tumor progression and metastasis. Our study involves two cellular models: non-tumorigenic breast epithelial cells (MCF10A) and Ewing sarcoma cells (CHLA9). In the breast cell model, the GR agonist dexamethasone inhibited EGF-induced mammary cell migration, and this effect was blocked when cells were treated with the GR antagonist RU486. Microarray analysis of gene expression revealed that the underlying mechanism involves dexamethasone-mediated repression of well-known activators of EGFR signaling, along with enhancement of several of EGFR’s negative feedback loops. Because GR acts primarily through glucocorticoid response elements (GREs) or via a tethering mechanism, our next aim was to find the transcription factors (TFs) that can interact with GR in MCF10A cells. TF-binding motifs overrepresented in the promoters of dexamethasone-regulated genes were predicted bioinformatically. To validate the predictions, we performed high-throughput protein complementation assays (PCA), utilizing the Gaussia luciferase PCA strategy, which enables analysis of protein-protein interactions between GR and the predicted TFs of mammary cells.
A library comprising both nuclear receptors (estrogen receptor, mineralocorticoid receptor, GR) and TFs was fused to fragments of GLuc, namely GLuc(1)-X, X-GLuc(1), and X-GLuc(2), where GLuc(1) and GLuc(2) correspond to the N-terminal and C-terminal fragments of the luciferase gene. The resulting library was screened in human embryonic kidney 293T (HEK293T) cells for all possible interactions between nuclear receptors and TFs. By screening all combinations of TFs and nuclear receptors, we identified several positive interactions, which were strengthened in response to dexamethasone and abolished in response to RU486. Furthermore, the interactions between GR and the candidate TFs were validated by co-immunoprecipitation in MCF10A and CHLA9 cells. Currently, the roles played by the uncovered interactions are being evaluated in various cellular processes, such as proliferation, migration, and invasion. In conclusion, our assay provides an unbiased network analysis between nuclear receptors and other TFs, which can lead to important insights into transcriptional regulation by nuclear receptors in various diseases, in this case cancer.
Keywords: epidermal growth factor, glucocorticoid receptor, protein complementation assay, transcription factor
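Hit-calling in a screen like this typically reduces to fold-change filters on normalised luminescence. The sketch below uses invented pair names, numbers and thresholds purely to illustrate the dexamethasone-strengthened / RU486-abolished criterion described above.

```python
import numpy as np

# Toy luminescence ratios for four receptor:TF pairs under three
# conditions.  A pair is kept as a hit when dexamethasone raises the
# signal above a fold-change cutoff AND RU486 brings it back to baseline.
pairs   = ["GR:TF1", "GR:TF2", "GR:TF3", "ER:TF1"]   # hypothetical names
vehicle = np.array([1.0, 1.1, 0.9, 1.0])             # normalised signal
dex     = np.array([3.2, 1.2, 2.8, 1.1])             # + dexamethasone
ru486   = np.array([1.1, 1.2, 2.6, 1.0])             # + RU486

fold_dex = dex / vehicle
hits = [p for p, fd, r, v in zip(pairs, fold_dex, ru486, vehicle)
        if fd > 2.0 and r / v < 1.5]                 # assumed cutoffs
```

With these toy values only the first pair passes both filters: it is induced by dexamethasone and abolished by RU486.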
Procedia PDF Downloads 227
498 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)
Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin
Abstract:
The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks; however, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable guided optimization of existing catalysts and the development of more active and selective processes for transforming biomass to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in transformations of biomass-derived oxygenates has sparked considerable interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which is then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dissociation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by TPD desorption peaks for H₂, CO, CO₂ and CHₓ fragments (x = 1, 2). The assignments of the experimental IR peaks were made by visualization of the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration.
The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted for Ni(111), whereas a significant difference was found for acetate adsorption.
Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature-programmed desorption, density functional theory
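For TPD peaks like these, a standard back-of-envelope tool is the first-order Redhead relation E = R·Tp·[ln(ν·Tp/β) − 3.64]. The heating rate and prefactor below are assumed values, and acetate decomposition is not a simple first-order desorption, so this is only an order-of-magnitude illustration keyed to the 440 K peak.

```python
import math

R = 8.314  # J/(mol K)

def redhead_energy(T_peak, beta=2.0, nu=1e13):
    """First-order Redhead estimate of the activation energy (J/mol)
    from a TPD peak temperature T_peak (K), heating rate beta (K/s)
    and an assumed pre-exponential factor nu (1/s)."""
    return R * T_peak * (math.log(nu * T_peak / beta) - 3.64)

# Estimate for the 440 K acetate decomposition feature, in kJ/mol
E = redhead_energy(440.0) / 1000.0
```

With these assumptions the 440 K feature corresponds to an effective barrier of roughly 115 kJ/mol; the number shifts with the assumed prefactor, which is why DFT is used for the actual assignments.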
Procedia PDF Downloads 108
497 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models
Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri
Abstract:
Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on its attractiveness, which includes the routes between the city and the airport as well as the facilities available to reach it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination such as an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, age above 55, and other factors proved important for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, the locations with the highest trip generation to the airport were identified by trip mode and purpose: the district generating the most airport trips was District 2 of Tehran, with 23 trips, and the most common mode from that location was online taxi, with 12 trips. Significant variables in the mode split and travel behavior for airport access were then investigated for all systems. Here, the most influential factor is the time needed to reach the airport, followed by the user-friendliness of the mode as a component of passenger preference.
It was also demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxis. Based on the responses of users of personal and semi-public vehicles, passengers' willingness to reach the airport via public transportation was explored, with the aim of enhancing present services and developing new strategies for providing the most efficient modes of transportation. A binary model made clear that business travelers, and people who had already driven to the airport, were the least likely to change modes.
Keywords: multimodal transportation, demand modeling, travel behavior, statistical models
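Mode-choice results of this kind usually come from a binary logit model. The sketch below uses invented coefficients (not the fitted model from the paper) to show how reducing public transport access time raises its choice probability, consistent with the finding above.

```python
import math

def logit_p_public(dt_min, dcost, b_time=-0.08, b_cost=-0.002, asc=0.5):
    """Binary logit probability of choosing public transport for airport
    access.  dt_min and dcost are public-minus-private differences in
    access time (min) and cost; all coefficients are assumed values."""
    v = asc + b_time * dt_min + b_cost * dcost     # systematic utility
    return 1.0 / (1.0 + math.exp(-v))              # logistic choice prob.

p_now    = logit_p_public(dt_min=25.0, dcost=-40.0)  # PT 25 min slower
p_faster = logit_p_public(dt_min=5.0,  dcost=-40.0)  # PT travel time improved
```

Cutting the public-transport time disadvantage from 25 to 5 minutes roughly doubles its choice probability under these toy coefficients, the qualitative mechanism behind the market-share shift described.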
Procedia PDF Downloads 173
496 Transient Heat Transfer: Experimental Investigation near the Critical Point
Authors: Andreas Kohlhepp, Gerrit Schatte, Wieland Christoph, Spliethoff Hartmut
Abstract:
In recent years, research on the heat transfer phenomena of water and other working fluids near the critical point has attracted growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation, which requires speeding up the load-change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a large difference between fluid and wall temperatures may cause problems: deteriorated heat transfer (at supercritical pressures), or dry-out and departure from nucleate boiling (at subcritical pressures), all of which lead to an excessive rise in wall temperature. For relevant technical applications, heat transfer coefficients must be predicted correctly in transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles or fuel rods). In transient processes, the state-of-the-art method of calculating heat transfer coefficients is to apply a multitude of different steady-state correlations to the momentary local parameters at each time step. This approach does not necessarily reflect the different cases that can cause significant variation of the heat transfer coefficients, and it shows gaps in the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes; it is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculations against experimental data. Using a new full-scale supercritical thermo-hydraulic test rig, experimental data are obtained to describe the transient phenomena under the dynamic boundary conditions mentioned above and to serve for validation of transient steam generator calculations.
The test rig was specially designed with the aim of improving correlations for predicting the onset of deteriorated heat transfer in both stationary and transient cases. It is a closed-loop design with a directly electrically heated evaporation tube; the total heating power of the evaporator tube and the preheater is 1 MW. To allow a wide range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements capture the most important extrinsic thermo-hydraulic parameters, and a high geometric resolution allows the local heat transfer coefficients and fluid enthalpies to be determined accurately.
Keywords: departure from nucleate boiling, deteriorated heat transfer, dryout, supercritical working fluid, transient operation of steam generators
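The quasi-steady approach criticised above, evaluating a steady-state correlation at the momentary local parameters of every time step, can be sketched with any single-phase correlation. Dittus-Boelter is used here purely for illustration (it is not valid near the critical point, which is exactly the gap the abstract addresses), and all parameter values are assumed.

```python
def dittus_boelter_h(Re, Pr, k, d):
    """Heat transfer coefficient, W/(m2 K), from the Dittus-Boelter
    correlation for heating: Nu = 0.023 * Re^0.8 * Pr^0.4.
    k = fluid thermal conductivity W/(m K), d = tube diameter m."""
    Nu = 0.023 * Re**0.8 * Pr**0.4
    return Nu * k / d

# Toy load transient: mass flux (hence Re) ramping down over 10 time steps,
# with the steady-state correlation re-evaluated at each step's local state.
hs = [dittus_boelter_h(Re=5e5 * (1.0 - 0.05 * i), Pr=1.2, k=0.09, d=0.02)
      for i in range(10)]
```

The coefficient simply tracks the momentary Reynolds number; deterioration effects during the transient are invisible to such a scheme, motivating the dedicated transient experiments.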
Procedia PDF Downloads 222
495 Efficacy of Different Soil-Applied Fungicides to Manage Phytophthora Root Rot of Chili (Solanum annum) in Pakistan
Authors: Kiran Nawaz, Ahmad Ali Shahid, Sehrish Iftikhar, Waheed Anwar, Muhammad Nasir Subhani
Abstract:
Chili (Solanum annum L.) is attacked by many fungal pathogens, including members of the Oomycetes, which are responsible for root rot in chili-growing areas around the world and cause economic losses in different regions of Pakistan. Most plant tissues, including roots, crowns, fruit, and leaves, are vulnerable to Phytophthora capsici. Phytophthora root rot of chili is very difficult to manage, as many commercial varieties are extremely susceptible to P. capsici. The causal agent of the disease was isolated on corn meal agar (CMA) and identified morphologically using available taxonomic keys. The pathogen was also confirmed at the molecular level through the internal transcribed spacer region and other molecular markers; BLASTn results showed 100% homology with previously reported P. capsici sequences in the NCBI database. Most farmers have conventionally relied on foliar fungicide applications to control Phytophthora root rot, in spite of their incomplete effectiveness. In this study, an in vitro plate assay, seed soaking, and foliar applications of six fungicides were evaluated against chili root rot. The in vitro assay showed significant inhibition of linear growth of P. capsici: 7.0% with Triflumizole, 8.9% with Thiophanate-methyl, 6.0% with Etridiazole, 5.9% with Propamocarb, and 7.5% with Mefenoxam and Iprodione. The promising treatments from the in vitro plate bioassay were evaluated in pot experiments under controlled conditions in the greenhouse, with all fungicides applied at 6-day intervals. The pot experiments showed that all treatments considerably reduced the incidence of P. capsici root rot. In addition, seed soaking with each of the six fungicides combined with a foliar spray of the same compound gave a significant reduction in root rot incidence.
The combined treatments of all fungicides, i.e., seed soaking followed by foliar spray as indicated by the in vitro bioassay, are considered non-harmful control methods, each with advantages and limitations. Hence, these applications proved effective and safe for the management of soil-borne plant pathogens.
Keywords: BLASTn, bioassay, corn meal agar (CMA), oomycetes
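In vitro inhibition percentages of this kind follow from colony diameters via the usual formula, inhibition % = (C − T)/C × 100. The diameters below are invented so that the outputs match two of the reported values; they are not the paper's raw data.

```python
def inhibition_pct(control_mm, treated_mm):
    """Percent inhibition of radial (linear) growth relative to an
    unamended control plate; C and T are colony diameters in mm."""
    return (control_mm - treated_mm) / control_mm * 100.0

control = 80.0                                     # mm, assumed control colony
treated = {"Triflumizole": 74.4, "Etridiazole": 75.2}   # mm, assumed
pct = {name: inhibition_pct(control, d) for name, d in treated.items()}
```

With these assumed diameters the formula reproduces the 7.0% and 6.0% inhibition figures quoted in the abstract.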
Procedia PDF Downloads 242
494 Teachers’ Role and Principal’s Administrative Functions as Correlates of Effective Academic Performance of Public Secondary School Students in Imo State, Nigeria
Authors: Caroline Nnokwe, Iheanyi Eneremadu
Abstract:
Teachers and principals are vital and integral parts of the educational system. For educational objectives to be met, the roles of teachers and the functions of principals cannot be overlooked; however, the inability of teachers and principals to carry out these roles effectively has affected students’ performance. The study therefore examined teachers’ roles and principals’ administrative functions as correlates of effective academic performance of public secondary school students in Imo State, Nigeria. Four research questions and two hypotheses guided the study, which adopted a correlational research design. The sample size, determined via the Yaro-Yamane technique, was 5,438 respondents, consisting of 175 teachers, 13 principals and 5,250 students selected using proportional stratified random sampling. The instruments for data collection were a researcher-made questionnaire titled Teachers’ Role/Principals’ Administrative Functions Questionnaire (TRPAFQ), with a Cronbach alpha coefficient of .82, and students’ internal results obtained from the school authorities. The research questions were answered using Pearson product-moment correlation statistics, while the hypotheses were tested at the 0.05 level of significance using simple linear regression analysis. The findings showed that teachers’ educational qualifications, organizing, and planning correlated with students’ academic performance to a great extent, while the availability and proper use of instructional materials by teachers correlated with students’ academic performance to a very high extent.
The findings also revealed a significant relationship between teachers’ role, principals’ administrative functions and students’ academic performance in public secondary schools in Imo State. The study recommended, among others, that government, through the ministry of education and the education authorities, adequately staff their supervisory departments in order to carry out proper supervision of secondary school teachers, and also provide adequate instructional materials to ensure greater academic performance among secondary school students of Imo State, Nigeria.
Keywords: instructional materials, principals’ administrative functions, students’ academic performance, teacher role
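The Yaro-Yamane technique referenced above is the sample-size formula n = N/(1 + N·e²). The population size and margin of error below are assumptions for illustration; the study's population figure is not stated in the abstract.

```python
def yamane_n(N, e=0.05):
    """Yamane (1967) sample size for a finite population N at
    margin of error e (here the conventional 5 %)."""
    return N / (1.0 + N * e * e)

# Hypothetical population of 50,000 students, teachers and principals
n = yamane_n(N=50_000)
```

For any large population the formula saturates near 1/e² = 400 respondents at a 5% margin; larger samples such as the study's 5,438 correspond to a tighter assumed margin of error.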
Procedia PDF Downloads 86
493 Advancing Entrepreneurial Knowledge Through Re-Engineering Social Studies Education
Authors: Chukwuka Justus Iwegbu, Monye Christopher Prayer
Abstract:
Propeller aircraft engines, and more generally engines with a large rotating part (turboprops, high-bypass-ratio turbojets, etc.), are widely used in industry and are subject to numerous developments aimed at reducing fuel consumption. In this context, unconventional architectures such as open rotors or distributed propulsion appear, and it is necessary to consider the influence of these systems on the aircraft's stability in flight. Indeed, the tendency to lengthen the blades and the wings on which these propulsion devices are mounted increases their flexibility and accentuates the risk of whirl flutter. This aeroelastic instability is due to the precession motion of the propeller's axis of rotation, which changes the angle of attack of the flow on the blades and creates unsteady aerodynamic forces and moments that can amplify the motion and render it unstable. Whirl flutter can ultimately lead to the destruction of the engine. There exists a critical speed of the incident flow: below this value the motion is damped and the system is stable, whereas beyond it the flow feeds energy into the system (negative damping) and the motion becomes unstable. A simple whirl flutter model is based on the work of Houbolt and Reed, who proposed an analytical expression for the aerodynamic load on a rigid-blade propeller whose axis orientation undergoes small perturbations. Their work considered a propeller subjected to pitch and yaw motions, a flow undisturbed by the blades, and a propeller generating no thrust in the absence of precession. The unsteady aerodynamic forces were obtained using thin-airfoil theory and strip theory. In the present study, the unsteady aerodynamic loads are expressed for a general motion of the propeller (not only pitch and yaw).
The acceleration and rotation of the flow by the propeller are modeled using a Blade Element Momentum Theory (BEMT) approach, which also enables the thrust generated by the blades to be taken into account; the thrust is found to have a stabilizing effect. The aerodynamic model is further developed using Theodorsen theory, and a reduced-order model of the aerodynamic load is finally constructed in order to perform a linear stability analysis.
Keywords: advancing, entrepreneurial, knowledge, industrialization
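A linear whirl-flutter stability analysis of the kind described reduces to scanning the eigenvalues of a small state matrix over the flow speed. The pitch/yaw model below uses made-up coefficients (not the paper's BEMT/Theodorsen model): gyroscopic coupling plus a speed-dependent antisymmetric aerodynamic stiffness, which produces a critical speed where a growth rate turns positive.

```python
import numpy as np

# Pitch/yaw pylon modes:  M q'' + (damping + gyroscopic) q' + K q = 0,
# with the airflow adding diagonal damping and an antisymmetric
# ("circulatory") stiffness growing with V^2.  All coefficients assumed.
k, c, g = 25.0, 0.40, 1.5              # stiffness, damping, gyroscopic term

def max_growth_rate(V):
    C = np.array([[c + 0.01 * V,  g],
                  [-g,            c + 0.01 * V]])
    K = np.array([[k,             0.12 * V**2],
                  [-0.12 * V**2,  k]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],   # first-order state matrix
                  [-K,               -C]])
    return np.max(np.linalg.eigvals(A).real)       # largest growth rate

speeds = np.linspace(0.0, 20.0, 201)
growth = np.array([max_growth_rate(V) for V in speeds])
V_crit = speeds[np.argmax(growth > 0.0)]           # first unstable speed
```

Below V_crit all eigenvalues have negative real parts (damped precession); above it one whirl mode extracts energy from the flow, the negative-damping behavior the abstract describes.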
Procedia PDF Downloads 96
492 A Differential Scanning Calorimetric Study of Frozen Liquid Egg Yolk Thawed by Different Thawing Methods
Authors: Karina I. Hidas, Csaba Németh, Anna Visy, Judit Csonka, László Friedrich, Ildikó Cs. Nyulas-Zeke
Abstract:
Egg yolk is a popular ingredient in the food industry due to its gelling, emulsifying, colouring, and coagulating properties. Because of the heat sensitivity of its proteins, egg yolk can only be heat-treated at low temperatures, so its shelf life, even with the addition of a preservative, is only a few weeks. Freezing can increase the shelf life of liquid egg yolk up to one year, but below -6°C the yolk undergoes gelation, an irreversible phenomenon. The degree of gelation depends on the time and temperature of freezing and is influenced by the thawing process; therefore, in this experiment we examined egg yolks thawed in different ways. Unpasteurized, industrially broken, separated, and homogenized liquid egg yolk was used. Freshly produced samples were frozen in plastic containers at -18°C in a laboratory freezer and stored frozen for 90 days. Samples were analysed at day zero (unfrozen) and after 1, 7, 14, 30, 60 and 90 days of frozen storage, and were thawed in two ways (at 5°C for 24 hours or at 30°C for 3 hours) before testing. Calorimetric properties were examined by differential scanning calorimetry, recording heat flow curves; denaturation enthalpy values were calculated by fitting a linear baseline, and denaturation temperature values were evaluated. In addition, the dry matter content of the samples was measured by the oven method, drying at 105°C to constant weight. For statistical analysis, two-way ANOVA (α = 0.05) was employed, with thawing mode and freezing time as the fixed factors. Denaturation enthalpy values decreased from 1.1 to 0.47 by the end of the storage experiment, a reduction of about 60%. The effect of freezing time on these values was significant; even the enthalpy of samples stored frozen for 1 day was significantly reduced. However, the mode of thawing did not significantly affect the denaturation enthalpy of the samples, and no interaction was seen between the two factors.
The denaturation temperature and dry matter content did not change significantly with either the freezing period or the thawing mode. The results of our study show that slow freezing and frozen storage at -18 °C greatly reduce the amount of protein that can be denatured in egg yolk, indicating that the proteins underwent aggregation, denaturation or other protein conversions regardless of how the samples were thawed.
Keywords: denaturation enthalpy, differential scanning calorimetry, liquid egg yolk, slow freezing
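The two-way ANOVA layout described above can be sketched numerically. The following is an editorial illustration with invented enthalpy numbers (not the study's data): a balanced design with two thawing modes, three storage times, and two replicates per cell, from which the F statistics for both main effects and the interaction are computed.

```python
# Illustrative sketch only: two-way ANOVA F statistics for a balanced
# design. All enthalpy values below are made up for the example.

def two_way_anova(data):
    """data[i][j] is a list of replicates for level i of factor A
    (thawing mode) and level j of factor B (freezing time)."""
    a, b = len(data), len(data[0])
    n = len(data[0][0])
    all_vals = [y for row in data for cell in row for y in cell]
    grand = sum(all_vals) / len(all_vals)
    mean_a = [sum(y for cell in row for y in cell) / (b * n) for row in data]
    mean_b = [sum(y for row in data for y in row[j]) / (a * n) for j in range(b)]
    cell_mean = [[sum(c) / n for c in row] for row in data]

    ss_a = n * b * sum((m - grand) ** 2 for m in mean_a)
    ss_b = n * a * sum((m - grand) ** 2 for m in mean_b)
    ss_ab = n * sum((cell_mean[i][j] - mean_a[i] - mean_b[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_e = sum((y - cell_mean[i][j]) ** 2
               for i in range(a) for j in range(b) for y in data[i][j])

    ms_e = ss_e / (a * b * (n - 1))
    f_a = (ss_a / (a - 1)) / ms_e
    f_b = (ss_b / (b - 1)) / ms_e
    f_ab = (ss_ab / ((a - 1) * (b - 1))) / ms_e
    return f_a, f_b, f_ab

# Hypothetical data: storage time lowers the mean, thawing mode does not.
data = [[[1.11, 1.09], [0.81, 0.79], [0.51, 0.49]],   # thawed at 5 degC
        [[1.09, 1.11], [0.79, 0.81], [0.49, 0.51]]]   # thawed at 30 degC
f_a, f_b, f_ab = two_way_anova(data)
```

With storage time driving the cell means and thawing mode inert, the F statistic for freezing time dwarfs the one for thawing mode, mirroring the pattern reported in the abstract.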
Procedia PDF Downloads 129
491 Green-Synthesized β-Cyclodextrin Membranes for Humidity Sensors
Authors: Zeineb Baatout, Safa Teka, Nejmeddine Jaballah, Nawfel Sakly, Xiaonan Sun, Mustapha Majdoub
Abstract:
Currently, the economic interest in developing bio-based materials makes biomass one of the most attractive areas for scientific development. We are interested in β-cyclodextrin (β-CD), one of the most popular bio-sourced macromolecules, produced from starch via enzymatic conversion. It is a cyclic oligosaccharide formed by the association of seven glucose units. It presents a rigid conical and amphiphilic structure with a hydrophilic exterior, which makes it water-soluble, and a hydrophobic interior enabling the formation of inclusion complexes, which supports its application in electrochemical and optical sensors. Nevertheless, the water solubility of β-CD makes its use as a sensitive layer limited and difficult because of instability in aqueous media. To overcome this limitation, we chose to modify the hydroxyl groups to obtain hydrophobic derivatives, which lead to water-stable sensing layers. Hence, a series of benzylated β-CDs was synthesized in basic aqueous media in one pot. This work reports the synthesis of this new family of substituted amphiphilic β-CDs using a green methodology. The obtained β-CDs showed degrees of substitution (DS) between 0.85 and 2.03. These organic macromolecular materials were soluble in common volatile organic solvents, and their structures were investigated by NMR and FT-IR spectroscopies and MALDI-TOF mass spectrometry. Thermal analysis showed a correlation between the thermal properties of these derivatives and the degree of benzylation. The surface properties of thin films based on the benzylated β-CDs were characterized by contact angle measurements and atomic force microscopy (AFM). These organic materials were investigated as sensitive layers, deposited on a quartz crystal microbalance (QCM) gravimetric transducer, for humidity sensing at room temperature. The results showed that the performance of the prepared sensors is strongly influenced by the benzylation degree of the β-CD. 
The partially modified β-CD (DS = 1) shows a linear response with the best sensitivity, good reproducibility, low hysteresis, and fast response (15 s) and recovery (17 s) times over the relative humidity (RH) range of 11% to 98% at room temperature.
Keywords: β-cyclodextrin, green synthesis, humidity sensor, quartz crystal microbalance
Procedia PDF Downloads 272
490 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structural Descriptors
Authors: Navid Kaboudi, Ali Shayanfar
Abstract:
Introduction: Feeding infants with safe milk from the beginning of their lives is an important issue. Drugs used by mothers can affect the composition of milk in a way that is not only unsuitable but also toxic for infants. Consuming permeable drugs during that sensitive period could lead to serious side effects in the infant. Due to the ethical restrictions on drug testing in humans, especially women during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the milk/plasma (M/P) ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value, following the Malone classification: (1) drugs with M/P > 1, which are considered high risk, and (2) drugs with M/P ≤ 1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Subsequently, four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were established for the prediction; the mentioned descriptors can predict penetration with acceptable accuracy. For the remaining compounds (N = 147, 158, 160, and 174 for models 1 to 4, respectively) of each model, binary logistic regression with SPSS 21 was performed to obtain a model predicting the penetration ratio of compounds. Only structural descriptors with p-value < 0.1 remained in the final model. 
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained in order to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in the milk fat. The pH of milk is acidic, and because of that, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may reach lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can speed up the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues; QSAR modeling could help scientists reduce the amount of animal testing, and our models are also suitable for that purpose.
Keywords: logistic regression, breastfeeding, descriptors, penetration
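As an illustration of the binary logistic regression step, the sketch below fits P(M/P > 1) against a single hypothetical descriptor (hydrogen-bond-acceptor count) on invented data. It shows the technique only, not the study's fitted SPSS models or coefficients.

```python
# Editorial sketch: plain gradient-descent logistic regression on one
# descriptor. Data are invented, chosen to follow the abstract's tendency
# that compounds with many H-bond acceptors are less permeable to milk.
import math

def fit_logistic(xs, ys, lr=0.05, epochs=8000):
    """Fit P(y=1|x) = sigmoid(w*x + b) by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# x = number of H-bond acceptors; y = 1 if M/P > 1 (high risk), else 0.
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
w, b = fit_logistic(xs, ys)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

The fitted slope comes out negative: more hydrogen bond acceptors push the predicted probability of penetration down, consistent with the threshold behavior described above.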
Procedia PDF Downloads 72
489 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer
Authors: Jalil ur Rehman, Ramesh C, Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott
Abstract:
The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184 degrees, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT); nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160 degrees), as commonly used at the MD Anderson Cancer Center, Houston, for intensity modulated radiation therapy (IMRT); and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with a collapsed cone convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the OAR volumes receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of the VMAT and IMRT plans was performed using the ArcCHECK method with gamma index criteria of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) of all the studied techniques. In comparison, maximal doses for 3DCRT were higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses 26%, 5% and 26% higher than VMAT for the esophagus, normal brain and thyroid, respectively. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT, and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS. 
The average relative standard errors (RSE) of three deliveries across eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% of points passed), and the QA results were greater than 98%. The calculated maximal doses and volumes of OARs suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD
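The gamma index used for the plan QA above can be illustrated in one dimension. The following is a simplified editorial sketch of a globally normalized gamma comparison between two dose profiles; it is not the ArcCHECK vendor algorithm, and the profile values are invented.

```python
# Illustrative 1-D gamma index (dose tolerance as a fraction of the
# reference maximum, DTA in mm). Profiles below are made up.
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Percent of measured points with gamma <= 1 against the reference."""
    d_max = max(ref)
    passed = 0
    for i, dm in enumerate(meas):
        best = float("inf")
        for j, dr in enumerate(ref):
            dist = (i - j) * spacing_mm                 # spatial offset
            ddose = (dm - dr) / (dose_tol * d_max)      # dose offset
            best = min(best, math.sqrt((dist / dta_mm) ** 2 + ddose ** 2))
        if best <= 1.0:
            passed += 1
    return 100.0 * passed / len(meas)

ref = [10, 40, 90, 100, 90, 40, 10]    # reference dose profile, 1 mm grid
meas = [11, 41, 88, 99, 91, 39, 10]    # measured profile, small deviations
rate = gamma_pass_rate(ref, meas, spacing_mm=1.0)
```

With deviations inside the 3% global dose tolerance, every point passes, which is the kind of result behind a "greater than 98%" QA figure.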
Procedia PDF Downloads 507
488 Frustration Measure for Dipolar Spin Ice and Spin Glass
Authors: Konstantin Nefedev, Petr Andriushchenko
Abstract:
Frustrated magnets are usually understood as materials in which the interactions between localized magnetic moments or spins compete and cannot all be satisfied simultaneously. The best-known and simplest example of a frustrated system is the antiferromagnetic Ising model on a triangle. Physically, the existence of frustration means that one cannot make all three pairs of spins anti-parallel in the basic triangular unit. In the physics of interacting particle systems, vector models constructed from a pair-interaction law are used. The pair interaction energy between one-component vectors can take two values of opposite sign, excluding the case of zero. Mathematically, the existence of frustration in a system means that it is impossible for all pair interaction energies in the Hamiltonian to be negative, even in the ground state (lowest energy). In fact, a frustration is an excitation that remains in the system when thermodynamics no longer operates, i.e. at absolute zero temperature. The origin of frustration is the presence of at least one 'unsatisfied' pair of interacting spins (magnetic moments). The minimal relative number of these excitations (the relative number of frustrations in the ground state) can be used as a frustration parameter. If the energy of the ground state is Egs, and the sum of all pair interaction energies taken with positive sign is Emax, then the proposed frustration parameter pf takes values in the interval [0,1] and is defined as pf = (Egs + Emax)/(2 Emax). For the antiferromagnetic Ising model on a triangle, pf = 1/3. We calculated the frustration parameters in the thermodynamic limit for different 2D periodic structures of Ising dipoles placed on the edges of the lattice and interacting by means of the long-range dipolar interaction. For the honeycomb lattice pf = 0.3415, for the triangular lattice pf = 0.2468, and for the kagome lattice pf = 0.1644. 
All dependences of the frustration parameter on 1/N follow a linear law. The proposed frustration parameter allows the thermodynamics of all magnetic systems to be considered from a unified point of view and the different lattice systems of interacting particles to be compared within the framework of vector models. This parameter can serve as a fundamental characteristic of frustrated systems. It does not depend on temperature or on the thermodynamic state in which the system is found, such as spin ice, spin glass, spin liquid or even spin snow. It gives the minimal relative number of excitations that can exist in the system at T = 0.
Keywords: frustrations, order parameter, statistical physics, magnetism
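The definition pf = (Egs + Emax)/(2 Emax) can be checked directly for small systems by brute-force enumeration of Ising configurations. The sketch below (an editorial illustration, not the authors' code) reproduces the quoted value pf = 1/3 for the antiferromagnetic triangle.

```python
# Brute-force frustration parameter for a small Ising system with
# pair energies E = sum_ij J_ij * s_i * s_j (J > 0 is antiferromagnetic).
from itertools import product

def frustration_parameter(pairs, couplings):
    n = 1 + max(max(p) for p in pairs)
    # ground-state energy over all 2^n spin configurations
    e_gs = min(sum(j * s[a] * s[b] for (a, b), j in zip(pairs, couplings))
               for s in product((-1, 1), repeat=n))
    # energy with every pair interaction taken with positive sign
    e_max = sum(abs(j) for j in couplings)
    return (e_gs + e_max) / (2 * e_max)

# Antiferromagnetic Ising model on a triangle: three equal bonds.
pairs = [(0, 1), (0, 2), (1, 2)]
couplings = [1.0, 1.0, 1.0]
pf = frustration_parameter(pairs, couplings)   # expected: 1/3
```

A single antiferromagnetic bond, by contrast, is unfrustrated and gives pf = 0, consistent with pf lying in [0, 1].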
Procedia PDF Downloads 169
487 A One-Dimensional Model for Contraction in Burn Wounds: A Sensitivity Analysis and a Feasibility Study
Authors: Ginger Egberts, Fred Vermolen, Paul van Zuijlen
Abstract:
One of the common complications of post-burn scars is contraction. Depending on the extent of contraction and the wound dimensions, the contracture can cause a limited range of motion of joints. A one-dimensional morphoelastic continuum hypothesis-based model describing post-burn scar contraction is considered. The beauty of the one-dimensional model is its speed; it quickly yields new results and, therefore, insight. The model describes the movement of the skin and the development of the strain present. Besides these mechanical components, the model also contains chemical components that play a major role in the wound healing process: fibroblasts, myofibroblasts, so-called signaling molecules, and collagen. The dermal layer is modeled as an isotropic morphoelastic solid, and pulling forces are generated by myofibroblasts. The solution to the model equations is approximated by the finite-element method using linear basis functions. One of the major challenges in biomechanical modeling is the estimation of parameter values; therefore, this study provides a comprehensive description of skin mechanical parameter values and a sensitivity analysis. Further, since skin mechanical properties change with aging, it is important that the model be feasible for predicting the development of contraction in burn patients of different ages, and hence this study also provides a feasibility study. The variability in the solutions is generated by varying the values of some parameters simultaneously over the domain of computation, using the results of the sensitivity analysis. The sensitivity analysis shows that the most sensitive parameters are the equilibrium concentration of collagen, the apoptosis rate of fibroblasts and myofibroblasts, and the secretion rate of signaling molecules. 
This suggests that most of the variability in the evolution of contraction in burn patients of different ages might be caused by the decreasing equilibrium collagen concentration. As expected, the feasibility study shows that this model can reproduce distinct extents of contraction in burns in patients of different ages. Nevertheless, contraction formation in children differs from contraction formation in adults because of growth. This factor has not yet been incorporated in the model, and therefore the feasibility results for children differ from what is seen in the clinic.
Keywords: biomechanics, burns, feasibility, fibroblasts, morphoelasticity, sensitivity analysis, skin mechanics, wound contraction
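The finite-element discretization with linear basis functions mentioned above can be illustrated on a much simpler model problem than the morphoelastic system: -u'' = 1 on (0,1) with u(0) = u(1) = 0, whose exact solution is u(x) = x(1-x)/2. The sketch below is an editorial example of linear-basis FEM assembly, not the authors' wound model.

```python
# 1-D linear finite elements for -u'' = 1, u(0) = u(1) = 0.
# Element stiffness (1/h)[[1,-1],[-1,1]] and load (h/2)[1,1] are the
# standard hat-function results for this problem.
import numpy as np

def solve_poisson_1d(n_el):
    h = 1.0 / n_el
    n = n_el + 1
    K = np.zeros((n, n))
    f = np.zeros(n)
    for e in range(n_el):                               # element loop
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # stiffness
        fe = np.array([0.5, 0.5]) * h                   # load for f = 1
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += ke
        f[idx] += fe
    # Dirichlet conditions: solve on interior nodes only.
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])
    return u

u = solve_poisson_1d(16)   # nodal values; midpoint should be near 0.125
```

For this 1-D problem, linear elements reproduce the exact solution at the nodes, which is one reason one-dimensional models are so fast and attractive for parameter studies.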
Procedia PDF Downloads 160
486 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; our goal is to find groups of frames based on similarity within the video. Applying K-means clustering to all frames is time-consuming; therefore, we started by identifying transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and removes the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity change. We then used this vector of filter responses as input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with the corresponding color to form a visual map in which similar pixels are grouped. We then computed a cluster score indicating how near clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change while semantically related events were happening in the scene. We marked the breakpoints at which the root mean square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from the first part, we randomly selected a 16-frame clip and extracted spatiotemporal features for every 16 frames using a pre-trained convolutional 3D network (C3D). 
The final C3D output is a 512-dimensional feature vector; hence we used principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) to train the model, and the multi-class classifier is used to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
Keywords: video segmentation, action detection, classification, K-means, C3D
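Two of the steps described above, flagging transition frames from a large frame-to-frame change and clustering pixel features with K-means, can be sketched on a tiny synthetic "video". This editorial illustration replaces the Gaussian/Laplacian filter-bank features with raw intensity for brevity; it is not the paper's implementation.

```python
# Sketch: (1) transition-frame detection via mean absolute frame
# difference, (2) a small 1-D K-means with deterministic quantile
# initialization, applied to pixels around the detected cut.
import numpy as np

def transition_frames(video, thresh):
    diffs = [np.abs(video[i] - video[i - 1]).mean() for i in range(1, len(video))]
    return [i for i, d in enumerate(diffs, start=1) if d > thresh]

def kmeans(x, k, iters=50):
    # quantile initialization keeps the sketch reproducible
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Six 4x4 "frames": a dark scene that cuts to a bright scene at frame 3.
video = np.array([np.full((4, 4), v, float) for v in [10, 11, 10, 200, 201, 200]])
cuts = transition_frames(video, thresh=50.0)          # expect the cut at 3

# Cluster the pixels of the frames on either side of the cut.
pixels = np.concatenate([video[2].ravel(), video[3].ravel()])
labels, centers = kmeans(pixels, k=2)
```

Mapping each pixel to its nearest center and colouring by label is exactly the "visual map" construction described in the abstract, here collapsed to two intensity clusters.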
Procedia PDF Downloads 77
485 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
Solving the nonlinear dynamic equilibrium equations of base-isolated structures with a conventional monolithic solution approach, i.e. an implicit single-step time integration method combined with an iteration procedure, and simulating the dynamic behavior of seismic isolators with existing nonlinear analytical models, such as differential equation models, can require significant computational effort. In order to reduce the numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as is the case for the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model considered in this work. 
Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows a critical time step considerably larger than the time step of the imposed ground acceleration, thus avoiding stability problems in the proposed mixed method.
Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model
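The implicit half of the mixed scheme, Newmark's constant average acceleration method (gamma = 1/2, beta = 1/4), can be sketched for a linear single-degree-of-freedom oscillator in its standard textbook incremental form. This is an editorial illustration of the integrator only, not the authors' coupled base-isolation implementation.

```python
# Newmark time integration for m*u'' + c*u' + k*u = p(t), incremental
# form for linear systems. With beta = 1/4, gamma = 1/2 the scheme is
# unconditionally stable and introduces no numerical damping.
import math

def newmark_linear(m, c, k, u0, v0, p, dt, beta=0.25, gamma=0.5):
    a = (p[0] - c * v0 - k * u0) / m                     # initial accel.
    k_hat = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    ca = m / (beta * dt) + gamma * c / beta
    cb = m / (2 * beta) + dt * (gamma / (2 * beta) - 1.0) * c
    u, v = u0, v0
    hist = [u0]
    for i in range(len(p) - 1):
        dp_hat = (p[i + 1] - p[i]) + ca * v + cb * a
        du = dp_hat / k_hat
        dv = (gamma * du / (beta * dt) - gamma * v / beta
              + dt * (1.0 - gamma / (2 * beta)) * a)
        da = du / (beta * dt ** 2) - v / (beta * dt) - a / (2 * beta)
        u, v, a = u + du, v + dv, a + da
        hist.append(u)
    return hist

# Free vibration check: an undamped oscillator with a 1 s natural period
# should return to its initial displacement after one period.
m, k = 1.0, (2 * math.pi) ** 2
dt, steps = 0.001, 1000
hist = newmark_linear(m, 0.0, k, 1.0, 0.0, [0.0] * (steps + 1), dt)
```

Because the average acceleration scheme conserves amplitude exactly, the only error over one period is a tiny period elongation, so the displacement history swings between +1 and -1 to high accuracy.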
Procedia PDF Downloads 280
484 Bionaut™: A Minimally Invasive Microsurgical Platform to Treat Non-Communicating Hydrocephalus in Dandy-Walker Malformation
Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher
Abstract:
The Dandy-Walker malformation (DWM) represents a clinical syndrome manifesting as a combination of posterior fossa cyst, hypoplasia of the cerebellar vermis, and obstructive hydrocephalus. Anatomic hallmarks include hypoplasia of the cerebellar vermis, enlargement of the posterior fossa, and cystic dilatation of the fourth ventricle. Current treatments of DWM, including shunting of the cerebrospinal fluid ventricular system and endoscopic third ventriculostomy (ETV), are frequently clinically insufficient, require additional surgical interventions, and carry risks of infection and neurological deficits. Bionaut Labs is developing an alternative way to treat DWM associated with non-communicating hydrocephalus. We utilize discrete microsurgical Bionaut™ particles that are controlled externally and remotely to perform safe, accurate, effective fenestration of the Dandy-Walker cyst, specifically in the posterior fossa of the brain, to directly normalize intracranial pressure. Bionaut™ allows complex non-linear trajectories not feasible with any conventional surgical technique. The microsurgical particle safely reaches targets in the lower occipital section of the brain. Bionaut™ offers a minimally invasive surgical alternative to highly involved posterior craniotomy or shunts via direct fenestration of the fourth ventricular cyst at the locus defined by the individual anatomy. Our approach offers significant advantages over the current standards of care in patients exhibiting anatomical challenges as a manifestation of DWM and, therefore, is intended to replace conventional therapeutic strategies. Current progress, including platform optimization, Bionaut™ control, real-time imaging, and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and brain of ovine models, will be discussed.
Keywords: Bionaut™, cerebral spinal fluid, CSF, cyst, Dandy-Walker, fenestration, hydrocephalus, micro-robot
Procedia PDF Downloads 221
483 Modelling Soil Inherent Wind Erodibility Using Artificial Intelligence and Hybrid Techniques
Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh
Abstract:
In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz have dried up, exposing saline sediments at the surface and leaving the lake's coastal areas highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relation to soil physicochemical properties, and to model wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from the 0-5 cm depth over 414,000 hectares using a stratified random sampling method. To measure WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at a height of 20 cm) in a wind tunnel, and the relationship of WE with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 9.98-76.69 (g m-2 min-1)/(m s-1) with a mean of 10.21 and a coefficient of variation of 94.5%, showing relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and the percentages of the secondary particle size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. The mean weight diameter, erodible fraction and the percentage of the 0.1-0.25 mm size class demonstrated the strongest relationships with WE (coefficients of determination of 0.69, 0.67 and 0.68, respectively). This study also compared the efficiency of multiple linear regression (MLR), gene expression programming (GEP), an artificial neural network (MLP), an artificial neural network tuned by a genetic algorithm (MLP-GA) and an artificial neural network tuned by the whale optimization algorithm (MLP-WOA) in predicting soil wind erodibility in Dasht-e-Tabriz. Among the 32 measured soil variables, the percentages of fine sand, the 1.7-2.0 and 0.1-0.25 mm size classes (secondary particles) and organic carbon were selected as model inputs by stepwise regression. 
The findings showed MLP-WOA to be the most powerful of the artificial intelligence techniques (R2 = 0.87, NSE = 0.87, ME = 0.11 and RMSE = 2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP and MLR; the differences between these methods were significant according to the MGN test. Based on these findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network
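The goodness-of-fit measures quoted above (R2, NSE, ME, RMSE) can be computed with a few lines. The sketch below uses invented observed/predicted erodibility pairs, and reads "ME" as mean error, which is one common usage and an assumption here.

```python
# Editorial sketch: regression evaluation metrics on toy data
# (not the study's measurements or model outputs).
import math

def metrics(obs, pred):
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    var_p = sum((p - mp) ** 2 for p in pred)
    r2 = cov ** 2 / (ss_tot * var_p)     # squared Pearson correlation
    nse = 1.0 - ss_res / ss_tot          # Nash-Sutcliffe efficiency
    me = sum(p - o for o, p in zip(obs, pred)) / n   # mean error (bias)
    rmse = math.sqrt(ss_res / n)
    return r2, nse, me, rmse

obs = [12.0, 9.5, 30.0, 45.0, 10.2, 76.7]   # hypothetical WE values
pred = [11.0, 10.0, 28.0, 47.0, 11.0, 74.0]
r2, nse, me, rmse = metrics(obs, pred)
```

Note that NSE equals 1 for a perfect model and drops below 0 when the model is worse than predicting the observed mean, which is why it is reported alongside R2.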
Procedia PDF Downloads 71
482 Management of Permits and Regulatory Compliance Obligations for the East African Crude Oil Pipeline Project
Authors: Ezra Kavana
Abstract:
This article analyses the role that East African countries play in enforcing crude oil pipeline regulations. The paper finds that countries are more likely to have responsibility for enforcing these regulations if they have larger networks of gathering and transmission lines and if their citizens are more liberal and more pro-environment. Pipeline operations, transportation costs, new pipeline construction, and environmental effects are all heavily controlled, and all facets of pipeline systems and the facilities connected to them are governed by statutory bodies. To support the project manager on new pipeline projects, companies building and running these pipelines typically engage personnel and consultants who specialize in the permitting processes. The primary permits that may be necessary for pipelines carrying different commodities are discussed in this paper. National, regional, and local municipalities each issue their own permits. The contractor's project compliance leadership, through its right-of-way group, is typically directly responsible for obtaining those permits, which are usually issued by government agencies. The full list of local permits needed for a planned pipeline can only be established after a careful field investigation. A country's government regulates pipelines that are entirely within its borders; with a few exceptions, state regulations governing ratemaking and safety have been enacted to be consistent with regulatory requirements. Countries that produce a lot of energy are typically more involved in regulating pipelines than countries that produce little to no energy. To identify the proper regulatory authority, it is important to research the several government agencies that regulate pipeline transportation. 
Additionally, it is crucial that the scope determination of a planned project engage various external professionals with experience in linear facilities, or the company's own pipeline construction and environmental professionals, to identify and obtain any necessary design clearances, permits, or approvals. These professionals can offer precise estimates of the cost and time needed to process the necessary permits. Governments with a stronger energy sector, on the other hand, are less likely to take on control. However, the performance of the pipeline and national enforcement activities are not significantly affected by whether a government has taken on control. Financial fines are the most efficient government enforcement instrument because they greatly reduce incidents and property damage.
Keywords: crude oil, pipeline, regulatory compliance, and construction permits
Procedia PDF Downloads 96
481 Surge in U. S. Citizens Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Comparing the present to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose of why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. While it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the expatriation process, the chosen methodology is structural equation modeling (SEM), in the first step by re-using surveys conducted in recent years by different researchers among U.S. citizens residing abroad: surveys questioning the personal situation with respect to tax, compliance, citizenship, and the likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; and (4) modeling measurement error, the degree to which observable variables describe the latent variables. 
Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered highly correlated items, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one of the desired results. Since every SEM comprises two parts, (1) the measurement model (outer model) and (2) the structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
Keywords: expatriation of U. S. citizens, SEM, structural equation modeling, validating
Procedia PDF Downloads 221
480 Application of the State of the Art of Hydraulic Models to Manage Coastal Problems, Case Study: The Egyptian Mediterranean Coast Model
Authors: Al. I. Diwedar, Moheb Iskander, Mohamed Yossef, Ahmed ElKut, Noha Fouad, Radwa Fathy, Mustafa M. Almaghraby, Amira Samir, Ahmed Romya, Nourhan Hassan, Asmaa Abo Zed, Bas Reijmerink, Julien Groenenboom
Abstract:
Coastal problems stress the coastal environment because of its complexity. The dynamic interaction between the sea and the land results in serious problems that threaten coastal areas worldwide, compounded by human interventions and activities. This makes the coastal environment highly vulnerable to natural processes like flooding and erosion, and to the impact of human activities such as pollution. Protecting and preserving this vulnerable coastal zone, with its valuable ecosystems, calls for addressing these coastal problems, which will in the end support the sustainability of coastal communities and serve current and future generations. Consequently, applying suitable management strategies and sustainable development that consider the unique characteristics of the coastal system is a must. The coastal management philosophy aims to resolve the conflicts of interest between human development activities and this dynamic nature. Modeling emerges as a successful tool that provides support to decision-makers, engineers, and researchers for better management practices, and modeling tools have proved accurate and reliable in prediction. With the capability to integrate data from various sources such as bathymetric surveys, satellite images, and meteorological data, modeling offers engineers and scientists the possibility to understand this complex dynamic system and gain in-depth insight into the interaction between natural and human-induced factors. This enables decision-makers to make informed choices and develop effective strategies for sustainable development and risk mitigation of the coastal zone. Modeling tools support the evaluation of various scenarios by affording the possibility to simulate and forecast different coastal processes, from hydrodynamic and wave actions to the resulting flooding and erosion. 
The state-of-the-art application of modeling tools in coastal management allows for better understanding and prediction of coastal processes, optimized infrastructure planning and design, support for ecosystem-based approaches, assessment of climate change impacts, hazard management, and, finally, stakeholder engagement. This paper emphasizes the role of hydraulic models in managing coastal problems by discussing the diverse applications of modeling in coastal management. It highlights the role of modeling in understanding complex coastal processes and predicting outcomes, and the importance of informing decision-makers with modeling results that provide the technical and scientific support needed to achieve sustainable coastal development and protection.
Keywords: coastal problems, coastal management, hydraulic model, numerical model, physical model
Procedia PDF Downloads 29
479 Television, Internet, and Internet Social Media Direct-To-Consumer Prescription Medication Advertisements: Intention and Behavior to Seek Additional Prescription Medication Information
Authors: Joshua Fogel, Rivka Herzog
Abstract:
Although direct-to-consumer prescription medication advertisements (DTCA) are viewed or heard in many venues, there does not appear to be any research on internet social media DTCA. We study the association of traditional media DTCA and digital media DTCA, including the internet social media of YouTube, Facebook, and Twitter, with three outcomes: one intentions outcome and two behavior outcomes. The intentions outcome was the agreement level for intending to seek additional information about a prescription medication after seeing a DTCA. One behavior outcome was the agreement level for having obtained additional information about a prescription medication after seeing a DTCA; the other was the frequency level for obtaining such information. Surveys were completed by 635 college students. Predictors included demographic variables, theory of planned behavior variables, health variables, and advertisements seen or heard. In the behavior analyses, intentions and sources for seeking additional prescription drug information were included as additional predictors. Multivariate linear regression analyses were conducted. We found that increased age was associated with increased behavior, women were associated with increased intentions, and Hispanic race/ethnicity was associated with decreased behavior. For the theory of planned behavior variables, increased attitudes were associated with increased intentions, increased social norms were associated with increased intentions and behavior, and increased intentions were associated with increased behavior. Very good perceived health was associated with increased intentions. Advertisements seen in spam mail were associated with decreased intentions. Advertisements seen on traditional or cable television were associated with decreased behavior, while advertisements seen on television watched on the internet were associated with increased behavior.
The source of seeking additional information by reading internet print content was associated with increased behavior. No internet social media advertisements were associated with either intentions or behavior. In conclusion, pharmaceutical brand managers and marketers should consider these findings when tailoring their DTCA campaigns and directing their DTCA budgets toward young adults such as college students. They should reconsider the current approach to traditional television DTCA and consider dedicating a larger advertising budget to internet television DTCA. Although internet social media is a popular place to advertise, the financial expenditure does not appear worthwhile for DTCA targeting young adults such as college students.
Keywords: brand managers, direct-to-consumer advertising, internet, social media
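The multivariate linear regression described in this abstract can be sketched in a few lines. The sketch below is purely illustrative: the predictor names, simulated data, and coefficient values are invented for the example (only the sample size of 635 comes from the abstract), and ordinary least squares stands in for the study's actual regression software.

```python
import numpy as np

# Illustrative multivariate linear regression: regress an intentions score on
# demographic and theory-of-planned-behavior predictors. All variable names
# and data here are hypothetical; n = 635 matches the abstract's sample size.
rng = np.random.default_rng(0)
n = 635

age = rng.uniform(18, 30, n)              # hypothetical demographic predictor
attitudes = rng.normal(0, 1, n)           # hypothetical TPB predictor
social_norms = rng.normal(0, 1, n)        # hypothetical TPB predictor

# Simulated outcome: intentions to seek additional medication information.
intentions = 0.5 * attitudes + 0.3 * social_norms + rng.normal(0, 1, n)

# Ordinary least squares: design matrix with an intercept column.
X = np.column_stack([np.ones(n), age, attitudes, social_norms])
coef, *_ = np.linalg.lstsq(X, intentions, rcond=None)
print(coef)  # intercept followed by one slope per predictor
```

With this setup, the estimated slopes for the two TPB predictors land near the simulated effects of 0.5 and 0.3, mirroring the abstract's finding that attitudes and social norms predict intentions.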
Procedia PDF Downloads 265
478 An Experimental Study of Scalar Implicature Processing in Chinese
Authors: Liu Si, Wang Chunmei, Liu Huangmei
Abstract:
A prominent component of the semantics-versus-pragmatics debate, scalar implicature (SI) has attracted great attention ever since it was proposed by Horn. The constant debate is between the structural and the pragmatic approach. The former claims that the generation of SI is costless, automatic, and dependent mostly on the structural properties of sentences, whereas the latter holds both that such generation is largely dependent upon context and that the process is costly. Many experiments, among which Katsos's text comprehension experiments are influential, have been designed and conducted to verify these views, but the results are not conclusive. Moreover, most of the experiments were conducted with English-language materials. Katsos conducted one off-line and three on-line text comprehension experiments, which addressed the previous shortcomings to a certain extent and supported the pragmatic approach. We intend to test the results of Katsos's experiments on Chinese scalar implicature. Four experiments, in both off-line and on-line conditions, will be conducted to examine the generation and response time of SI in Chinese "yixie" (some) and "quanbu (dou)" (all), in order to find out whether the structural or the pragmatic approach can be sustained. The study mainly aims to answer the following questions: (1) Can SI be generated in the upper- and lower-bound contexts, as Katsos confirmed, when Chinese language materials are used? (2) In a neutral context, is SI first generated and then cancelled, as the default view claims, or is it not generated at all when Chinese language materials are used? (3) Is SI generation costless or costly in terms of processing resources? (4) In line with the SI generation process, what conclusion can be made about the cognitive processing model of language meaning? Is it a parallel model, a linear model, or a dynamic and hierarchical model?
Based on previous theoretical debates and experimental conflicts, we presume that SI in Chinese might be generated in upper-bound contexts, that response times might be faster in upper-bound than in lower-bound contexts, and that SI generation in a neutral context might be the slowest. Finally, we expect to conclude that the processing model of SI cannot be verified by either a purely structural or a purely pragmatic approach. It is, rather, a dynamic and complex processing mechanism involving the interaction of language forms, ad hoc context, mental context, background knowledge, speakers' interaction, and so on.
Keywords: cognitive linguistics, pragmatics, scalar implicature, experimental study, Chinese language
Procedia PDF Downloads 361
477 Gut Microbiota in Patients with Opioid Use Disorder: A 12-week Follow up Study
Authors: Sheng-Yu Lee
Abstract:
Aim: Opioid use disorder is often characterized by repetitive drug-seeking and drug-taking behaviors with severe public health consequences. Animal models have shown that opioid-induced perturbations of the gut microbiota are causally related to neuroinflammation, deficits in reward responding, and opioid tolerance. We therefore propose that dysbiosis of the gut microbiota may be associated with the pathogenesis of opioid dependence. In this study, we explored the differences in gut microbiota between patients and normal controls, and in patients before and after 12 weeks of a methadone treatment program. Methods: Patients with opioid use disorder aged between 20 and 65 years were recruited from the methadone maintenance outpatient clinics of two medical centers in southern Taiwan. Healthy controls without any family history of major psychiatric disorders (schizophrenia, bipolar disorder, and major depressive disorder) were recruited from the community. After initial screening, 15 patients with opioid use disorder joined the study for the initial evaluation (Week 0); 12 of them completed the 12-week follow-up while receiving methadone treatment and ceasing heroin use (Week 12). Fecal samples were collected from the patients at baseline and at the end of the 12th week, and a one-time fecal sample was collected from the healthy controls. The microbiota of the fecal samples was investigated using 16S rRNA V3V4 amplicon sequencing, followed by bioinformatic and statistical analyses. Results: We found no significant differences in species diversity between patients at Week 0 and Week 12, nor between patients at either time point and controls. For beta diversity, using principal component analysis, we found no significant differences between patients at Week 0 and Week 12; however, both patient groups differed significantly from controls (P=0.011).
Furthermore, linear discriminant analysis effect size (LEfSe) was used to identify differentially enriched bacteria between opioid use patients and healthy controls. Compared to controls, the relative abundances of Lactobacillus (family Lactobacillaceae), Megasphaera hexanoica (M. hexanoica), and Caecibacter massiliensis (C. massiliensis) were increased in patients at Week 0, while the family Atopobiaceae (order Coriobacteriales), Acidaminococcus intestini (A. intestini), and Tractidigestivibacter scatoligenes (T. scatoligenes) were increased in patients at Week 12. Conclusion: We suggest that the gut microbiome community may be linked to opioid use disorder, and that such differences may not be reversed even after 12 weeks of cessation of opioid use.
Keywords: opioid use disorder, gut microbiota, methadone treatment, follow up study
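The species-diversity comparison mentioned in the abstract typically rests on an alpha-diversity index computed per sample. As a minimal sketch, assuming Shannon diversity as the index (the abstract does not name one) and using invented taxon counts:

```python
import math
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p * ln p) over nonzero taxon proportions."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical per-taxon read counts for one patient sample and one control;
# real values would come from the 16S rRNA V3V4 amplicon count table.
patient_week0 = [120, 80, 40, 10, 5]
control = [60, 55, 50, 45, 45]

print(shannon(patient_week0), shannon(control))
```

A more even community (the hypothetical control) yields a higher index; group-level tests would then compare these per-sample values between patients and controls, as the abstract's species-diversity analysis does.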
Procedia PDF Downloads 106
476 Performance Evaluation of the CSAN Pronto Point-of-Care Whole Blood Analyzer for Regular Hematological Monitoring During Clozapine Treatment
Authors: Farzana Esmailkassam, Usakorn Kunanuvat, Zahraa Mohammed Ali
Abstract:
Objective: A key barrier to clozapine treatment of treatment-resistant schizophrenia (TRS) is the frequent blood draws required to monitor for neutropenia, the drug's main side effect. WBC and ANC monitoring must occur throughout treatment, and accurate WBC and ANC counts are necessary for clinical decisions to halt, modify, or continue clozapine treatment. The CSAN Pronto point-of-care (POC) analyzer generates white blood cell (WBC) and absolute neutrophil counts (ANC) through image analysis of capillary blood. POC monitoring offers significant advantages over central laboratory testing. This study evaluated the performance of the CSAN Pronto against the Beckman DxH900 laboratory hematology analyzer. Methods: Forty venous samples (EDTA whole blood) with varying concentrations of WBC and ANC, as established on the DxH900 analyzer, were tested in duplicate on three CSAN Pronto analyzers. Additionally, venous and capillary samples were concomitantly collected from 20 volunteers and assessed on the CSAN Pronto and the DxH900 analyzer. Analytical performance was also evaluated, including precision, using liquid quality controls (QCs) and patient samples near the medical decision points, and linearity, using mixes of high and low patient samples to create five concentrations. Results: In the precision study for QCs and whole blood, WBC and ANC showed CVs within the limits established according to manufacturer and laboratory acceptability standards. WBC and ANC were found to be linear across the measurement range, with a correlation of 0.99. WBC and ANC from all analyzers correlated well with venous samples on the DxH900 across the tested sample ranges, with a correlation of >0.95. Mean bias in ANC obtained on the CSAN Pronto versus the DxH900 was 0.07 × 10⁹ cells/L (95% LoA -0.25 to 0.49) for concentrations <4.0 × 10⁹ cells/L, a range that includes the decision-making cut-offs for continuing clozapine treatment.
Mean bias in WBC obtained on the CSAN Pronto versus the DxH900 was 0.34 × 10⁹ cells/L (95% LoA -0.13 to 0.72) for concentrations <5.0 × 10⁹ cells/L. The mean bias was larger (-11% for ANC, 5% for WBC) at higher concentrations. The correlations between capillary and venous samples showed more variability, with a mean bias of 0.20 × 10⁹ cells/L for ANC. Conclusions: The CSAN Pronto showed acceptable performance in WBC and ANC measurements from venous and capillary samples and was approved for clinical use. This testing will facilitate treatment decisions and improve clozapine uptake and compliance.
Keywords: absolute neutrophil counts, clozapine, point of care, white blood cells
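The mean bias and 95% limits of agreement (LoA) reported above follow the standard Bland-Altman calculation on paired results from the two analyzers. A minimal sketch, with invented paired ANC values (the study's actual data are not reproduced here):

```python
import numpy as np

def bland_altman(reference, test):
    """Return mean bias and the 95% limits of agreement (bias +/- 1.96 SD)."""
    diff = np.asarray(test, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ANC results (x 1e9 cells/L): DxH900 vs. CSAN Pronto.
dxh900 = [1.8, 2.1, 2.5, 3.0, 3.4, 3.9]
csan = [1.9, 2.2, 2.5, 3.1, 3.5, 4.0]

bias, (lo, hi) = bland_altman(dxh900, csan)
print(f"bias={bias:.3f}, LoA=({lo:.3f}, {hi:.3f})")
```

The study's figures (e.g., ANC bias 0.07 × 10⁹ cells/L, LoA -0.25 to 0.49) would come out of exactly this computation applied to the forty-sample comparison set.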
Procedia PDF Downloads 94
475 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the surface of Mars has been a mystery for scientists. Recently, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain insights into this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. MOLA is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and trustworthy surface of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed in co-registering the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters through least squares adjustment, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were on the order of five meters.
Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. The 3D-to-2D transformation models provided two to three meters accuracy. The best results were obtained with the DLT transformation model; here, too, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the accuracy required by the ASPRS large-scale mapping standards. However, a well-distributed set of GCPs is key to achieving such accuracy. The model is simple to apply and does not require substantial computation.
Keywords: Mars, photogrammetry, MOLA, HiRISE
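The DLT step described above can be illustrated compactly: with six or more GCPs, the eleven DLT parameters are estimated by linear least squares, and the model is then checked by reprojecting check points. The sketch below uses an invented parameter vector and synthetic points standing in for the MOLA/HiRISE data; it shows the technique, not the project's actual values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" DLT parameters L1..L11 of a sensor, used only to
# generate consistent synthetic image coordinates for the demonstration.
L_true = np.array([1.0, 0.1, 0.2, 5.0,
                   0.05, 1.1, 0.15, 3.0,
                   0.001, 0.002, 0.0005])

def dlt_project(L, pts):
    """DLT mapping: u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1),
    and analogously for v with L5..L8."""
    X, Y, Z = pts[:, 0], pts[:, 1], pts[:, 2]
    w = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / w
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / w
    return np.column_stack([u, v])

def dlt_fit(obj_pts, img_pts):
    """Estimate the 11 DLT parameters by linear least squares (2 eqs/point)."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L

# Ten synthetic GCPs and six check points in a 100 x 100 x 100 object volume.
gcps = rng.uniform(0, 100, (10, 3))
chkps = rng.uniform(0, 100, (6, 3))

L_est = dlt_fit(gcps, dlt_project(L_true, gcps))
resid = dlt_project(L_est, chkps) - dlt_project(L_true, chkps)
rmse = float(np.sqrt((resid ** 2).mean()))
```

With noise-free synthetic data the check-point RMSE is essentially zero; with real, noisy GCPs it becomes the meters-level RMSE figure the study reports.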
Procedia PDF Downloads 57
474 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique because of its location at the equator, which makes Coriolis effects weak. This paper describes a St. Venant shallow water model of Lake Victoria developed in COMSOL Multiphysics, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more extensive recent data to resolve discrepancies in the lake shore coordinates. Because the topography model must have continuous gradients, Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations were cross-validated between MATLAB and COMSOL. The model conserves water volume; the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except in a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, which are in turn caused by the near-balance of rain with evaporation. The numerical hydrodynamic model can evaluate the effects of the wind stress exerted on the lake surface and its impact on lake water level.
The model can also evaluate the effects of the expected climate change, as manifested in changes to future rainfall over the catchment area of Lake Victoria.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
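The abstract's two key findings, a water balance dominated by rain and evaporation and a single outflow well described by a linear control law on mean water level, can be sketched with a zero-dimensional water-balance model. Only the 68,800 km² lake area comes from the abstract; the gain K, rainfall, evaporation, and inflow figures below are invented for illustration.

```python
# Minimal lumped water-balance sketch: lake level h driven by rain,
# evaporation, and river inflow, with outflow given by a linear control law.
AREA = 68_800e6      # lake surface area, m^2 (68,800 km^2, from the abstract)
K = 5_000.0          # hypothetical outflow gain, m^3/s per metre of head
H_REF = 0.0          # hypothetical reference level for the control law, m

def simulate(years, rain_m_per_yr, evap_m_per_yr, inflow_m3s, h0=0.0):
    """Euler-step the mean water level h (in metres) once per day."""
    dt = 86_400.0
    net = (rain_m_per_yr - evap_m_per_yr) / (365 * 86_400) * AREA  # m^3/s
    h = h0
    levels = []
    for _ in range(int(years * 365)):
        outflow = K * (h - H_REF)                  # linear control law
        h += (net + inflow_m3s - outflow) * dt / AREA
        levels.append(h)
    return levels

# Fifty-year run with a small rain-evaporation surplus, as in the abstract.
levels = simulate(50, rain_m_per_yr=1.8, evap_m_per_yr=1.7, inflow_m3s=1_000.0)
```

The level relaxes toward the equilibrium where the linear outflow balances net input (h = Q_net / K), illustrating why a control law responding only to mean water level can reproduce the observed single outflow.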
Procedia PDF Downloads 227