Search results for: Applied linguistics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8609

1289 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

The solution of the nonlinear dynamic equilibrium equations of base-isolated structures with a conventional monolithic solution approach, i.e. an implicit single-step time integration method employed with an iteration procedure, together with the use of existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators, can require a significant computational effort. In order to reduce numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations in each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as in the case of the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model adopted in this work. Furthermore, the low stiffness of the base isolation system with lead rubber bearings results in a critical time step considerably larger than the imposed ground acceleration time step, thus avoiding stability problems in the proposed mixed method.
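
To make the partitioned idea concrete, the following minimal single-degree-of-freedom sketch shows one explicit central difference step (as used for the isolation level) and one implicit Newmark constant average acceleration step (as used for the linear superstructure). The mass, damping, stiffness and the tanh-type restoring force are illustrative assumptions only; the paper's exponential isolator model and the base–superstructure coupling are not reproduced here.

```python
import numpy as np

def central_difference_step(u_prev, u_curr, m, c, f_restoring, p_ext, dt):
    """Explicit update: displacement at t+dt from values at t-dt and t (nonlinear DOF)."""
    a0 = m / dt**2
    a1 = c / (2.0 * dt)
    rhs = p_ext - f_restoring(u_curr) + 2.0 * a0 * u_curr - (a0 - a1) * u_prev
    return rhs / (a0 + a1)

def newmark_caa_step(u, v, a, m, c, k, p_next, dt):
    """Implicit Newmark step (beta=1/4, gamma=1/2) for a LINEAR DOF: no iteration needed."""
    beta, gamma = 0.25, 0.5
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    p_eff = (p_next
             + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    u_next = p_eff / k_eff
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Illustrative isolator restoring force with a smooth (exponential-like) nonlinearity
f_iso = lambda u: 5e3 * u + 2e4 * np.tanh(50.0 * u)
u_new = central_difference_step(u_prev=0.0, u_curr=1e-3, m=2e5, c=1e4,
                                f_restoring=f_iso, p_ext=0.0, dt=1e-3)
```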

Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model

Procedia PDF Downloads 280
1288 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations). We can see the same process for industrial plants as well. What has to be investigated – and is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking into account NO₂) in urban areas. To evaluate if there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used, an Arduino board to which the sensors, and all the other components, are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to have an input for both week and weekend days; in this way, it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check if there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help to choose the right mitigation solutions to be applied in the area of analysis, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, noise and pollution mapping and analysis.
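
As a rough illustration of the intended comparison, the sketch below rescales hypothetical noise and NO₂ readings to a 0–100% scale and computes their correlation; the min–max rescaling and the sample values are assumptions, since the abstract does not specify the normalization used.

```python
import numpy as np

noise_db = np.array([55.0, 62.3, 70.1, 58.4, 66.7])   # hypothetical OpeNoise readings (dB)
no2_ppb  = np.array([18.0, 25.5, 40.2, 21.0, 33.8])   # hypothetical AirMonitor readings (ppb)

def to_percent(x):
    """Rescale a series so its minimum maps to 0% and its maximum to 100%."""
    return 100.0 * (x - x.min()) / (x.max() - x.min())

noise_pct, no2_pct = to_percent(noise_db), to_percent(no2_ppb)
r = np.corrcoef(noise_pct, no2_pct)[0, 1]   # Pearson correlation coefficient
print(f"noise-NO2 correlation: r = {r:.2f}")
```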

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 212
1287 The Situation in Afghanistan as a Step Forward in Putting an End to Impunity

Authors: Jelena Radmanovic

Abstract:

On 5 March 2020, the International Criminal Court decided to authorize the investigation into the crimes allegedly committed on the territory of Afghanistan after 1 May 2003. The said determination has raised several controversies, including the recently imposed sanctions by the United States, furthering the United States' long-standing rejection of the authority of the International Criminal Court. The purpose of this research is to address the said investigation in light of its importance for the prevention of impunity in cases where the perpetrators are nationals of Non-Party States to the Rome Statute. The difficulties that the International Criminal Court has been facing concerning the establishment of its jurisdiction in those instances where an involved state is not a Party to the Rome Statute have become the most significant stumbling block undermining the importance, integrity, and influence of the Court. The Situation in Afghanistan raises even further concern, bearing in mind that the Prosecutor's Request for authorization of an investigation pursuant to article 15, of 20 November 2017, had initially been rejected with the 'interests of justice' as the applied rationale. The first method used in the present research is the description of the actual events regarding the aforementioned decisions and the following reactions in the international community, while with the second method – the method of conceptual analysis – the research addresses the decisions pertaining to the International Criminal Court's jurisdiction and attempts to present the mentioned Decision of 5 March 2020 as an example of good practice and a precedent that should be followed in all similar situations. The research attempts to parse the reasons used by the International Criminal Court, giving greater attention to the latter decision, which authorized the investigation, and to the points raised by the officials of the United States. It is a finding of this research that the International Criminal Court, together with other similar judicial instances (the Nuremberg and Tokyo Tribunals, the International Criminal Tribunal for the former Yugoslavia, and the International Criminal Tribunal for Rwanda), has presented the world with the possibility of non-impunity, attempting to prosecute those responsible for the gravest of crimes known to humanity and showing that such persons should not enjoy the benefits of their immunities, with its focus primarily on the victims of such crimes. Whilst this is an issue that will most certainly be addressed further in the future, with the situations that will be brought before the International Criminal Court, the present research attempts to point to the significance of the situation in Afghanistan for the International Criminal Court as such and for international criminal justice as a whole, for the purpose of putting an end to impunity.

Keywords: Afghanistan, impunity, international criminal court, sanctions, United States

Procedia PDF Downloads 127
1286 An Investigation of the Structural and Microstructural Properties of Zn1-xCoxO Thin Films Applied as Gas Sensors

Authors: Ariadne C. Catto, Luis F. da Silva, Khalifa Aguir, Valmor Roberto Mastelaro

Abstract:

Pure or doped zinc oxide (ZnO) is one of the most promising metal oxide semiconductors for gas sensing applications due to its well-known high surface-to-volume ratio and surface conductivity. It has been shown that ZnO is an excellent gas-sensing material for different gases such as CO, O₂, NO₂ and ethanol. In this context, pure and doped ZnO exhibiting different morphologies and a high surface/volume ratio can be a good option regarding the limitations of the current commercial sensors. Different studies have shown that doping ZnO with metals (e.g., Co, Fe, Mn) enhances its gas sensing properties. Motivated by these considerations, the aim of this study was to investigate the role of Co ions in the structural, morphological and gas sensing properties of nanostructured ZnO samples. ZnO and Zn1-xCoxO (0 < x < 5 wt%) thin films were obtained via the polymeric precursor method. The sensitivity, selectivity, response time and long-term stability were investigated when the samples were exposed to a range of ozone (O₃) concentrations at different working temperatures. The gas sensing properties were probed by electrical resistance measurements. The long- and short-range order structure around Zn and Co atoms was investigated by X-ray diffraction and X-ray absorption spectroscopy. X-ray photoelectron spectroscopy measurements were performed in order to identify the elements present on the film surface as well as to determine the sample composition. Microstructural characteristics of the films were analyzed by a field-emission scanning electron microscope (FE-SEM). Zn1-xCoxO XRD patterns were indexed to the wurtzite ZnO structure, and no second phase was observed even at higher cobalt content. Co K-edge XANES spectra revealed the predominance of Co²⁺ ions. XPS characterization revealed that the Co-doped ZnO samples possessed a higher percentage of oxygen vacancies than the ZnO samples, which also contributed to their excellent gas sensing performance. Gas sensor measurements showed that the ZnO and Co-doped ZnO samples exhibit a good gas sensing performance concerning reproducibility and a fast response time (around 10 s). Furthermore, the Co addition contributed to reducing the working temperature for ozone detection and improved the selective sensing properties.

Keywords: cobalt-doped ZnO, nanostructured, ozone gas sensor, polymeric precursor method

Procedia PDF Downloads 247
1285 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required for geotechnical analyses. The random set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the random set method, with a relatively small number of simulations compared to fully probabilistic methods, bounds on the extremes of the system responses are obtained. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method in the reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probabilities assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each intended deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the random set finite element method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
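
The bounding logic of the random set approach can be sketched as follows, with a toy response function standing in for the finite element model; the focal-element intervals, probability assignments and displacement threshold are hypothetical.

```python
import itertools

# Each input variable gets two focal elements (intervals) with basic probability assignments.
focal_sets = {
    "friction_angle": [((28.0, 32.0), 0.6), ((30.0, 36.0), 0.4)],   # hypothetical ranges (deg)
    "cohesion":       [((10.0, 20.0), 0.5), ((15.0, 30.0), 0.5)],   # hypothetical ranges (kPa)
}

def response(phi, c):
    """Placeholder for the FE model: horizontal displacement of the excavation top (mm)."""
    return 500.0 / (phi * 0.8 + c * 0.5)

intervals = []
for combo in itertools.product(*focal_sets.values()):
    ranges, probs = zip(*combo)
    prob = 1.0
    for p in probs:
        prob *= p
    # Evaluate the model at all vertices of the input box to bound the response interval
    vals = [response(*vertex) for vertex in itertools.product(*ranges)]
    intervals.append((min(vals), max(vals), prob))

threshold = 20.0  # hypothetical admissible displacement (mm)
belief       = sum(p for lo, hi, p in intervals if hi <= threshold)  # lower probability of "safe"
plausibility = sum(p for lo, hi, p in intervals if lo <= threshold)  # upper probability of "safe"
print(belief, plausibility)
```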

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 268
1284 Understanding the Nature of Blood Pressure as Metabolic Syndrome Component in Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Pediatric overweight and obesity need attention because they may lead to morbid obesity, which may develop into metabolic syndrome (MetS). Criteria used for the definition of adult MetS cannot be applied to pediatric MetS. Dynamic physiological changes that occur during childhood and adolescence require the evaluation of each parameter based upon age intervals. The aim of this study is to investigate the distribution of blood pressure (BP) values within diverse pediatric age intervals and the possible use and clinical utility of a recently introduced Diagnostic Obesity Notation Model Assessment Tension (DONMA tense) Index derived from systolic BP (SBP) and diastolic BP (DBP) [(SBP+DBP)/200]. Such a formula may enable a more integrative picture for the assessment of pediatric obesity and MetS due to the use of both SBP and DBP. 554 children aged 6-16 years participated in the study; the study population was divided into two groups based upon their ages. The first group comprises 280 cases aged 6-10 years (72-120 months), while those aged 10-16 years (121-192 months) constituted the second group. The values of SBP, DBP and the formula covering both, (SBP+DBP)/200, were evaluated. Each group was divided into seven subgroups with varying degrees of obesity and MetS criteria. Two clinical definitions of MetS have been described. These groups were MetS3 (children with three major components) and MetS2 (children with two major components). The other groups were morbid obese (MO), obese (OB), overweight (OW), normal (N) and underweight (UW). The children were included in the groups according to the age- and sex-based body mass index (BMI) percentile values tabulated by WHO. Data were evaluated by SPSS version 16, with p < 0.05 as the level of statistical significance. The tension index was evaluated in the groups above and below 10 years of age. This index differed significantly between the N and MetS groups as well as the OW and MetS groups (p = 0.001) above 120 months. However, below 120 months, significant differences existed between MetS3 and MetS2 (p = 0.003) as well as MetS3 and MO (p = 0.001). In comparison with the SBP and DBP values, the tension index values enabled a more clear-cut separation between the groups. It was detected that the tension index was capable of discriminating MetS3 from MetS2 in the group composed of children aged 6-10 years. This was not possible in the older group of children. This index was more informative for the first group. This study also confirmed that the 130 mm Hg and 85 mm Hg cut-off points for SBP and DBP, respectively, are too high to serve as MetS criteria in children, because the mean value of the tension index was calculated as 1.00 among MetS children. This finding shows that much lower cut-off points must be set for SBP and DBP for the diagnosis of pediatric MetS, especially for children under 10 years of age. This index may be recommended to discriminate MO, MetS2 and MetS3 among the 6-10 years age group, whose MetS diagnosis is problematic.
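
A minimal illustration of the index computation, using the adult-derived cut-off values quoted in the abstract:

```python
# Tension index as described above: (SBP + DBP) / 200
def tension_index(sbp_mmHg: float, dbp_mmHg: float) -> float:
    return (sbp_mmHg + dbp_mmHg) / 200.0

# At the 130/85 mmHg adult cut-offs the index is already about 1.08, which illustrates
# why a mean of 1.00 observed in MetS children points to lower paediatric cut-offs.
print(tension_index(130, 85))   # 1.075
```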

Keywords: blood pressure, children, index, metabolic syndrome, obesity

Procedia PDF Downloads 117
1283 Evaluating the Ability to Cycle in Cities Using Geographic Information Systems Tools: The Case Study of Greek Modern Cities

Authors: Christos Karolemeas, Avgi Vassi, Georgia Christodoulopoulou

Abstract:

Although in the past decades planning a cycle network has become an inseparable part of all transportation plans, there is still a lot of room for improvement in the way planning is made, in order to create safe and direct cycling networks that bring together the parameters that positively influence one's decision to cycle. The aim of this article is to study, evaluate and visualize the bikeability of cities. This term is often used as 'the ability of a person to bike'; this study, however, adopts the term in the sense of bikeability as 'the ability of the urban landscape to be biked'. The methodology used included assessing cities' accessibility by cycling, based on international literature and corresponding walkability methods, and the creation of a 'bikeability index'. Initially, a literature review was made to identify the factors that positively affect the use of bicycle infrastructure. Those factors were used to create the spatial index and quantitatively compare the city network. Finally, the bikeability index was applied in two case studies: two Greek municipalities that, although similar in terms of land uses, population density and traffic congestion, are totally different in terms of geomorphology. The factors suggested by the international literature were (a) safety, (b) directness, (c) comfort and (d) the quality of the urban environment. Those factors were quantified through the following parameters: slope, junction density, traffic density, traffic speed, natural environment, built environment, activities coverage, centrality and accessibility to public transport stations. Each road section was graded for the above-mentioned parameters, and the overall grade shows the level of bicycle accessibility (low, medium, high). Each parameter, as well as the overall accessibility levels, was analyzed and visualized through Geographic Information Systems. This paper presents the bikeability index, its results, the problems that have arisen and the conclusions from its implementation through a Strengths-Weaknesses-Opportunities-Threats analysis. The purpose of this index is to make it easy for researchers, practitioners, politicians, and stakeholders to quantify, visualize and understand which parts of the urban fabric are suitable for cycling.
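
A schematic sketch of the segment-scoring idea is given below; the parameter names follow the abstract, but the sample values, the equal weights, the sign convention and the three-class binning are assumptions, not the authors' actual grading scheme.

```python
import pandas as pd

# Hypothetical, already-normalised values for three road sections
segments = pd.DataFrame({
    "slope": [0.02, 0.08, 0.01],
    "junction_density": [0.3, 0.7, 0.2],
    "traffic_density": [0.5, 0.9, 0.1],
    "traffic_speed": [0.4, 0.8, 0.2],
    "natural_environment": [0.6, 0.2, 0.9],
    "built_environment": [0.5, 0.4, 0.8],
    "activities_coverage": [0.7, 0.3, 0.9],
    "centrality": [0.6, 0.4, 0.8],
    "pt_accessibility": [0.8, 0.3, 0.9],
})

# Parameters that discourage cycling enter with a negative sign (assumed weighting)
weights = {"slope": -1, "junction_density": -1, "traffic_density": -1, "traffic_speed": -1,
           "natural_environment": 1, "built_environment": 1, "activities_coverage": 1,
           "centrality": 1, "pt_accessibility": 1}

score = sum(w * segments[col] for col, w in weights.items())
segments["bikeability"] = pd.cut(score, bins=3, labels=["low", "medium", "high"])
print(segments[["bikeability"]])
```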

Keywords: accessibility, cycling, green spaces, spatial data, urban environment

Procedia PDF Downloads 110
1282 Eco-Friendly Silicone/Graphene-Based Nanocomposites as Superhydrophobic Antifouling Coatings

Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Hekmat R. Madian, Sherif A. El-Safty, Mohamed A. Shenashen

Abstract:

After the 2003 prohibition on employing TBT-based antifouling coatings, polysiloxane antifouling nano-coatings have gained popularity as environmentally friendly and cost-effective replacements. A series of non-toxic polydimethylsiloxane (PDMS) nanocomposites filled with nanosheets of graphene oxide (GO) decorated with magnetite nanospheres (GO-Fe₃O₄ nanospheres) were developed and cured via a catalytic hydrosilation method. Various GO-Fe₃O₄ hybrid concentrations were mixed with the silicone resin via a solution casting technique to evaluate the structure–property connection. To generate GO nanosheets, a modified Hummers method was applied. A simple co-precipitation method was used to make spherical magnetite particles under inert nitrogen. Hybrid GO-Fe₃O₄ composite fillers were developed by a simple ultrasonication method. A superhydrophobic PDMS/GO-Fe₃O₄ nanocomposite surface with micro/nano-roughness, reduced surface free energy (SFE) and high fouling release (FR) efficiency was achieved. The physical, mechanical, and anticorrosive features of the virgin and GO-Fe₃O₄ filled nanocomposites were investigated. The synergistic effects of the well-dispersed GO-Fe₃O₄ hybrid on the water repellency and surface topological roughness of the PDMS/GO-Fe₃O₄ nanopaints were extensively studied. The addition of the GO-Fe₃O₄ hybrid fillers up to 1 wt.% could increase the coating's water contact angle (158°±2°), minimize its SFE to 12.06 mN/m, develop outstanding micro/nano-roughness, and improve its bulk mechanical and anticorrosion properties. Several microorganisms were employed for examining the fouling resistance of the coated specimens for 1 month. Silicone coatings filled with 1 wt.% GO-Fe₃O₄ nanofiller showed the lowest biodegradability percentage with all the tested microorganisms, whereas the coating with 5 wt.% GO-Fe₃O₄ nanofiller possessed the highest biodegradability percentage with all the microorganisms. We successfully developed a non-toxic and low-cost nanostructured FR composite coating with high antifouling resistance, a reproducible superhydrophobic character, and enhanced service time for maritime navigation.

Keywords: silicone antifouling, environmentally friendly, nanocomposites, nanofillers, fouling repellency, hydrophobicity

Procedia PDF Downloads 114
1281 The Structure of Financial Regulation: The Regulators Perspective

Authors: Mohamed Aljarallah, Mohamed Nurullah, George Saridakis

Abstract:

The aims and objectives of this paper are to investigate how structural change of the financial regulatory bodies affects financial supervision and how regulators can design such a structure, taking into account the central bank, the conduct-of-business regulator and the prudential regulator; it also considers the structure of international regulatory bodies and the barriers that are found. Five questions are to be answered: Should conduct-of-business and prudential regulation be separated? Should financial supervision and financial stability be separated? Should financial supervision be under the central bank? To what extent should politicians intervene in changing the regulatory and supervisory structure? What should the regulatory and supervisory structure be when there is a financial conglomerate? A semi-structured interview design will be applied. The research sample consists of financial regulators and supervisors from developed and emerging countries. Moreover, the financial regulators and supervisors must be at a senior level in their organisations. Additionally, these senior financial regulators and supervisors come from different authorities and from around the world; for instance, one of the participants comes from the Bank for International Settlements, others come from the European Central Bank, another comes from the Hong Kong Monetary Authority, and so on. Such variety aims to fulfil the aims and objectives of the research and to cover the research questions. The analysis process starts with transcription of the interviews, using Nvivo software for coding and applying thematic analysis to generate the main themes. The major findings of the study are as follows. First, organisational structure changes quite frequently if the mandates are not clear. Second, measuring structural change is difficult, which makes the whole process unclear. Third, effective coordination and communication are what regulators look for when they change the structure, and that requires openness, trust, and incentives. In addition, issues that appear during a crisis tend to be the reason why the structure changes. Also, the development of the market sometimes causes a change in the regulatory structure. Some structural changes occur simply because of international trends, fashion, or other countries' experiences. Furthermore, when the top management changes, the structure tends to change. Moreover, the structure changes due to political change, or because politicians try to show they are doing something. Finally, fear of being blamed can be a driver of structural change. In conclusion, this research aims to provide insight from senior regulators and supervisors from fifty different countries in order to reach a clear understanding of why the regulatory structure keeps changing from time to time, through a qualitative approach, namely semi-structured interviews.

Keywords: financial regulation bodies, financial regulatory structure, global financial regulation, financial crisis

Procedia PDF Downloads 144
1280 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five, the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how the individual is thinking, feeling, and behaving when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five Factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were 'best at' and the one they were 'worst at'. Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA < 0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved upon test-criterion relationships over and above the BFF. For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to fakability and reference bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced-choice method can be applied to other constructs like emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.
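
The incremental-validity comparison can be illustrated with a hierarchical regression on synthetic data; the data-generating values below are arbitrary and only demonstrate how figures such as the 25% vs. 31% variance explained would be computed.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
bff = rng.normal(size=(n, 5))                       # stand-in Big Five trait scores
bessa = 0.5 * bff + rng.normal(size=(n, 5))         # correlated skill scores (roughly r = .4-.6)
life_sat = bff @ np.full(5, 0.2) + bessa @ np.full(5, 0.15) + rng.normal(size=n)

# Step 1: traits only; Step 2: traits + skills; the difference is the incremental gain
r2_bff = LinearRegression().fit(bff, life_sat).score(bff, life_sat)
both = np.hstack([bff, bessa])
r2_both = LinearRegression().fit(both, life_sat).score(both, life_sat)
print(f"BFF alone: {r2_bff:.2f}, BFF + BESSA: {r2_both:.2f}, incremental: {r2_both - r2_bff:.2f}")
```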

Keywords: Big Five, forced-choice method, BFF, methods of measurement

Procedia PDF Downloads 94
1279 Obtainment of Systems with Efavirenz and Lamellar Double Hydroxide as an Alternative for Solubility Improvement of the Drug

Authors: Danilo A. F. Fontes, Magaly A. M. Lyra, Maria L. C. Moura, Leslie R. M. Ferraz, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim, Giovanna C. R. M. Schver, Ping I. Lee, Severino Alves-Júnior, José L. Soares-Sobrinho, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a first-choice drug in antiretroviral therapy, with high efficacy in the treatment of infection by the Human Immunodeficiency Virus, which causes Acquired Immune Deficiency Syndrome (AIDS). EFV has low solubility in water, resulting in a decrease in the dissolution rate and, consequently, in its bioavailability. Among the technological alternatives to increase solubility, Lamellar Double Hydroxides (LDH) have been applied in the development of systems with poorly water-soluble drugs. The use of analytical techniques such as X-Ray Diffraction (XRD), Infrared Spectroscopy (IR) and Differential Scanning Calorimetry (DSC) allowed the elucidation of the drug's interaction with the lamellar compounds. The objective of this work was to develop and characterize binary systems of EFV and LDH in order to increase the solubility of the drug. The LDH-CaAl was synthesized by the co-precipitation method from salt solutions of calcium nitrate and aluminum nitrate in basic medium. The EFV-LDH systems and their physical mixtures (PM) were obtained at different concentrations (5-60% of EFV) using the solvent technique described by Takahashi & Yamaguchi (1991). The characterization of the systems and the PMs was performed by XRD, IR, DSC and a dissolution test under non-sink conditions. The results showed improvements in the solubility of EFV when associated with LDH, due to a possible change in its crystal structure and the formation of an amorphous material. From the DSC results, one could see that the endothermic peak at 173°C, the temperature that corresponds to the melting of EFV in its crystalline form, was present in the PM results. For the EFV-LDH systems (with 5, 10 and 30% drug loading), this peak was not observed. XRD profiles of the PM showed well-defined peaks for EFV. Analyzing the XRD patterns of the systems, it was found that all of them showed complete attenuation of the characteristic peaks of the crystalline form of EFV. The IR technique showed that, in the results of the PM, there was the appearance of one band and the overlap of other bands, while the IR results of the systems with 5, 10 and 30% drug loading showed the disappearance of some bands and reduced intensity of a few others. The dissolution test under non-sink conditions showed that the systems with 5, 10 and 30% drug loading promoted a great increase in the solubility of EFV, but the system with 10% drug loading was the only one that could keep a substantial amount of drug in solution at different pHs.

Keywords: Efavirenz, Lamellar Double Hydroxides, Pharmaceutical Technology, Solubility

Procedia PDF Downloads 583
1278 Effects of Roasting as Preservative Method on Food Value of the Runner Groundnuts, Arachis hypogaea

Authors: M. Y. Maila, H. P. Makhubele

Abstract:

Roasting is one of the oldest preservation methods used for foods such as nuts and seeds. It is a process by which heat is applied to dry foodstuffs without the use of oil or water as a carrier. Groundnut seeds, also known as peanuts when sun-dried or roasted, are among the oldest oil crops and are mostly consumed as a snack, after roasting, in many parts of South Africa. However, roasting can denature proteins, destroy amino acids, decrease nutritive value and induce undesirable chemical changes in the final product. The aim of this study, therefore, was to evaluate the effect of various roasting times on the food value of runner groundnut seeds. A constant temperature of 160 °C and various time intervals (20, 30, 40, 50 and 60 min) were used for roasting groundnut seeds in an oven. Roasted groundnut seeds were then cooled and milled to flour. The milled sun-dried, raw groundnuts served as reference. The proximate analysis (moisture, energy and crude fats) was performed and the results were determined using standard methods. The antioxidant content was determined using HPLC. Mineral (cobalt, chromium, silicon and iron) contents were determined by first digesting the ash of sun-dried and roasted seed samples in 3 M hydrochloric acid and then measuring by Atomic Absorption Spectrometry. All results were subjected to ANOVA through SAS software. Relative to the reference, roasting time significantly (p ≤ 0.05) reduced the moisture (71%–88%), energy (74%) and crude fat (5%–64%) of the runner groundnut seeds, whereas the antioxidant content was significantly (p ≤ 0.05) increased (35%–72%) with increasing roasting time. Similarly, the tested mineral contents of the roasted runner groundnut seeds were also significantly (p ≤ 0.05) reduced at all roasting times: cobalt (21%–83%), chromium (48%–106%) and silicon (58%–77%). However, the iron content was not significantly (p ≤ 0.05) affected. Generally, the tested runner groundnut seeds had higher food value in the raw state than in the roasted state, except for the antioxidant content. Moisture is a critical factor affecting the shelf life, texture and flavor of the final product. Loss of moisture ensures prolonged shelf life, which contributes to the stability of the roasted peanuts. Also, the increased antioxidant content in roasted groundnuts is beneficial, as antioxidants are health-promoting compounds. In conclusion, the overall reduction in the proximate and mineral contents of the runner groundnut seeds due to roasting is sufficient to suggest an influence of roasting time on the food value of the final product and its shelf life.

Keywords: dry roasting, legume, oil source, peanuts

Procedia PDF Downloads 287
1277 Controlling Shape and Position of Silicon Micro-nanorolls Fabricated using Fine Bubbles during Anodization

Authors: Yodai Ashikubo, Toshiaki Suzuki, Satoshi Kouya, Mitsuya Motohashi

Abstract:

Functional microstructures such as wires, fins, needles, and rolls are currently being applied to a variety of high-performance devices. In this context, a roll structure (silicon micro-nanoroll) was formed on the surface of a silicon substrate via fine bubbles during anodization using an extremely diluted hydrofluoric acid solution (HF + H₂O). The as-formed roll had a microscale length and a width of approximately 1 µm. The roll was wound 3-10 times, and the thickness of the film forming the rolls was about 10 nm. Thus, it is promising for application as a distinct device material. These rolls can function as capsules and/or pipelines. To date, the number of rolls and the roll length have been controlled by the anodization conditions. In general, controlling the position and the winding state of the roll is required for device applications; however, this has not been discussed. Grooves formed on the silicon surface before anodization might be useful to control the bubbles. In this study, we investigated the effect of such grooves on the position and shape of the rolls. The surfaces of the silicon wafers were anodized. The starting material was p-type (100) single-crystalline silicon wafers. The resistivity of the wafers is 5-20 Ω·cm. Grooves were formed on the surface of the substrate before anodization using sandpaper and a diamond pen. The average width and depth of the grooves were approximately 1 µm and 0.1 µm, respectively. The HF concentration {HF/(HF + C₂H₅OH + H₂O)} was 0.001% by volume. The C₂H₅OH concentration {C₂H₅OH/(HF + C₂H₅OH + H₂O)} was 70%. A vertical single-tank cell and a Pt cathode were used for anodization. The silicon rolls were observed by field-emission scanning electron microscopy (FE-SEM; JSM-7100, JEOL). The atomic bonding state of the rolls was evaluated using X-ray photoelectron spectroscopy (XPS; ESCA-3400, Shimadzu). For a straight groove, the rolls were formed along the groove. This indicates that the orientation of the rolls can be controlled by the grooves. For a lattice-like groove, the rolls formed inside the lattice and along its long sides. In other words, the aspect ratio of the lattice is very important for roll formation. In addition, many rolls were formed and the winding states were not uniform when the lattice size was too large. On the other hand, no rolls were formed for a small lattice. These results indicate that there is an optimal lattice size for roll formation. In the future, we plan to form rolls using grooves formed by lithography techniques instead of sandpaper and a diamond pen. Furthermore, rolls containing nanoparticles will be formed for nanodevices.

Keywords: silicon roll, anodization, fine bubble, microstructure

Procedia PDF Downloads 18
1276 Maturity Level of Knowledge Management in Whole Life Costing in the UK Construction Industry: An Empirical Study

Authors: Ndibarefinia Tobin

Abstract:

The UK construction industry has been under pressure for many years to produce economical buildings which offer value for money, not only during the construction phase, but more importantly, during the full life of the building. Whole life costing is considered an economic analysis tool that takes into account the total cost of investment in, ownership, operation and subsequent disposal of a product or system to which the whole life costing method is being applied. In spite of its importance, the practice is still crippled by the lack of tangible evidence, 'know-how' skills and knowledge of the practice, i.e. the lack of professionals with the knowledge of and training in the use of the practice in construction projects. This situation is compounded by the absence of available data on whole life costing from relevant projects, the lack of data collection mechanisms, and so on. The aforementioned problems have forced many construction organisations to adopt project enhancement initiatives to boost their performance in the use of whole life costing techniques so as to produce economical buildings which offer value for money during the construction stage and over the whole life of the building/asset. The management of knowledge in whole life costing is considered one of the many project enhancement initiatives, and it is becoming imperative for the performance and sustainability of an organisation. Procuring building projects using the whole life costing technique is heavily reliant on the knowledge, experience, ideas and skills of workers, which come from many sources including other individuals, electronic media and documents. Due to the diversity of knowledge, capabilities and skills of employees across an organisation, it is important that they are directed and coordinated efficiently so as to capture, retrieve and share knowledge in order to improve the performance of the organisation. The implementation of the knowledge management concept reaches different levels in each organisation. Measuring the maturity level of knowledge management in whole life costing practice will paint a comprehensible picture of how knowledge is managed in construction organisations. Purpose: The purpose of this study is to identify the knowledge management maturity of UK construction organisations adopting whole life costing in construction projects. Design/methodology/approach: This study adopted a survey method, conducted by distributing questionnaires to large construction companies that implement knowledge management activities in whole life costing practice in construction projects. Four levels of knowledge management maturity were proposed in this study. Findings: The results obtained in the study show that 34 contractors are at the practised level, 26 contractors at the managed level and 12 contractors at the continuously improved level.

Keywords: knowledge management, whole life costing, construction industry, knowledge

Procedia PDF Downloads 244
1275 Placebo Analgesia in Older Age: Evidence from Event-Related Potentials

Authors: Angelika Dierolf, K. Rischer, A. Gonzalez-Roldan, P. Montoya, F. Anton, M. Van der Meulen

Abstract:

Placebo analgesia is a powerful cognitive endogenous pain modulation mechanism with high relevance in pain treatment. Older people would especially benefit from non-pharmacological pain interventions, since this age group is disproportionately affected by acute and chronic pain, while pharmacological treatments are less suitable due to polypharmacy and age-related changes in drug metabolism. Although aging is known to affect neurobiological and physiological aspects of pain perception, such as changes in pain threshold and pain tolerance, its effects on cognitive pain modulation strategies, including placebo analgesia, have hardly been investigated so far. In the present study, we are assessing placebo analgesia in 35 older adults (60 years and older) and 35 younger adults (between 18 and 35 years). Acute pain was induced with short transdermal electrical pulses to the inner forearm, using a concentric stimulating electrode. Stimulation intensities were individually adjusted to the participant's threshold. Next to the stimulation site, we applied sham transcutaneous electrical nerve stimulation (TENS). Participants were informed that sometimes the TENS device would be switched on (placebo condition), and sometimes it would be switched off (control condition). In reality, it was always switched off. Participants received alternating blocks of painful stimuli in the placebo and control conditions and were asked to rate the intensity and unpleasantness of each stimulus on a visual analog scale (VAS). Pain-related evoked potentials were recorded with a 64-channel EEG. Preliminary results show a reduced placebo effect in older compared to younger adults in both the behavioral and the neurophysiological data. Older people experienced less subjective pain reduction under sham TENS treatment compared to younger adults, as evidenced by the VAS ratings. The N1 and P2 event-related potential components were generally reduced in the older group. While younger adults showed a reduced N1 and P2 under sham TENS treatment, this reduction was considerably smaller in older people. This reduced placebo effect in the older group suggests that cognitive pain modulation is altered in aging and may at least partly explain why older adults experience more pain. Our results highlight the need for a better understanding of the efficacy of non-pharmacological pain treatments in older adults and how these can be optimized to meet the specific requirements of this population.

Keywords: placebo analgesia, aging, acute pain, TENS, EEG

Procedia PDF Downloads 141
1274 Cytokine Profiling in Cultured Endometrial Cells after Hormonal Treatment

Authors: Mark Gavriel, Ariel J. Jaffa, Dan Grisaru, David Elad

Abstract:

The human endometrium-myometrium interface (EMI) is the uterine inner barrier without a separating layer. It is composed of endometrial epithelial cells (EEC) and endometrial stromal cells (ESC) in the endometrium and myometrial smooth muscle cells (MSMC) in the myometrium. The EMI undergoes structural remodeling during the menstrual cycle, which is essential for human reproduction. Recently, we co-cultured a layer-by-layer in vitro model of EEC, ESC and MSMC on a synthetic membrane for mechanobiology experiments. We also treated the model with progesterone and β-estradiol in order to mimic the in vivo receptive uterus. In the present study, we analyzed the cytokine profile in a single layer of EEC in the hormonally treated in vitro model of the EMI. The methodology of this research involved simple tissue engineering. First, we cultured commercial EEC (RL95-2, ATCC® CRL-1671™) in a 24-well plate. Then, we applied a hormonal stimulation protocol with 17-β-estradiol and progesterone in time-dependent concentrations that mimic the human menstrual cycle. We collected cell supernatant samples for the control, pre-ovulation, ovulation and post-ovulation periods for analysis of the secreted proteins and cytokines. The cytokine profiling was performed using the Proteome Profiler Human XL Cytokine Array Kit (R&D Systems, Inc., USA), which can detect 105 human soluble cytokines. The relative quantification of all the cytokines will be analyzed using xMAP – LUMINEX. We conducted a fishing expedition with the four-membrane Proteome Profiler. We processed the images, quantified the spot intensities and normalized these values by the negative control and reference spots on each membrane. Analysis of the relative quantities that reflected a change higher than 5% of the kit's control points clearly showed that there are significant changes in the cytokine levels for the inflammation and angiogenesis pathways. Analyses of tissue-engineered models of the uterine wall will enable deeper investigation of molecular and biomechanical aspects of early reproductive stages (e.g. the window of implantation) or the development of pathologies.

Keywords: tissue-engineering, hormonal stimuli, reproduction, multi-layer uterine model, progesterone, β-estradiol, receptive uterine model, fertility

Procedia PDF Downloads 132
1273 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale

Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo

Abstract:

Accurate elevation data is critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were applied to acquire images at three heights, namely 40 m, 80 m, and 120 m above ground. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition using a UAS at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels and an uncalibrated dataset without using the surveyed coordinates of the ground control panels. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding coordinates of the ground control panels. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics to assess the performance of each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro UAS. With calibration for the same UAS, the accuracies were significantly improved, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results showed an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated using the Phantom 4 RTK UAS at 40 m height. The accuracy of elevation determination decreased as the flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results of this study show that calibration with ground control panels improves the accuracy of elevation determination, especially for the UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data with a substantial number of surveyed ground control panels for the 40 m dataset. The Phantom 4 RTK UAS provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation for precision agriculture applications.
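
For reference, the two evaluation metrics can be computed as in the sketch below; the surveyed and UAS-derived elevations are hypothetical values, not data from the study.

```python
import numpy as np

surveyed = np.array([812.41, 812.95, 813.30, 812.10])   # RTK-surveyed panel elevations (m)
uas_dem  = np.array([812.44, 812.93, 813.35, 812.08])   # elevations sampled from the UAS elevation model (m)

rmse = np.sqrt(np.mean((uas_dem - surveyed) ** 2))       # root mean squared error
ss_res = np.sum((surveyed - uas_dem) ** 2)
ss_tot = np.sum((surveyed - surveyed.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                               # coefficient of determination
print(f"RMSE = {rmse:.3f} m, R² = {r2:.3f}")
```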

Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)

Procedia PDF Downloads 164
1272 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total-field magnetometer arrays. Our research has focused on the development of a vertically integrated suite of platforms all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access to data and meta-data for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
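
A minimal sketch of the point-dipole forward model that classification approaches of this kind typically build on is given below; the sensor position, dipole moment and ambient field direction are hypothetical, and the actual array geometry, inversion and remanence handling of the system described above are not reproduced.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(r_sensor, r_dipole, m):
    """Magnetic flux density (T) of a point dipole with moment m (A*m^2) at r_sensor."""
    r = np.asarray(r_sensor, float) - np.asarray(r_dipole, float)
    rmag = np.linalg.norm(r)
    rhat = r / rmag
    return MU0 / (4 * np.pi * rmag**3) * (3 * np.dot(m, rhat) * rhat - np.asarray(m, float))

# Total-field anomaly seen by a scalar magnetometer: projection onto the Earth's field direction
earth_dir = np.array([0.0, 0.5, -0.866])                  # hypothetical unit vector (~60 deg inclination)
b = dipole_field([1.0, 0.0, 1.5], [0.0, 0.0, 0.0], m=[0.0, 0.0, 0.2])
print(np.dot(b, earth_dir) * 1e9, "nT")
```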

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 464
1271 The Removal of Commonly Used Pesticides from Wastewater Using Golden Activated Charcoal

Authors: Saad Mohamed Elsaid Onaizah

Abstract:

One of the reasons for the intensive use of pesticides is to protect agricultural crops and orchards from pests or agricultural worms. The period of time that pesticides stay inside the soil is estimated at about 2 to 12 weeks. Perhaps the most important cause of groundwater pollution is the easy leakage of these harmful pesticides from the soil into the aquifers. This research aims to find the best ways to use activated charcoal treated with gold nitrate solution for the purpose of removing deadly pesticides from aqueous solution via the adsorption phenomenon. The most used pesticides in Egypt were selected, namely Malathion, Methomyl, Abamectin and Thiamethoxam. Activated charcoal doped with gold ions was prepared by applying chemical and thermal treatments to activated charcoal using gold nitrate solution. Adsorption of the studied pesticides onto activated carbon/Au occurred mainly by chemical adsorption, forming complexes with the gold immobilised on the activated carbon surfaces. Gold atoms were also considered to act as a catalyst for cracking the pesticide molecules. Gold-activated charcoal is a low-cost material owing to the use of very low concentrations of gold nitrate solution. A great ability of the activated charcoal to remove the selected pesticides was observed, due to the positive charge of the gold ion, in addition to other active groups such as oxygen functional groups and lignin cellulose. The presence of pores of different sizes on the surface of the activated charcoal is the driving force for the good adsorption efficiency in the removal of the pesticides under study. The surface area of the prepared char as well as its active groups were determined using infrared spectroscopy and scanning electron microscopy. Several factors affecting the adsorption capacity of the activated charcoal were studied in order to reach the highest adsorption capacity, such as the weight of the charcoal, the concentration of the pesticide solution, the contact time, and the pH. Batch adsorption experiments showed that the maximum adsorption of the selected insecticides occurred at a contact time of 80 minutes and a pH of 7.70. These promising results were confirmed by equilibrium, kinetic and thermodynamic studies of the effect of the various operating factors, with the Langmuir model describing the effectiveness of the adsorbent, which showed adsorption capacities higher than those of most other adsorbents.
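
As an illustration of the equilibrium analysis mentioned above, the sketch below fits the Langmuir isotherm q_e = q_max·K·C_e/(1 + K·C_e) to synthetic data; the concentrations and uptakes are hypothetical, not results from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K):
    """Equilibrium uptake q_e (mg/g) as a function of equilibrium concentration Ce (mg/L)."""
    return q_max * K * Ce / (1.0 + K * Ce)

Ce = np.array([1.0, 5.0, 10.0, 20.0, 40.0])      # hypothetical equilibrium concentrations (mg/L)
qe = np.array([8.2, 25.1, 35.4, 44.8, 50.3])     # hypothetical uptakes (mg/g)

(q_max, K), _ = curve_fit(langmuir, Ce, qe, p0=[60.0, 0.1])
print(f"q_max = {q_max:.1f} mg/g, K_L = {K:.3f} L/mg")
```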

Keywords: wastewater, pesticide pollution, adsorption, activated carbon

Procedia PDF Downloads 79
1270 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of its practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of this potentially very important range of the electromagnetic spectrum is known under the name of the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest of suitable physical systems as the major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', being driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent non-equal diagonal matrix elements. This contradicts the conventional assumption routinely made in quantum optics that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible ways towards experimental observation and practical implementation of the predicted effect are discussed too.
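
Schematically, the assumption can be written as follows (an illustrative matrix representation, not the authors' notation), where the inequality of the permanent diagonal elements is what enables the low-frequency emission:

```latex
% Transition dipole operator of the two-level 'atom' in the basis {|g>, |e>}:
% the off-diagonal element drives the ordinary high-frequency transitions, while
% unequal permanent diagonal elements are the additional ingredient assumed here.
\hat{d} \;=\;
\begin{pmatrix}
d_{gg} & d_{ge} \\
d_{ge}^{*} & d_{ee}
\end{pmatrix},
\qquad d_{gg} \neq d_{ee}.
```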

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 271
1269 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected. Moreover, the complete-data subset might be strongly distorted. In addition, the reason that data is missing might itself contain information, which is ignored by that approach. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values rather than the usually small percentage of complete data (baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data into the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set has been found for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
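The sketch below illustrates the evaluation step described above—randomly masking known values, imputing them, and scoring the imputation against the withheld truth—on toy search-subscription columns. The column names, the KNN imputer, and the parameter grid are assumptions for illustration, not the authors' actual setup.

```python
# Sketch: evaluate an imputation algorithm by masking known entries and scoring RMSE.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
# Toy search-subscription data: desired rooms, maximum price (CHF), living area (m^2)
data = pd.DataFrame({
    "rooms": rng.integers(1, 6, 500).astype(float),
    "max_price": rng.normal(2500, 600, 500),
    "area": rng.normal(90, 25, 500),
})

mask = rng.random(data.shape) < 0.2      # hide 20% of the entries
masked = data.mask(mask)                 # masked entries become NaN

for k in (3, 5, 10):                     # small parameter search over n_neighbors
    imputed = KNNImputer(n_neighbors=k).fit_transform(masked)
    rmse = np.sqrt(np.mean((imputed[mask] - data.values[mask]) ** 2))
    print(f"k={k}: RMSE on withheld entries = {rmse:.1f}")
```

The parameter value with the lowest error on the withheld entries would then be used to impute the real missing values before fitting the price distributions.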

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 285
1268 Modeling and Simulation of Primary Atomization and Its Effects on Internal Flow Dynamics in a High Torque Low Speed Diesel Engine

Authors: Muteeb Ulhaq, Rizwan Latif, Sayed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines stand out in terms of efficiency, reliability and adaptability. Most research and development up to now has been directed towards high-speed diesel engines for commercial use. In these engines, the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high-torque low-speed engines, the requirements are altogether different. These engines are mostly used in the maritime industry, in agriculture, and in static applications such as compressors. Unfortunately, due to a lack of research and development, these engines have low efficiency and high soot emissions, and one of the most effective ways to overcome these issues is efficient combustion in the engine cylinder; the fuel spray atomization process plays a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and the atomization process is of great importance. In this research, we examine the effects of primary breakup modeling on the spray characteristics under diesel engine conditions. The KH-ACT model is applied to capture the effect of aerodynamics in the engine cylinder as well as the cavitation and turbulence generated inside the injector. It is a modified form of the most commonly used KH model, which considers only the aerodynamically induced breakup based on the Kelvin–Helmholtz instability. Our model is extensively evaluated by performing 3-D time-dependent simulations in OpenFOAM, an open-source flow solver. Spray characteristics such as spray penetration, liquid length, spray cone angle and Sauter mean diameter (SMD) were validated by comparing the results of OpenFOAM and MATLAB. Including the effects of cavitation and turbulence enhances primary breakup, leading to smaller droplet sizes, a decrease in liquid penetration, and an increase in the radial dispersion of the spray. All these properties favor early evaporation of the fuel, which enhances engine efficiency.
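For readers unfamiliar with the spray metric compared above, the short sketch below computes the Sauter mean diameter (D32 = Σd³ / Σd²) for a droplet population; the droplet diameters are made-up values, not output from the simulations described here.

```python
# Sketch: Sauter mean diameter (SMD, D32) of a droplet population.
import numpy as np

def sauter_mean_diameter(diameters):
    """D32 = sum(d^3) / sum(d^2) over all droplets."""
    d = np.asarray(diameters, dtype=float)
    return np.sum(d ** 3) / np.sum(d ** 2)

droplets_um = np.array([12.0, 18.5, 25.0, 9.8, 30.2, 15.4])  # assumed diameters in micrometres
print(f"SMD = {sauter_mean_diameter(droplets_um):.1f} um")
```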

Keywords: Kelvin–Helmholtz instability, OpenFOAM, primary breakup, Sauter mean diameter, turbulence

Procedia PDF Downloads 212
1267 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is 'intimate' and 'safe' and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what 'exposure' means in interpersonal communication contexts, how the distinction between the 'private' and the 'public' nature of information is negotiated online, and how the 'audiences' we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media and social networks research, as well as the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries spanning almost a decade. The main conclusions include: (1) There is a clear and evidenced shift in users' perception of the degree of 'security' and 'familiarity' of the Web between the pre- and post-Web 2.0 eras. The role of social media in this shift was catalytic. (2) Basic Web 2.0 applications changed the nature of the Internet itself dramatically, transforming it from a place reserved for 'elite users / technical knowledge keepers' into a place of 'open sociability' for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of 'audience' we address in interpersonal communication. The previous 'general and unknown audience' of personal home pages was converted into an 'individual and personal' audience chosen by the user under various criteria. (4) The way we negotiate the 'private' and 'public' nature of personal information has changed in a fundamental way. (5) The distinctive features of the mediated environment of online communication and the critical changes that have occurred since the advance of Web 2.0 lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 371
1266 Persistence of Ready Mix (Chlorpyriphos 50% + Cypermethrin 5%), Cypermethrin and Chlorpyriphos in Soil under Okra Fruits

Authors: Samriti Wadhwa, Beena Kumari

Abstract:

Background and Significance: Residue levels of a ready mix (chlorpyriphos 50% and cypermethrin 5%), and of cypermethrin and chlorpyriphos individually, in sandy loam soil under okra fruits (variety Varsha Uphar) were determined in a field experiment conducted at the Research Farm of the Department of Entomology, Chaudhary Charan Singh Haryana Agricultural University, Hisar, Haryana, India. The persistence behavior of cypermethrin and chlorpyriphos was studied following application of a pre-mix formulation of insecticides, Action-505EC, and of chlorpyriphos (Radar 20 EC) and cypermethrin (Cyperkill 10 EC) at the recommended dose and double the recommended dose, along with a control, at the fruiting stage. Pesticide application also leads to a decline in soil acarine fauna, which is instrumental in the breakdown of litter and thereby the release of minerals into the soil, so this study also helps evaluate the safety of these pesticides for soil health. Methodology: Action-505EC (chlorpyriphos 50% and cypermethrin 5%) at 275 g a.i. ha⁻¹ (single dose) and 550 g a.i. ha⁻¹ (double dose), chlorpyriphos (Radar 20 EC) at 200 g a.i. ha⁻¹ (single dose) and 400 g a.i. ha⁻¹ (double dose) and cypermethrin (Cyperkill 10 EC) at 50 g a.i. ha⁻¹ (single dose) and 100 g a.i. ha⁻¹ (double dose) were applied at the fruiting stage of the okra crop. Soil samples from the okra field were collected periodically at 0 (1 h after spray), 1, 3, 5, 7, 10 and 15 days after application and at harvest, together with a control soil sample. After air drying, clean-up on Florisil and activated charcoal, and elution with hexane:acetone (9:1), residues in the soils were estimated with a gas chromatograph equipped with a capillary column and an electron capture detector. Results: No persistence of cypermethrin from the ready mix was observed in soil under okra fruits at either the single or the double dose. For chlorpyriphos from the ready mix, average initial deposits on day 0 (1 h after treatment) were 0.015 mg kg⁻¹ and 0.036 mg kg⁻¹, which persisted up to 5 days and up to 7 days for the single and double dose, respectively; thereafter residues fell below the detectable level of 0.010 mg kg⁻¹. Studies on cypermethrin applied individually revealed average initial deposits of 0.008 mg kg⁻¹ and 0.012 mg kg⁻¹, which persisted up to 3 days and 5 days for the single and double dose, respectively, after which residues fell below the detectable level. The initial deposits of chlorpyriphos applied individually were 0.055 mg kg⁻¹ and 0.113 mg kg⁻¹, which persisted up to 7 days and 10 days at the lower and higher dose, respectively, after which residues fell below the determination level. Conclusion: In soil under the okra crop, only cypermethrin applied individually persisted at both doses, whereas no persistence of cypermethrin from the ready mix was observed. The persistence of chlorpyriphos applied individually was greater than that of chlorpyriphos from the ready mix at both doses. Overall, chlorpyriphos persisted longer in soil under the okra crop than cypermethrin.

Keywords: chlorpyriphos, cypermethrin, okra, ready mix, soil

Procedia PDF Downloads 163
1265 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving necessary quality control can be exploited through the data-based prediction of product quality and states. However, the use of machine learning applications in series production is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this limited data variability. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
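The sketch below illustrates the regression-versus-classification framing described above on synthetic data standing in for the leakage volume flow; the features, threshold, and random-forest models are assumptions for illustration and are not the authors' methods or data.

```python
# Sketch: compare a regressor (leakage volume flow) with a classifier (inspection decision).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 5))                               # assumed process features
leakage = 2.0 + 0.5 * X[:, 0] + rng.normal(0, 0.3, 2000)     # synthetic leakage volume flow
ok = (leakage < 2.5).astype(int)                             # assumed pass/fail threshold

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(X, leakage, ok, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, c_tr)

print("regression MAE:", mean_absolute_error(y_te, reg.predict(X_te)))
print("classification accuracy:", accuracy_score(c_te, clf.predict(X_te)))
```

Running both pipelines on the same split makes the business-understanding question concrete: whether the continuous quantity or the inspection decision is the more useful prediction target.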

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 144
1264 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to perform as well as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function called the 'objective function' that is to be maximized or minimized by choosing input parameters called 'degrees of freedom' within an allowed domain called the 'search space' and computing the values of the objective function for these inputs. The problem becomes more complex when the design has more than one objective. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases. At this point, the designer must decide which point on the curve to choose. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and the solution then progresses towards the optimal point by using operators such as selection, combination, crossover and/or mutation. These operators are applied to the old solutions ('parents') so that new sets of design variables ('children') appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automotive vehicles. The coupling of Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
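As a deliberately small illustration of the evolutionary loop named above (random initialization, mutation, survival of the non-dominated set), the sketch below solves a textbook two-objective toy problem with a single design variable; it is not the authors' CFD-coupled MOEA, and the objective functions and population sizes are assumptions.

```python
# Sketch: a minimal multi-objective evolutionary loop on a toy two-objective problem.
import numpy as np

rng = np.random.default_rng(1)

def objectives(x):
    # Toy MOP: minimize f1 = x^2 and f2 = (x - 2)^2 over one design variable.
    return np.array([x ** 2, (x - 2.0) ** 2])

def non_dominated(pop, fits):
    """Keep only solutions not dominated by any other (both objectives minimized)."""
    keep = []
    for i, fi in enumerate(fits):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(fits) if j != i)
        if not dominated:
            keep.append(i)
    return [pop[i] for i in keep], [fits[i] for i in keep]

pop = list(rng.uniform(-5, 5, 40))                    # random initial "parents"
for _ in range(50):                                   # generations
    fits = [objectives(x) for x in pop]
    parents, _ = non_dominated(pop, fits)             # Pareto-based survival (selection)
    children = [p + rng.normal(0, 0.2) for p in rng.choice(parents, 40)]  # mutation
    pop = parents + children

pareto_x, pareto_f = non_dominated(pop, [objectives(x) for x in pop])
front = sorted(tuple(f) for f in pareto_f)
print("approximate Pareto front (f1, f2), first points:", front[:5])
```

In a CFD-coupled MOEA, the call to `objectives` would instead launch a flow solution for each candidate geometry, which is why parallelization matters so much in practice.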

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 256
1263 Applying Organic Natural Fertilizer to 'Orange Rubis' and 'Farbaly' Apricot Growth, Yield and Fruit Quality

Authors: A. Tarantino, F. Lops, G. Lopriore, G. Disciglio

Abstract:

Biostimulants are organic fertilizers that can be applied in agriculture in order to increase nutrient uptake, growth and development of plants and to improve quality, productivity and positive environmental impacts. The aim of this study was to test the effects of some commercial biostimulant products (Bion® 50 WG, Hendophyt® PS, Ergostim® XL and Radicon®) on the vegetative and productive behavior and the qualitative characteristics of the fruits of two emerging apricot cultivars (Orange Rubis® and Farbaly®). The study was conducted during the spring-summer season of 2015 in a commercial orchard located in the agricultural area of Cerignola (Foggia district, Apulia region, Southern Italy). Eight-year-old apricot trees, cv 'Orange Rubis' and 'Farbaly', were used. The data recorded during the trial were: shoot length, total number of flower buds, flower bud drop, and time of flowering and fruit set. Total yield of fruits per tree and quality parameters were determined. The experimental data showed some specific differences among the biostimulant treatments. Concerning the yield of 'Orange Rubis', except for the Bion treatment, the other three biostimulant treatments showed values tendentially lower than the control. The yield of 'Farbaly' was lower for the Bion and Hendophyt treatments and higher for the Ergostim treatment compared with the yield of the untreated control. Concerning the soluble solids content, the juice of 'Farbaly' fruits always had a higher content than that of 'Orange Rubis'. In particular, the Bion and Hendophyt treatments showed, at both harvests, values tendentially higher than the control, whereas the four biostimulant treatments did not significantly affect this parameter in 'Orange Rubis'. With regard to fruit firmness, some differences were observed between the two harvest dates and among the four biostimulant treatments. At the first harvest date, 'Orange Rubis' treated with the Bion and Hendophyt biostimulants showed texture values tendentially lower than the control, while 'Farbaly' showed fruit firmness values significantly lower than the control for all biostimulant treatments. At the second harvest, almost all biostimulant treatments in both 'Orange Rubis' and 'Farbaly' showed values lower than the control; only 'Farbaly' treated with Radicon showed a higher value compared to the control.

Keywords: apricot, fruit quality, growth, organic natural fertilizer

Procedia PDF Downloads 326
1262 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery

Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović

Abstract:

Capsaicin is a naturally occurring alkaloid extracted from the fruit of different Capsicum species. It has been employed topically to treat many conditions such as rheumatoid arthritis, osteoarthritis, cancer pain and nerve pain in diabetes. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of capsaicin after intravenous administration make topical application of capsaicin advantageous. In this study, we evaluated differences in the dissolution characteristics of an 11 mg capsaicin patch (purchased on the market) at different dissolution rotation speeds. The patch area is 308 cm² (22 cm x 14 cm; it contains 36 µg of capsaicin per square centimeter of adhesive). USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted using USP Apparatus 5 (n=6) on an ERWEKA DT800 dissolution tester (paddle type) with the addition of a disc. A 9 cm² piece cut from the 308 cm² patch was placed against a disc (delivery side up), retained with the stainless-steel screen, and exposed to 500 mL of phosphate buffer solution, pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and different rotation speeds (50 ± 5, 100 ± 5 and 150 ± 5 rpm). Aliquots of 5 mL were withdrawn at various time intervals (1, 4, 8 and 12 hours) and replaced with 5 mL of dissolution medium. The withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method was developed, optimized and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v), the flow rate was 0.9 mL/min, the injection volume 10 μL and the detection wavelength 222 nm. The RP-LC method used is simple, sensitive and accurate and can be applied for fast (total chromatographic run time of 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, the relative difference in the dissolution rate of capsaicin after 12 hours increased with the dissolution rotation speed (100 rpm vs 50 rpm: 84.9 ± 11.3%; 150 rpm vs 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, 7 and a paddle-over-extraction-cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) can be considered a discriminatory test, able to point out differences in the dissolution rate of capsaicin at different rotation speeds.
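The sketch below shows the cumulative-release arithmetic implied by the sampling scheme above (5 mL withdrawn and replaced in 500 mL of medium), correcting each time point for drug already removed in earlier aliquots; the concentration values are invented placeholders, not measured data.

```python
# Sketch: cumulative drug released with correction for withdrawn-and-replaced aliquots.
def cumulative_release(conc_mg_per_ml, v_medium=500.0, v_sample=5.0):
    """For each sampling point, add back the drug removed in all previous aliquots."""
    released, withdrawn = [], 0.0
    for c in conc_mg_per_ml:
        amount = c * v_medium + withdrawn      # mg currently in vessel + mg already removed
        released.append(round(amount, 3))
        withdrawn += c * v_sample              # mg taken out with this 5 mL aliquot
    return released

# Hypothetical capsaicin concentrations (mg/mL) at 1, 4, 8 and 12 h
print(cumulative_release([0.002, 0.006, 0.010, 0.013]))
```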

Keywords: capsaicin, in vitro, patch, RP-LC, transdermal

Procedia PDF Downloads 227
1261 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)

Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami

Abstract:

Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of the progress made in adhesive dentistry, composite restorations keep failing due to secondary caries, which occur due to environmental factors in the oral cavity. Therefore, a precise assessment of the effective marginal sealing of a restoration is highly required. The aim of this study was to evaluate gap formation at the composite/cavity-wall interface, with or without phosphoric acid etching, using SS-OCT. Materials and Methods: Round tapered cavities (2×2 mm) were prepared in three locations, the mid-coronal, cervical, and root regions of bovine incisor teeth, in two groups (SE and PA groups). While the self-etching adhesive (Clearfil SE Bond) was applied to both groups, group PA had first been pretreated with phosphoric acid etching (K-Etchant gel). Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin. Following 5000 thermal cycles, three cross-sectional scans were obtained from each cavity using OCT at a 1310 nm wavelength at 0°, 60°, and 120°. Scanning was repeated after two months to monitor gap progression. The average percentage of gap length was then calculated using image analysis software, and the difference between the means of the two groups was statistically analyzed by t-test. Subsequently, the results were confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: Pretreatment with phosphoric acid etching (group PA) led to significantly bigger gaps in the mid-coronal and cervical cavities compared to the SE group, while in the root cavities no significant difference was observed between the groups. On the other hand, the gaps formed in the root cavities were significantly bigger than those in the mid-coronal and cervical cavities within the same group. This study investigated the effect of phosphoric acid on gap-length progression in composite restorations. In conclusion, phosphoric acid etching did not reduce gap formation in any region of the tooth. Significance: The cervical region of the tooth was more prone to gap formation than the mid-coronal region, especially when phosphoric acid pre-etching was added.
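A minimal sketch of the statistical comparison described above is given below: per-scan gap-length percentages for the two groups are compared with an independent-samples t-test. The numerical values are invented for illustration and do not correspond to the study's measurements.

```python
# Sketch: compare mean gap-length percentages of two groups with an independent t-test.
import numpy as np
from scipy import stats

# Hypothetical gap-length percentages (gap length / interface length * 100) per cross-section
se_group = np.array([12.1, 15.4, 9.8, 14.0, 11.2, 13.5])   # self-etch only
pa_group = np.array([21.6, 18.9, 24.3, 20.1, 22.8, 19.5])   # phosphoric acid pre-etched

t_stat, p_value = stats.ttest_ind(se_group, pa_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```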

Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives

Procedia PDF Downloads 221
1260 Synthesis and Characterization of pH-Responsive Nanocarriers Based on POEOMA-b-PDPA Block Copolymers for RNA Delivery

Authors: Bruno Baptista, Andreia S. R. Oliveira, Patricia V. Mendonca, Jorge F. J. Coelho, Fani Sousa

Abstract:

Drug delivery systems are designed to allow adequate protection and controlled delivery of drugs to specific locations. These systems aim to reduce side effects and control the biodistribution profile of drugs, thus improving therapeutic efficacy. This study involved the synthesis of polymeric nanoparticles based on amphiphilic diblock copolymers comprising a biocompatible hydrophilic segment, poly(oligo(ethylene oxide) methyl ether methacrylate) (POEOMA), and a pH-sensitive block, poly(2-(diisopropylamino)ethyl methacrylate) (PDPA). The objective of this work was the development of polymeric pH-responsive nanoparticles to encapsulate and carry small RNAs as a model, in order to further develop non-coding RNA delivery systems with therapeutic value. The pH responsiveness of PDPA allows the electrostatic interaction of these copolymers with nucleic acids at acidic pH, as a result of the protonation of the tertiary amine groups of this polymer at pH values below its pKa (around 6.2). Initially, the molecular weight parameters and chemical structure of the block copolymers were determined by size exclusion chromatography (SEC) and nuclear magnetic resonance (1H-NMR) spectroscopy, respectively. Complexation with small RNAs was then verified, generating polyplexes with sizes ranging from 300 to 600 nm and with encapsulation efficiencies around 80%, depending on the molecular weight of the polymers, their composition, and the concentration used. The effect of pH on the morphology of the nanoparticles was evaluated by scanning electron microscopy (SEM), and it was verified that at higher pH values the particles tend to lose their spherical shape. Since this work aims to develop systems for the delivery of non-coding RNAs, studies on RNA protection (contact with RNase, FBS, and trypsin) and cell viability were also carried out. It was found that the polyplexes provide some protection against constituents of the cellular environment and show no cellular toxicity. In summary, this research work contributes to the development of pH-sensitive polymers capable of protecting and encapsulating RNA in a relatively simple and efficient manner, to be further applied in drug delivery to specific sites where pH may play a critical role, as can occur in several cancer environments.
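As a back-of-the-envelope illustration of why PDPA complexes RNA only at acidic pH, the sketch below evaluates the Henderson–Hasselbalch fraction of protonated tertiary amines, assuming the pKa of about 6.2 quoted above; the exact pH values chosen are arbitrary examples.

```python
# Sketch: fraction of protonated PDPA amines vs pH (Henderson-Hasselbalch relation).
def protonated_fraction(ph, pka=6.2):
    """Fraction of basic amine groups carrying a positive charge at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (5.0, 6.2, 7.4):
    print(f"pH {ph}: {100 * protonated_fraction(ph):.0f}% of PDPA amines protonated")
```

At endosomal or tumour-like acidic pH the block is largely charged and binds RNA electrostatically, whereas near physiological pH 7.4 most amines are neutral, consistent with the pH-dependent behaviour reported above.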

Keywords: drug delivery systems, pH-responsive polymers, POEOMA-b-PDPA, small RNAs

Procedia PDF Downloads 259