Search results for: rule curve
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1804

304 A Linguistic Analysis of the Inconsistencies in the Meaning of Some -er Suffix Morphemes

Authors: Amina Abubakar

Abstract:

English, like any other language, is rich in arbitrary, conventional symbols, which lends it to many inconsistencies in spelling, phonology, syntax, and morphology. This research examines the irregularities prevalent in the structure and meaning of some ‘-er’ lexical items in English and their implications for vocabulary acquisition. It centers its investigation on the derivational suffix ‘-er’, which changes the grammatical category of a word. English poses many challenges to second language learners because of its irregularities, exceptions, and rules. One of the meanings of the -er derivational suffix is ‘someone who does something’. This rule often confuses learners when they meet the exceptions in normal discourse. The need to investigate instances of such inconsistencies in the formation of -er words and the meanings given to such words by students motivated this study. For this purpose, some senior secondary two (SS2) students in six randomly selected schools in the metropolis were given a large number of alphabetically selected words ending in the ‘-er’ suffix. The researcher opted for a test technique, which required them to provide the meanings of the selected words with -er. The test was scored on a one-zero scale, where a correct formation and meaning of an -er word scored one, while a wrong formation and meaning scored zero. The numbers of wrong and correct formations and meanings of -er words were calculated as percentages. The results of this research show that a large number of students made wrong generalizations about the meanings of the selected -er ending words. This shows how enormous the inconsistencies in English are and how they affect the learning of the language. Findings from the study revealed that though students mastered the basic morphological rules, errors were generally committed on those vocabulary items that are not frequently in use.
The study arrives at this conclusion from a survey of the students' textbooks and spoken activities. Therefore, the researcher recommends an effective reappraisal of language teaching through implementation of the designed curriculum to reflect modern strategies of language teaching; identification and incorporation of the exceptions into rigorous communicative activities in language teaching, language course books, and tutorials; and training and retraining of teachers in strategies that conform to the new pedagogy.

Keywords: ESL (English as a second language), derivational morpheme, inflectional morpheme, suffixes

Procedia PDF Downloads 355
303 Protecting the Health of Astronauts: Enhancing Occupational Health Monitoring and Surveillance for Former NASA Astronauts to Understand Long-Term Outcomes of Spaceflight-Related Exposures

Authors: Meredith Rossi, Lesley Lee, Mary Wear, Mary Van Baalen, Bradley Rhodes

Abstract:

The astronaut community is unique, and may be disproportionately exposed to occupational hazards not commonly seen in other communities. The extent to which the demands of the astronaut occupation and exposure to spaceflight-related hazards affect the health of the astronaut population over the life course is not completely known. A better understanding of the individual, population, and mission impacts of astronaut occupational exposures is critical to providing clinical care, targeting occupational surveillance efforts, and planning for future space exploration. The ability to characterize the risk of latent health conditions is a significant component of this understanding. Provision of health screening services to active and former astronauts ensures individual, mission, and community health and safety. Currently, the NASA-Johnson Space Center (JSC) Flight Medicine Clinic (FMC) provides extensive medical monitoring to active astronauts throughout their careers. Upon retirement, astronauts may voluntarily return to the JSC FMC for an annual preventive exam. However, current retiree monitoring includes only selected screening tests, representing an opportunity for augmentation. The potential long-term health effects of spaceflight demand an expanded framework of testing for former astronauts. The need is two-fold: screening tests widely recommended for other aging populations are necessary to rule out conditions resulting from the natural aging process (e.g., colonoscopy, mammography); and expanded monitoring will increase NASA’s ability to better characterize conditions resulting from astronaut occupational exposures. To meet this need, NASA has begun an extensive exploration of the overall approach, cost, and policy implications of expanding the medical monitoring of former NASA astronauts under the Astronaut Occupational Health program. 
Increasing the breadth of monitoring services will ultimately enrich the existing evidence base of occupational health risks to astronauts. Such an expansion would therefore improve the understanding of the health of the astronaut population as a whole, and the ability to identify, mitigate, and manage such risks in preparation for deep space exploration missions.

Keywords: astronaut, long-term health, NASA, occupational health, surveillance

Procedia PDF Downloads 512
302 The Real Consignee: An Exploratory Study of the True Party Who Is Entitled to Receive Cargo under a Bill of Lading

Authors: Mojtaba Eshraghi Arani

Abstract:

According to the international conventions for the carriage of goods by sea, the consignee is the person who is entitled to take delivery of the cargo from the carrier. Such a person is usually named in the relevant box of the bill of lading (BL) unless the latter is issued “To Order” or “To Bearer”. However, there are some cases in which the apparent consignee, as above, is not intended to take delivery of the cargo, such as the L/C issuing bank or the freight forwarder, which are named as consignee only for the purpose of security or acceleration of the transit process. In such cases, as well as in a BL issued “To Order”, the so-called “real consignee” can be found in the “Notify Party” box. The dispute revolves around the choice between the apparent consignee and the real consignee as the party entitled not only to take delivery of the cargo but also to sue the carrier for any damage or loss. While it is a generally accepted rule that only the apparent consignee shall be vested with such rights, some courts, like France’s Cour de Cassation, have declared that the “Notify Party”, as the real consignee, is entitled to sue the carrier; in some cases, the same court went further and permitted the real consignee to bring suit even where he was not mentioned on the BL as a “Notify Party”. The main argument behind such reasoning is that the real consignee is the person who suffered the loss and thus has a legitimate interest in bringing action; of course, the real consignee must prove that he incurred a loss. It is undeniable that the above-mentioned approach is contrary to the position of the international conventions on the express definition of the consignee. However, international practice has permitted the use of the BL in different ways to meet the business requirements of banks, freight forwarders, etc. Thus, the issue is one of striking a balance between the international conventions on the one hand and existing practices on the other.
While the latest convention applicable to sea transportation, i.e., the Rotterdam Rules, dealt with the comparable issue of “shipper” and “documentary shipper”, it failed to cope with the matter discussed here. A new study is therefore required to propose the best solution for amending the current conventions for the carriage of goods by sea. A qualitative method based on the interpretation of the collected data has been used in this article. The sources of data are domestic and international regulations and cases. It is argued in this manuscript that the judge is not allowed to recognize anyone as the real consignee other than the person who is mentioned in the “Consignee” box, unless the BL is issued “To Order” or “To Bearer”. Moreover, the contract of carriage is independent of the sale contract, and thus the consignee must be determined solely on the basis of the BL itself, like the “Notify Party”, and not any other contract or document.

Keywords: real consignee, cargo, delivery, to order, notify party

Procedia PDF Downloads 55
301 In-House Fatty Meal Cholescintigraphy as a Screening Tool in Patients Presenting with Dyspepsia

Authors: Avani Jain, S. Shelley, M. Indirani, Shilpa Kalal, Jaykanth Amalachandran

Abstract:

Aim: To evaluate the prevalence of gall bladder dysfunction in patients with dyspepsia using in-house fatty meal cholescintigraphy. Materials and Methods: This was a prospective cohort study. 59 healthy volunteers with no dyspeptic complaints and negative ultrasound and endoscopy were recruited into the study, and 61 patients with dyspeptic complaints of more than 6 months' duration were included. All of them underwent 99mTc-Mebrofenin fatty meal cholescintigraphy following a standard protocol. Dynamic images were acquired for 120 minutes, with an in-house fatty meal given at the 45th minute. Gall bladder emptying kinetics was determined from gall bladder ejection fractions (GBEF) calculated at 30, 45, and 60 minutes. The fatty meal was standardized on the volunteers. Receiver operating characteristic (ROC) analysis was used to assess the diagnostic accuracy of the three time points (30 min, 45 min, and 60 min) used for measuring gall bladder emptying. On the basis of cutoffs derived from the volunteers, the patients were assessed for gall bladder dysfunction. Results: In volunteers, the GBEF was 74.42±8.26% (mean±SD) at 30 min, 82.61±6.5% at 45 min, and 89.37±4.48% at 60 min, compared to patients, in whom it was 33.73±22.87% at 30 min, 43.03±26.97% at 45 min, and 51.85±29.60% at 60 min. The lower limit of GBEF in volunteers was 60% at 30 min, 69% at 45 min, and 81% at 60 min. ROC analysis showed that the area under the curve was largest for the 30 min GBEF (0.952; 95% CI = 0.914-0.989) and that all three measures were statistically significant (p < 0.005). The majority of the volunteers had 74% gall bladder emptying by 30 minutes; hence, 30 minutes was taken as the optimum cutoff time to assess gall bladder contraction. A GBEF > 60% at 30 min post fatty meal was considered normal, and a GBEF < 60% as indicative of gall bladder dysfunction.
In patients, various causes of dyspepsia were identified: gall bladder dysfunction (63.93%), peptic ulcer (8.19%), gastroesophageal reflux disease (8.19%), and gastritis (4.91%). In 18.03% of cases, gall bladder dysfunction coexisted with other gastrointestinal conditions. A diagnosis of functional dyspepsia was made in 14.75% of cases. Conclusions: Gall bladder dysfunction contributes significantly to the causation of dyspepsia and can coexist with various other gastrointestinal diseases. The fatty meal was well tolerated and devoid of any side effects. Many patients who are labeled as functional dyspeptics could actually have gall bladder dysfunction. Hence, as an adjunct to ultrasound and endoscopy, fatty meal cholescintigraphy can be used as a screening modality in the characterization of dyspepsia.
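The GBEF figures above follow the standard cholescintigraphy convention of expressing emptying as the fractional drop in gallbladder counts from the pre-meal maximum. The sketch below illustrates that computation and the study's 30-minute cutoff; the function name, variable names, and the count values are illustrative, not taken from the paper.

```python
# Sketch of the gallbladder ejection fraction (GBEF) computation:
# GBEF(t) = (C_max - C_t) / C_max * 100, using background-corrected
# gallbladder ROI counts. Values below are hypothetical.

def gbef(counts_max: float, counts_t: float) -> float:
    """Percent gallbladder emptying at time t after the fatty meal."""
    if counts_max <= 0:
        raise ValueError("baseline counts must be positive")
    return (counts_max - counts_t) / counts_max * 100.0

# Example: baseline 12000 counts, 3600 counts 30 min after the meal
ef30 = gbef(12000, 3600)
print(round(ef30, 1))   # 70.0
assert ef30 > 60        # normal under the study's 30-min, 60% cutoff
```

A patient whose 30-minute GBEF fell below the 60% lower limit derived from the volunteer group would instead be flagged for gallbladder dysfunction.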

Keywords: in-house fatty meal, cholescintigraphy, dyspepsia, gall bladder ejection fraction, functional dyspepsia

Procedia PDF Downloads 488
300 Competitivity in Procurement Multi-Unit Discrete Clock Auctions: An Experimental Investigation

Authors: Despina Yiakoumi, Agathe Rouaix

Abstract:

Laboratory experiments were run to investigate the impact of different design characteristics of the auctions that have been implemented to procure capacity in the UK's reformed electricity markets. The experiment studies competition among bidders in procurement multi-unit discrete descending clock auctions under different feedback policies and pricing rules. Theory indicates that the feedback policy, in combination with the two common pricing rules, last-accepted bid (LAB) and first-rejected bid (FRB), could significantly affect the auction outcome. Two information feedback policies regarding the bidding prices of the participants are considered: with feedback and without feedback. With feedback, after each round participants are informed of the number of items still in the auction; without feedback, participants have no information about the aggregate supply after each round. Under LAB, winning bidders receive the amount of the highest successful bid, and under FRB, winning bidders receive the lowest unsuccessful bid. Based on the theoretical predictions of the alternative auction designs, three treatments were run: the first considers LAB with feedback; the second studies LAB without feedback; the third investigates FRB without feedback. Theoretical predictions of the game showed that under FRB, the alternative feedback policies do not change the auction outcome. Preliminary results indicate that LAB with feedback and FRB without feedback achieve on average higher clearing prices than the LAB treatment without feedback. However, the clearing prices under LAB with feedback and FRB without feedback are on average lower than the theoretical predictions. Although under LAB without feedback theory predicts that the clearing price will drop to the competitive equilibrium, experimental results indicate that participants could still engage in cooperative behavior and drive up the price of the auction.
It is shown, both theoretically and experimentally, that the pricing rules and the feedback policy affect the bidding competitiveness of the auction by providing opportunities for participants to engage in cooperative behavior and exercise market power. LAB without feedback seems to be less vulnerable to market power opportunities than the alternative auction designs. This could be an argument for the use of the LAB pricing rule in combination with limited feedback in the UK capacity market, in an attempt to improve affordability for consumers.
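The LAB/FRB distinction above can be made concrete with a small sketch. This is not the authors' experimental software; it treats the descending clock as equivalent to collecting each supplier's exit price and accepting the K cheapest offers, with the pricing rule then setting one uniform price. The bid values are invented.

```python
# Illustrative sketch of the two uniform pricing rules in a multi-unit
# procurement auction. "Exit bids" are the prices at which suppliers drop
# out of the descending clock; the k cheapest offers are accepted.

def clearing_price(exit_bids, k, rule):
    """Return (winning bids, uniform price) for k procured units.

    rule = "LAB": winners are paid the last (highest) accepted bid.
    rule = "FRB": winners are paid the first (lowest) rejected bid.
    """
    ordered = sorted(exit_bids)
    accepted = ordered[:k]
    if rule == "LAB":
        price = accepted[-1]
    elif rule == "FRB":
        if k >= len(ordered):
            raise ValueError("FRB needs at least one rejected bid")
        price = ordered[k]
    else:
        raise ValueError("unknown pricing rule")
    return accepted, price

bids = [40, 55, 70, 90, 120]           # hypothetical suppliers' exit prices
print(clearing_price(bids, 3, "LAB"))  # ([40, 55, 70], 70)
print(clearing_price(bids, 3, "FRB"))  # ([40, 55, 70], 90)
```

The same accepted set is paid 70 under LAB but 90 under FRB here, which is why the pricing rule changes bidders' incentives to shade their exit prices.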

Keywords: descending clock auctions, experiments, feedback policy, market design, multi-unit auctions, pricing rules, procurement auctions

Procedia PDF Downloads 280
299 Properties of Magnesium-Based Hydrogen Storage Alloy Added with Palladium and Titanium Hydride

Authors: Jun Ying Lin, Tzu Hsiang Yen, Cha'o Kuang Chen

Abstract:

Nowadays, hydrogen storage alloys, which store hydrogen by physical and chemical absorption, are widely believed to have great potential. However, hydrogen storage alloys are limited by high operation temperatures. Researchers have found that adding transition elements can improve the properties of hydrogen storage alloys. In this research, outstanding improvements in kinetic and thermal properties are achieved by the addition of palladium and titanium hydride to a magnesium-based hydrogen storage alloy. Magnesium-based alloy is the main material, into which TiH2 and Pd are added separately. The materials are then milled in a planetary ball mill at 650 rpm. TGA/DSC and PCT measure the capacity, duration, and temperature of absorption/desorption. Additionally, SEM and XRD analyze the structures and components of the materials. It is clearly shown that Pd is beneficial to the kinetic properties. 2MgH2-0.1Pd has the highest capacity of all the alloys listed, approximately 5.5 wt%. Secondly, no new Ti-related compounds are found by XRD analysis. Thus TiH2, considered as a catalyst, allows 2MgH2-TiH2 and 2MgH2-TiH2-0.1Pd to absorb hydrogen efficiently at low temperatures. 2MgH2-TiH2 can reach roughly 3.0 wt% in 82.4 minutes at 50°C and 8 minutes at 100°C, while 2MgH2-TiH2-0.1Pd can reach 2.0 wt% in 400 minutes at 50°C and in 48 minutes at 100°C. The lowest operating temperatures of 2MgH2-0.1Pd and 2MgH2-TiH2 are similar (320°C), whereas that of 2MgH2-TiH2-0.1Pd is 20°C lower. From XRD, it can be observed that PdTi2 and Pd3Ti are produced by mechanical alloying when Pd and TiH2 are added to MgH2. Due to the synergistic effects between Pd and TiH2, 2MgH2-TiH2-0.1Pd has the lowest dehydrogenation temperature. Furthermore, the pressure-composition-temperature (PCT) curves of 2MgH2-TiH2-0.1Pd are measured at four temperatures: 370°C, 350°C, 320°C, and 300°C. The plateau pressure is obtained from each PCT curve.
From the different plateau pressures, the enthalpy and entropy in the Van’t Hoff equation can be solved. For 2MgH2-TiH2-0.1Pd, the enthalpy is 74.9 kJ/mol and the entropy is 122.9 J/(mol·K). Activation means that a hydrogen storage alloy undergoes repeated absorption/desorption processes, and it plays an important role in absorption/desorption: activation shortens the absorption/desorption time because of the increase in surface area. From SEM, it is clear that the grains become smaller and the surface rougher.
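The Van’t Hoff step described above is a linear fit: ln(P/P0) = -ΔH/(RT) + ΔS/R, so plotting ln(P) against 1/T gives ΔH from the slope and ΔS from the intercept. The sketch below uses synthetic plateau pressures generated from the reported ΔH = 74.9 kJ/mol and ΔS = 122.9 J/(mol·K) at the four measurement temperatures, not the paper's measured data, and recovers those values by least squares.

```python
# Van't Hoff analysis sketch: fit ln(P) = a*(1/T) + b, then
# dH = -a*R and dS = b*R. Pressures are synthetic, not measured.
import math

R = 8.314                        # J/(mol K)
dH, dS = 74.9e3, 122.9           # reported values for 2MgH2-TiH2-0.1Pd
temps_K = [t + 273.15 for t in (370, 350, 320, 300)]
ln_p = [-dH / (R * T) + dS / R for T in temps_K]   # ln(P/P0) at each T

# Ordinary least-squares line through (1/T, ln P)
x = [1.0 / T for T in temps_K]
n = len(x)
xbar, ybar = sum(x) / n, sum(ln_p) / n
a = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, ln_p)) / \
    sum((xi - xbar) ** 2 for xi in x)
b = ybar - a * xbar

print(round(-a * R / 1e3, 1))   # recovered enthalpy, kJ/mol -> 74.9
print(round(b * R, 1))          # recovered entropy, J/(mol K) -> 122.9
```

With real PCT data, the scatter of the four points about this line also indicates how well the plateau pressures obey Van’t Hoff behavior over the measured range.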

Keywords: hydrogen storage materials, magnesium hydride, absorption/desorption performance, plateau pressure

Procedia PDF Downloads 243
298 Effects of Nutrient Source and Drying Methods on Physical and Phytochemical Criteria of Pot Marigold (Calendula officinalis L.) Flowers

Authors: Leila Tabrizi, Farnaz Dezhaboun

Abstract:

In order to study the effects of plant nutrient source and different drying methods on the physical and phytochemical characteristics of pot marigold (Calendula officinalis L., Asteraceae) flowers, a factorial experiment was conducted based on a completely randomized design with three replications in the Research Laboratory of the University of Tehran in 2010. Different nutrient sources (vermicompost, municipal waste compost, cattle manure, mushroom compost, and control), which were applied in a field experiment for flower production, and different drying methods, including microwave (300, 600, and 900 W), oven (60, 70, and 80°C), and natural shade drying at room temperature, were tested. Criteria such as drying kinetics, antioxidant activity, total flavonoid content, total phenolic compounds, and total carotenoids of the flowers were evaluated. Results indicated that organic inputs as nutrient sources had no significant effect on the quality criteria of pot marigold except total flavonoid content, while drying methods significantly affected the phytochemical criteria. Microwave drying at 300, 600, and 900 W resulted in the highest total flavonoid content, total phenolic compounds, and antioxidant activity, respectively, while oven drying gave the lowest values of the phytochemical criteria. Also, the interaction of nutrient source and drying method significantly affected antioxidant activity: the highest antioxidant activity was obtained with the combination of vermicompost and microwave drying at 900 W, while vermicompost combined with oven drying at 60°C gave the lowest antioxidant activity. Based on the drying trends, microwave drying showed a faster drying rate than oven and natural shade drying; increasing the microwave power and oven temperature decreased the flower drying time, and the slope of the moisture reduction curve steepened accordingly.
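Drying-kinetics comparisons like the one above are commonly summarized with thin-layer models such as the Page model, MR(t) = exp(-k·tⁿ), where MR is the dimensionless moisture ratio. The sketch below is purely illustrative: the rate constants are hypothetical, chosen only to mimic "higher power dries faster", and are not fitted to the paper's data.

```python
# Page thin-layer drying model sketch: MR(t) = exp(-k * t**n).
# Rate constants below are hypothetical, not fitted to the study.
import math

def moisture_ratio(t_min, k, n=1.0):
    """Dimensionless moisture ratio after t_min minutes (Page model)."""
    return math.exp(-k * t_min ** n)

def time_to_ratio(target, k, n=1.0):
    """Minutes for the moisture ratio to fall to `target`."""
    return (-math.log(target) / k) ** (1.0 / n)

k_oven, k_microwave = 0.02, 0.15    # hypothetical rate constants, 1/min
t_oven = time_to_ratio(0.1, k_oven)
t_mw = time_to_ratio(0.1, k_microwave)
print(round(t_oven, 1), round(t_mw, 1))
assert t_mw < t_oven   # the faster method reaches the target ratio sooner
```

Fitting k (and n) to each treatment's measured moisture curve gives a compact way to compare drying rates across the microwave, oven, and shade treatments.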

Keywords: drying kinetic, medicinal plant, organic fertilizer, phytochemical criteria

Procedia PDF Downloads 324
297 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh

Authors: Zahid Khalil, Saad Ul Haque, Asif Khan

Abstract:

Decision making about identifying suitable sites for any project by considering different parameters is difficult. GIS and Multi-Criteria Analysis (MCA) can make it easier for such projects; this technology has proved to be an efficient and adequate means of acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. GIS software was used to create all the spatial parameters for the analysis. The derived parameters are slope, drainage density, rainfall, land use / land cover, soil groups, Curve Number (CN), and runoff index, with a spatial resolution of 30 m. The data used for deriving the above layers include 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centre of Environment Prediction (NCEP), and soil data from the World Harmonized Soil Data (WHSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, the drainage network, and watersheds are delineated by terrain processing of the DEM. The Soil Conservation Services (SCS) method is implemented to estimate the surface runoff from the rainfall. Prior to this, the SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), is adopted as the MCA for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in the GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes, namely best suitable, suitable, moderate, and less suitable. This study contributes to decision-making about suitable site analysis for small dams using geospatial data with a minimal amount of ground data.
These suitability maps can be helpful for water resource management organizations in determining feasible rainwater harvesting (RWH) structures.

Keywords: remote sensing, GIS, AHP, RWH

Procedia PDF Downloads 368
296 Physicochemical Properties of Pea Protein Isolate (PPI)-Starch and Soy Protein Isolate (SPI)-Starch Nanocomplexes Treated by Ultrasound at Different pH Values

Authors: Gulcin Yildiz, Hao Feng

Abstract:

Soybean proteins are the most widely used and researched proteins in the food industry. Due to soy allergies among consumers, however, alternative legume proteins having similar functional properties have been studied in recent years. These alternative proteins are also expected to have a price advantage over soy proteins. One such protein that has shown good potential for food applications is pea protein. Besides the favorable functional properties of pea protein, it also contains fewer anti-nutritional substances than soy protein. However, a comparison of the physicochemical properties of pea protein isolate (PPI)-starch nanocomplexes and soy protein isolate (SPI)-starch nanocomplexes treated by ultrasound has not been well documented. This study was undertaken to investigate the effects of ultrasound treatment on the physicochemical properties of PPI-starch and SPI-starch nanocomplexes. Pea protein isolate (85% pea protein) provided by Roquette (Geneva, IL, USA) and soy protein isolate (SPI, Pro-Fam® 955) obtained from the Archer Daniels Midland Company were adjusted to different pH levels (2-12) and treated with 5 minutes of ultrasonication (100% amplitude) to form complexes with starch. The soluble protein content was determined by the Bradford method using BSA as the standard. The turbidity of the samples was measured using a spectrophotometer (Lambda 1050 UV/VIS/NIR Spectrometer, PerkinElmer, Waltham, MA, USA). The volume-weighted mean diameters (D4, 3) of the soluble proteins were determined by dynamic light scattering (DLS). The emulsifying properties of the proteins were evaluated by the emulsion stability index (ESI) and emulsion activity index (EAI). Both the soy and pea protein isolates showed a U-shaped solubility curve as a function of pH, with a high solubility above the isoelectric point and a low one below it. Increasing the pH from 2 to 12 resulted in increased solubility for both the SPI and PPI-starch complexes. 
The pea nanocomplexes showed greater solubility than the soy ones. The SPI-starch nanocomplexes showed better emulsifying properties, as determined by the emulsion stability index (ESI) and emulsion activity index (EAI), due to SPI's high solubility and high protein content. The PPI had similar or better emulsifying properties at certain pH values than the SPI. The ultrasound treatment significantly decreased the particle sizes of both kinds of nanocomplexes. For all pH levels and both proteins, the droplet sizes were found to be lower than 300 nm. The present study clearly demonstrated that applying ultrasonication under different pH conditions significantly improved the solubility and emulsifying properties of the SPI and PPI. The PPI exhibited better solubility and emulsifying properties than the SPI at certain pH levels.
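The EAI and ESI mentioned above are commonly computed with the Pearce–Kinsella turbidimetric formulas; the abstract does not give its exact equations, so the conventions, constants, and input values below are assumptions for illustration only.

```python
# Pearce-Kinsella emulsion indices (one common convention; exact constants
# vary across the literature). All input values below are hypothetical.

def eai(a0, dilution, protein_conc_g_ml, oil_fraction=0.25):
    """Emulsifying activity index, m^2 per g protein.

    a0: absorbance at 500 nm at time 0; dilution: dilution factor;
    oil_fraction: oil volume fraction of the emulsion.
    """
    return (2 * 2.303 * a0 * dilution) / (
        protein_conc_g_ml * (1 - oil_fraction) * 10_000)

def esi(a0, a_t, dt_min=10.0):
    """Emulsion stability index in minutes, from the absorbance decay."""
    return a0 * dt_min / (a0 - a_t)

print(round(eai(0.50, 100, 0.001), 1))  # m^2/g for A0=0.50, 100x dilution
print(round(esi(0.50, 0.40), 1))        # min, absorbance drop 0.50 -> 0.40
```

A larger EAI indicates more interfacial area stabilized per gram of protein, while a larger ESI indicates a slower decay of turbidity, i.e. a more stable emulsion.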

Keywords: emulsifying properties, pea protein isolate, soy protein isolate, ultrasonication

Procedia PDF Downloads 294
295 Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

Authors: Rajinder Singh, Ram Valluru

Abstract:

The Chain Ladder (CL) method, the Expected Loss Ratio (ELR) method, and the Bornhuetter-Ferguson (BF) method, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the alternative approaches revolve around: stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method results in volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years that have more development experience. Further, BF is based on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, like the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach of parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period).
The methodology was tested on an actual MI claim development dataset where various cohorts followed a sigmoidal trend, but levels varied substantially depending upon the economic and operational conditions during the development period spanning over many years. The proposed approach provides the ability to indirectly incorporate such exogenous factors and produce more stable loss forecasts for reserving purposes as compared to the traditional CL and BF methods.
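The parametric idea above can be sketched as fitting a logistic curve loss(t) = U / (1 + exp(-k(t - m))) to a cohort's cumulative development and reading the fitted asymptote U as the ultimate-loss estimate. The data, parameter ranges, and fitting scheme below are synthetic and purely illustrative of the mechanics, not the authors' actual model.

```python
# Logistic loss-development sketch: grid-search the shape parameters (k, m)
# and solve the scale U in closed form for each pair by least squares.
import math

def logistic(t, k, m):
    return 1.0 / (1.0 + math.exp(-k * (t - m)))

def fit_ultimate(dev, losses):
    """Return the fitted ultimate loss U for one cohort."""
    best = None
    for k in [x / 100 for x in range(5, 105, 5)]:
        for m in range(1, 25):
            f = [logistic(t, k, m) for t in dev]
            u = sum(y * fi for y, fi in zip(losses, f)) / sum(fi * fi for fi in f)
            sse = sum((y - u * fi) ** 2 for y, fi in zip(losses, f))
            if best is None or sse < best[0]:
                best = (sse, u)
    return best[1]

dev = list(range(1, 13))                   # development periods to date
true_u, true_k, true_m = 1000.0, 0.6, 6.0  # synthetic cohort parameters
losses = [true_u * logistic(t, true_k, true_m) for t in dev]
print(round(fit_ultimate(dev, losses)))    # recovers ~1000
```

In practice each homogeneous group (accident year or delinquency period) would get its own fit, with the shape parameters potentially regressed on economic covariates to capture the exogenous effects the traditional methods cannot model.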

Keywords: actuarial loss reserving techniques, logistic regression, parametric function, volatility

Procedia PDF Downloads 108
294 Validation of the Arabic Version of the Positive and Negative Syndrome Scale (PANSS)

Authors: Arij Yehya, Suhaila Ghuloum, Abdlmoneim Abdulhakam, Azza Al-Mujalli, Mark Opler, Samer Hammoudeh, Yahya Hani, Sundus Mari, Reem Elsherbiny, Ziyad Mahfoud, Hassen Al-Amin

Abstract:

Introduction: The Positive and Negative Syndrome Scale (PANSS) is a valid instrument developed by Kay and colleagues to assess symptoms of patients with schizophrenia. It consists of 30 items that factor the symptoms into three subscales: positive, negative, and general psychopathology. This scale has been translated and validated in several languages. Objective: This study aims to determine the validity and psychometric properties of the Arabic version of the PANSS. Methods: A standardized translation and cultural adaptation method was adopted. Patients diagnosed with schizophrenia (n=98), according to a psychiatrist's diagnosis based on DSM-IV criteria, were recruited from the Psychiatry Department at Rumailah Hospital, Qatar. A first rater confirmed the diagnosis using the Arabic version of the Mini International Neuropsychiatric Interview (MINI 6). A second, independent rater administered the Arabic version of the PANSS. Also, a control group (n=101) with no history of psychiatric disorder was recruited from the family and friends of the patients and from primary health care centers in Qatar. Results: There were more males than females in our sample of patients with schizophrenia (68.9% and 31.6%, respectively). On the other hand, in the control group the number of females outweighed that of males (58.4% and 41.6%, respectively). The scale had good internal consistency, with a Cronbach's alpha of 0.91. There was a significant difference between the scores on the three subscales of the PANSS. Patients with schizophrenia scored significantly higher (p<.0001) than the control subjects on the subscales for positive symptoms, 20.01 (SD=7.21) vs. 7.30 (SD=1.38); negative symptoms, 18.89 (SD=8.88) vs. 7.37 (SD=2.38); and general psychopathology, 34.41 (SD=11.56) vs. 16.93 (SD=3.93), respectively. Factor analysis and ROC curve analysis were carried out to further test the psychometrics of the scale.
Conclusions: The Arabic version of the PANSS is a reliable and valid tool to assess both positive and negative symptoms of patients with schizophrenia in a balanced manner. In addition to providing the Arab population with a standardized tool to monitor symptoms of schizophrenia, this version provides a gateway for comparing the prevalence of positive and negative symptoms in the Arab world with findings reported elsewhere.
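The internal-consistency figure reported above (Cronbach's alpha = 0.91) comes from the standard formula alpha = k/(k-1) · (1 - Σ item variances / variance of totals). The sketch below computes it for a tiny made-up response matrix, not the study's data.

```python
# Cronbach's alpha sketch. `items` holds one score list per scale item,
# with one entry per respondent; sample variances use n-1.

def cronbach_alpha(items):
    k = len(items)
    n = len(items[0])

    def var(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

scores = [          # 3 items rated by 4 respondents (hypothetical)
    [3, 4, 3, 5],
    [2, 4, 3, 4],
    [3, 5, 4, 5],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))   # high, since the items move together
assert 0 <= alpha <= 1
```

Values around 0.9, as reported for the Arabic PANSS, indicate that the 30 items measure their constructs consistently across respondents.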

Keywords: Arabic version, assessment, diagnosis, schizophrenia, validation

Procedia PDF Downloads 617
293 Influence of Bottom Ash on the Geotechnical Parameters of Clayey Soil

Authors: Tanios Saliba, Jad Wakim, Elie Awwad

Abstract:

Clayey soils exhibit undesirable problems in civil engineering project: poor bearing soil capacity, shrinkage, cracking, …etc. On the other hand, the increasing production of bottom ash and its disposal in an eco-friendly manner is a matter of concern. Soil stabilization using bottom ash is a new technic in the geo-environmental engineering. It can be used wherever a soft clayey soil is encountered in foundations or road subgrade, instead of using old technics such as cement-soil mixing. This new technology can be used for road embankments and clayey foundations platform (shallow or deep foundations) instead of replacing bad soil or using old technics which aren’t eco-friendly. Moreover, applying this new technic in our geotechnical engineering projects can reduce the disposal of the bottom ash problem which is getting bigger day after day. The research consists of mixing clayey soil with different percentages of bottom ash at different values of water content, and evaluates the mechanical properties of every mix: the percentages of bottom ash are 10% 20% 30% 40% and 50% with values of water content of 25% 35% and 45% of the mix’s weight. Before testing the different mixes, clayey soil’s properties were determined: Atterbeg limits, soil’s cohesion and friction angle and particle size distribution. In order to evaluate the mechanical properties and behavior of every mix, different tests are conducted: -Direct shear test in order to determine the cohesion and internal friction angle of every mix. -Unconfined compressive strength (stress strain curve) to determine mix’s elastic modulus and compressive strength. Soil samples are prepared in accordance with the ASTM standards, and tested at different times, in order to be able to emphasize the influence of the curing period on the variation of the mix’s mechanical properties and characteristics. 
As of today, the results obtained are very promising: the mix's cohesion and friction angle vary as a function of the bottom ash percentage, water content and curing period. The cohesion increases enormously before decreasing over a long curing period (values of the mix's cohesion remain larger than the intact soil's cohesion), while the internal friction angle keeps increasing even at a curing period of 28 days (the tests' longest curing period), which gives a better soil behavior: fewer cracks and better soil bearing capacity.
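The direct shear results described above reduce to fitting the Mohr-Coulomb envelope tau = c + sigma_n * tan(phi) through the (normal stress, peak shear stress) pairs of each mix. A minimal sketch of that fit; the stress values below are hypothetical, not the study's data:

```python
import math

def mohr_coulomb_fit(sigma_n, tau):
    """Least-squares fit of tau = c + sigma_n * tan(phi) to direct shear data."""
    n = len(sigma_n)
    mean_s = sum(sigma_n) / n
    mean_t = sum(tau) / n
    slope = sum((s - mean_s) * (t - mean_t) for s, t in zip(sigma_n, tau)) / \
            sum((s - mean_s) ** 2 for s in sigma_n)
    c = mean_t - slope * mean_s            # cohesion (kPa)
    phi = math.degrees(math.atan(slope))   # internal friction angle (deg)
    return c, phi

# hypothetical direct shear results: normal stress (kPa) -> peak shear stress (kPa)
sigma_n = [50.0, 100.0, 200.0]
tau = [45.0, 72.0, 126.0]
c, phi = mohr_coulomb_fit(sigma_n, tau)
```

Repeating the fit for each bottom ash percentage and curing time gives the cohesion and friction angle trends the abstract reports.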

Keywords: bottom ash, clayey soil, mechanical properties, tests

Procedia PDF Downloads 162
292 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls

Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin

Abstract:

The objective of this paper is to investigate the effects of wall thickness and of axial loading during a fire test on the load-bearing capacity of fire-damaged normal-strength concrete walls. These two factors govern the temperature distributions in concrete members and are mainly obtained through numerous experiments. Toward this goal, three wall specimens of different thicknesses are heated for 2 h according to the ISO standard heating curve, and the temperature distributions through the thicknesses are measured using thermocouples. In addition, two wall specimens are heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on both the wall thickness and the axial load. After the fire tests, the specimens are cured for one month and then subjected to loading tests. The heated specimens are compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls show only a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference becomes evident with respect to the wall thickness. To validate the experimental results, finite element models are generated in which the material properties obtained from the experiments are subjected to elevated temperatures, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, is then applied to model fire-damaged walls 2,800 mm high, the typical story height of residential buildings in Korea, considering the buckling effect. The models for the structural analyses are generated with the deformed shape obtained after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not significantly depend on the wall thickness, owing to the restraint provided by the pinned ends. 
The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load during the fire is within approximately 5%.

Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis

Procedia PDF Downloads 204
291 Heuristic Approaches for Injury Reductions by Reduced Car Use in Urban Areas

Authors: Stig H. Jørgensen, Trond Nordfjærn, Øyvind Teige Hedenstrøm, Torbjørn Rundmo

Abstract:

The aim of the paper is to estimate and forecast road traffic injuries over the coming 10-15 years, given new targets in urban transport policy and shifts in mode of transport, including the injury cross-effects of mode changes. The paper discusses possibilities and limitations in measuring and quantifying possible injury reductions. Injury data (killed and seriously injured road users) from six urban areas in Norway from 1998 to 2012 (N=4709 casualties) form the basis for estimates of changing injury patterns. For the coming period, calculations of the number of injuries and injury rates by type of road user (motorized versus non-motorized categories), sex, age and type of road are made. A forecast increase of 25% in the total population of the six urban areas by 2025 will curb the continued fall in injury figures. However, policy strategies and measures geared towards a stronger modal shift from private vehicles to safer public transport (bus, train) will modify this effect. On the other hand, door-to-door transport will imply higher exposure for pedestrians and cyclists converting from private vehicle use (pedestrians on their way to and from public transport nodes, including fall accidents not registered as traffic accidents). The overall effect is the sum of these modal shifts in the growing urban population; in addition, the diminishing returns of the majority of road safety countermeasures have to be taken into account. The paper demonstrates how uncertainties in the various estimates (prediction factors) of increasing and decreasing injury figures may partly offset each other. The paper discusses the road safety policy and welfare consequences of the transport mode shift, including reduced use of private vehicles, and further environmental impacts. In this regard, safety and environmental issues will as a rule concur. However, pursuing environmental goals (e.g. improved air quality, reduced CO2 emissions) by encouraging more biking may generate more biking injuries. The study was given financial grants from the Norwegian Research Council's Transport Safety Program.

Keywords: road injuries, forecasting, reduced private car use, urban, Norway

Procedia PDF Downloads 218
290 Rainwater Harvesting and Management of Ground Water (Case Study Weather Modification Project in Iran)

Authors: Samaneh Poormohammadi, Farid Golkar, Vahideh Khatibi Sarabi

Abstract:

Climate change and consecutive droughts have increased the importance of rainwater harvesting methods. One such method, and in other words a form of atmospheric water resource management, is the use of weather modification technologies. Weather modification (also known as weather control) is the act of intentionally manipulating or altering the weather. The most common form of weather modification is cloud seeding, which increases rain or snow, usually for the purpose of increasing the local water supply. Cloud seeding operations have been carried out in central Iran since 1999 with the aim of harvesting rainwater and reducing the effects of drought. In this research, we analyze the results of cloud seeding operations in the Simindasht plain in northern Iran. Rainwater harvesting with the help of cloud seeding technology has been evaluated through its effects on surface water and underground water. For this purpose, two different methods have been used to estimate runoff. The first is the US Soil Conservation Service (SCS) curve number method; the second is the rational method. In order to determine the infiltration rate of underground water, the balance reports of the country's comprehensive water plan have been used. In this regard, the study areas located in the target area of each province have been extracted by drawing maps of the infiltration coefficients of each area in GIS software. Then, based on the area of each study area, the weighted average of the infiltration coefficients of the study areas located in the target area of each province is taken as the infiltration coefficient of that province. 
Results show that the amount of water extracted from rainfall with the help of the cloud seeding projects in Simindasht is as follows: an increase in runoff of 63.9 million cubic meters (SCS method) or 51.2 million cubic meters (rational method), and an increase in groundwater resources of 40.5 million cubic meters.
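The SCS curve number step mentioned above follows the standard runoff equation Q = (P - Ia)^2 / (P - Ia + S), with potential maximum retention S = 25400/CN - 254 (metric form, mm) and initial abstraction Ia = 0.2S. A minimal sketch, using a hypothetical storm depth and curve number rather than the project's values:

```python
def scs_runoff_mm(p_mm, cn):
    """SCS Curve Number direct runoff (mm) for a storm of depth p_mm.

    S is the potential maximum retention; Ia = 0.2*S is the standard
    initial abstraction. Runoff is zero until rainfall exceeds Ia.
    """
    s = 25400.0 / cn - 254.0   # retention, mm (metric form)
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# hypothetical event: 40 mm of rain on a watershed with CN = 75
q = scs_runoff_mm(40.0, 75)
```

Summing such per-event runoff depths over the seeded area, and multiplying by the infiltration coefficients, yields volume estimates of the kind reported above.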

Keywords: rainwater harvesting, ground water, atmospheric water resources, weather modification, cloud seeding

Procedia PDF Downloads 91
289 Necessity of Recognition of Same-Sex Marriages and Civil Partnerships Concluded Abroad from Civil Status Registry Point of View

Authors: Ewa Kamarad

Abstract:

Recent problems with adopting the EU Regulation on matrimonial property regimes have clearly proven that Member States are unable to agree on the scope of the Regulation and, therefore, on the definitions of matrimonial property and marriage itself. Taking into account that the Regulation on the law applicable to divorce and legal separation, as well as the Regulation on matrimonial property regimes, were adopted in the framework of enhanced cooperation, it is evident that lack of a unified definition of marriage has very wide-ranging consequences. The main problem with the unified definition of marriage is that the EU is not entitled to adopt measures in the domain of material family law, as this area remains under the exclusive competence of the Member States. Because of that, the legislation on marriage in domestic legal orders of the various Member States is very different. These differences concern not only issues such as form of marriage or capacity to enter into marriage, but also the most basic matter, namely the core of the institution of marriage itself. Within the 28 Member States, we have those that allow both different-sex and same-sex marriages, those that have adopted special, separate institutions for same-sex couples, and those that allow only marriage between a man and a woman (e.g. Hungary, Latvia, Lithuania, Poland, Slovakia). Because of the freedom of movement within the European Union, it seems necessary to somehow recognize the civil effects of a marriage that was concluded in another Member State. The most crucial issue is how far that recognition should go. The thesis presented in the presentation is that, at an absolute minimum, the authorities of all Member States must recognize the civil status of the persons who enter into marriage in another Member State. Lack of such recognition might cause serious problems, both for the spouses and for other individuals. 
The authorities of some Member States may treat the marriage as if it does not exist because it was concluded under a foreign law that defines marriage differently. Because of that, it is possible for a spouse to obtain a certificate of civil status stating that he or she is single and thus eligible to enter into marriage, despite being legally married under the law of another Member State. Such a certificate can then be used in another country to serve as proof of civil status. Eventually, the lack of recognition can lead to so-called “international bigamy”. The biggest obstacle to recognition of marriages concluded under the law of another Member State that defines marriage differently is the impossibility of transcription of a foreign civil certificate in the case of such a marriage. That is caused by the rule requiring that a civil certificate issued (or transcribed) under one country's law can contain only records of legal institutions recognized by that country's legal order. The presentation is going to provide possible solutions to this problem.

Keywords: civil status, recognition of marriage, conflict of laws, private international law

Procedia PDF Downloads 217
288 Enhanced Kinetic Solubility Profile of Epiisopiloturine Solid Solution in Hipromellose Phthalate

Authors: Amanda C. Q. M. Vieira, Cybelly M. Melo, Camila B. M. Figueirêdo, Giovanna C. R. M. Schver, Salvana P. M. Costa, Magaly A. M. de Lyra, Ping I. Lee, José L. Soares-Sobrinho, Pedro J. Rolim-Neto, Mônica F. R. Soares

Abstract:

Epiisopiloturine (EPI) is a drug candidate that is extracted from Pilocarpus microphyllus and isolated from the waste of pilocarpine production. EPI has demonstrated promising schistosomicidal, leishmanicidal, anti-inflammatory and antinociceptive activities, according to in vitro studies that have been carried out since 2009. However, this molecule shows poor aqueous solubility, which represents a problem for the release of the drug candidate and its absorption by the organism. The purpose of the present study is to investigate the extent of enhancement of the kinetic solubility of a solid solution (SS) of EPI in hipromellose phthalate HP-55 (HPMCP), an enteric polymer carrier. The SS was obtained by the solvent evaporation method, using acetone/methanol (60:40) as the solvent system. Both EPI and the polymer (drug loading 10%) were dissolved in this solvent until a clear solution was obtained, then dried in an oven at 60 ºC for 12 hours, followed by drying in a vacuum oven for 4 h. The results show a considerable modification of the crystalline structure of the drug candidate. For instance, X-ray diffraction (XRD) shows a crystalline pattern for EPI, which becomes amorphous in the SS. Polarized light microscopy, a more sensitive technique than XRD, also shows the complete absence of crystals in the SS sample. Differential Scanning Calorimetry (DSC) curves show no EPI melting signal in the SS curve, indicating, once more, no presence of crystals in this system. Interactions between the drug candidate and the polymer were found by infrared microscopy, which shows a 43.3 cm-1 shift of the carbonyl band, indicating a moderate-to-strong interaction between them, probably one of the reasons for the SS formation. Under sink conditions (pH 6.8), the EPI SS had its dissolution performance increased 2.8-fold compared with the isolated drug candidate. 
EPI SS sample provided a release of more than 95% of the drug candidate in 15 min, whereas only 45% of EPI (alone) could be dissolved in 15 min and 70% in 90 min. Thus, HPMCP demonstrates to have a good potential to enhance the kinetic solubility profile of EPI. Future studies to evaluate the stability of SS are required to conclude the benefits of this system.

Keywords: epiisopiloturine, hipromellose phthalate HP-55, pharmaceutical technology, solubility

Procedia PDF Downloads 598
287 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of a building by using the collected data to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for anomaly detection in the power consumption pattern of Air Handling Units (AHU). There is ample research work on the use of GAM for the prediction of power consumption at the office-building and nation-wide level. However, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical AHU power consumption and cooling load data of a building, collected from Jan 2018 to Aug 2019 at an education campus in Singapore, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward-predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real-time to help determine the next course of action for the facilities manager. 
The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns and illustrate it with real-world use cases.
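The residual-threshold logic described above can be illustrated with a deliberately crude stand-in for a GAM: a single piecewise-constant smooth of power against one driver, with points flagged when their residual exceeds k standard deviations. The cooling-load and power series below are hypothetical, not the campus data, and a real deployment would use a proper GAM with spline smooths and prediction intervals:

```python
import statistics

def fit_binned_smoother(x, y, n_bins=4):
    """Crude stand-in for a GAM smooth term: piecewise-constant mean of y per x-bin."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for xi, yi in zip(x, y):
        b = min(int((xi - lo) / width), n_bins - 1)
        bins.setdefault(b, []).append(yi)
    means = {b: sum(v) / len(v) for b, v in bins.items()}
    overall = sum(y) / len(y)
    def predict(xi):
        b = min(max(int((xi - lo) / width), 0), n_bins - 1)
        return means.get(b, overall)
    return predict

def flag_anomalies(x, y, k=3.0):
    """Flag points whose residual from the smoother exceeds k residual st. devs."""
    predict = fit_binned_smoother(x, y)
    resid = [yi - predict(xi) for xi, yi in zip(x, y)]
    sd = statistics.pstdev(resid)
    return [abs(r) > k * sd for r in resid]

# hypothetical hourly series: cooling load (kW) vs AHU power (kW), one spike at index 5
load = list(range(1, 13))
power = [11, 12, 13, 14, 15, 40, 17, 18, 19, 20, 21, 22]
flags = flag_anomalies(load, power, k=2.0)
```

The flagged points would then feed the rule-based conditions and the expert feedback loop described in the abstract.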

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 133
286 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation

Authors: Constantin Z. Leshan

Abstract:

Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and allow teleportation of matter. All massive bodies emit a flux of holes which curves the spacetime; increasing the concentration of holes leads to length contraction and time dilation, because the holes do not have the properties of extension and duration. In the limiting case when space consists of holes only, the distance between every two points is equal to zero and time stops - outside of the Universe, the extension and duration properties do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties only. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), which is the cause of their quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls': simple mechanical motion is impossible at small-scale distances, and it is impossible to 'trace' a straight line in the discontinuous spacetime because it contains the impassable holes. Spacetime 'boils' continually due to the appearance of the vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation). 
It is shown that Hole Teleportation does not violate causality or special relativity, due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship may appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from the vessel without permitting another body to occupy its volume.

Keywords: border of the Universe, causality violation, perfect isolation, quantum jumps

Procedia PDF Downloads 404
285 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food

Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite

Abstract:

The most complicated step in the determination of volatile compounds in complex matrices is the separation of analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity for the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample can't be heated above the solvent's boiling point. In 2018, it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than any of their individual components. These features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied in microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. 
Eight hydrophilic and hydrophobic deep eutectic solvents have been synthesized, and the influence of the temperature and microwaves on their headspace gas chromatographic behaviour has been investigated. Using the most suitable DES, microwave assisted extraction conditions and headspace gas chromatographic conditions have been optimized for the determination of hexanal in potato chips. Under optimized conditions, the quality parameters of the prepared technique have been determined. The suggested technique was applied for the determination of hexanal in potato chips and other fat-rich food.

Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction

Procedia PDF Downloads 172
284 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Predictive quality exploits the great potential for saving quality-control effort through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. The training of prediction models requires the highest possible generalisability, which this limited variance makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question therefore arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. 
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and classification for inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
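The business-understanding question above, regression versus classification for a pass/fail quality decision, can be sketched on toy data. All values below are hypothetical, and the predictor is a plain 1-nearest-neighbour stand-in rather than the models used in the study; the point is only that the same labelled data supports both framings, regressing the leakage flow and applying the spec limit, or classifying pass/fail directly:

```python
def nn_predict(train_x, train_y, x):
    """1-nearest-neighbour prediction (works for numeric targets and class labels)."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

# hypothetical hydraulic test data: feature = pressure reading,
# target = leakage volume flow (mL/min); spec limit 5.0 defines pass/fail
train_x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
leak    = [2.1, 2.3, 2.2, 6.8, 7.1, 7.4]
labels  = [flow <= 5.0 for flow in leak]          # True = pass

test_x, test_leak = [1.5, 3.5, 5.5], [2.2, 2.4, 7.0]
test_labels = [flow <= 5.0 for flow in test_leak]

# framing A: regress the flow, then apply the spec limit
reg_decisions = [nn_predict(train_x, leak, x) <= 5.0 for x in test_x]
# framing B: classify pass/fail directly
cls_decisions = [nn_predict(train_x, labels, x) for x in test_x]

reg_acc = sum(a == b for a, b in zip(reg_decisions, test_labels)) / len(test_labels)
cls_acc = sum(a == b for a, b in zip(cls_decisions, test_labels)) / len(test_labels)
```

On real low-variance production data the two framings diverge; the study's finding is that the direct classification framing performed clearly better.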

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 128
283 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution

Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud

Abstract:

In this paper, the prediction of the aerodynamic behavior of the flow around a finned projectile will be validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann Method which uses a proprietary particle-based kinetic solver and an LES turbulence model coupled with the generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, and by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However, to simulate compressible flows, this method has a Mach number limitation because of the lattice discretization. Thanks to this flexible particle-based approach, the traditional meshing process is avoided, the discretization stage is strongly accelerated, reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis consists of varying the Mach number upward from M=0.5, comparing the axial force coefficient, the normal force slope coefficient and the pitch moment slope coefficient of the finned projectile obtained by XFlow with the experimental data. The slope coefficients are obtained using finite difference techniques in the linear range of the polar curve. The aim of such an analysis is to find the limiting Mach number value beyond which the effects of high fluid compressibility (related to the transonic flow regime) cause the XFlow simulations to differ from the experimental results. 
This will allow identifying the critical Mach number which limits the validity of the isothermal formulation of XFlow and beyond which a fully compressible solver implementing a coupled momentum-energy equations would be required.
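The slope coefficients mentioned above are obtained by finite differences in the linear range of the polar curve. A minimal sketch using a synthetic linear polar; the angle grid and the value CN_alpha = 8 per radian are illustrative, not the ANF data:

```python
import math

def slope_coefficient(alpha_deg, coeff):
    """Average central-difference slope d(coeff)/d(alpha), per radian,
    over the interior points of the (assumed linear) polar."""
    slopes = []
    for i in range(1, len(alpha_deg) - 1):
        da = math.radians(alpha_deg[i + 1] - alpha_deg[i - 1])
        slopes.append((coeff[i + 1] - coeff[i - 1]) / da)
    return sum(slopes) / len(slopes)

# synthetic polar: angle of attack (deg) vs normal force coefficient,
# built to have CN_alpha = 8.0 per radian exactly
alpha = [-2.0, -1.0, 0.0, 1.0, 2.0]
cn = [8.0 * math.radians(a) for a in alpha]
cn_alpha = slope_coefficient(alpha, cn)
```

Applied to the XFlow and experimental polars, the same differencing yields the slope coefficients that are compared across Mach numbers.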

Keywords: CFD, computational fluid dynamics, drag, finned projectile, lattice-boltzmann method, LBM, lift, mach, pitch

Procedia PDF Downloads 398
282 Inversion of PROSPECT+SAIL Model for Estimating Vegetation Parameters from Hyperspectral Measurements with Application to Drought-Induced Impacts Detection

Authors: Bagher Bayat, Wouter Verhoef, Behnaz Arabi, Christiaan Van der Tol

Abstract:

The aim of this study was to follow the canopy reflectance patterns in response to soil water deficit and to detect trends of change in the biophysical and biochemical parameters of grass (Poa pratensis species). We used visual interpretation, imaging spectroscopy and radiative transfer model inversion to monitor the gradual manifestation of water stress effects in a laboratory setting. Plots of 21 cm x 14.5 cm surface area with Poa pratensis plants that formed a closed canopy were subjected to water stress for 50 days. On a regular weekly schedule, canopy reflectance was measured. In addition, Leaf Area Index (LAI), chlorophyll (a+b) content (Cab) and Leaf Water Content (Cw) were measured at regular time intervals. The 1-D bidirectional canopy reflectance model SAIL, coupled with the leaf optical properties model PROSPECT, was inverted using the hyperspectral measurements by means of an iterative optimization method to retrieve the vegetation biophysical and biochemical parameters. The relationships of retrieved LAI, Cab, Cw and Cs (senescent material) with soil moisture content were established for two separate groups: stressed and non-stressed. To differentiate the water stress condition from the non-stressed condition, a threshold was defined based on the laboratory-produced Soil Water Characteristic (SWC) curve. All parameters retrieved by model inversion from the canopy spectral data showed good correlation with soil water content under the water stress condition. These parameters co-varied with soil moisture content under the stress condition (Chl: R2=0.91, Cw: R2=0.97, Cs: R2=0.88 and LAI: R2=0.48) at the canopy level. To validate the results, the relationship between the vegetation parameters measured in the laboratory and soil moisture content was established. The results were fully in agreement with the modeling outputs and confirmed the results produced by radiative transfer model inversion and spectroscopy. 
Since water stress changes all parts of the spectrum, we concluded that analysis of the reflectance spectrum in the VIS-NIR-MIR region is a promising tool for monitoring water stress impacts on vegetation.
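The iterative model-inversion step can be sketched with a toy forward model standing in for PROSPECT+SAIL; the band responses, parameter grids and values below are illustrative only. A synthetic spectrum is generated from known LAI and Cab, then the spectral RMSE is minimised over a parameter grid:

```python
import math

def toy_forward(lai, cab, wavelengths):
    """Toy stand-in for PROSPECT+SAIL: visible reflectance falls with chlorophyll,
    NIR reflectance rises with LAI (illustrative only)."""
    refl = []
    for w in wavelengths:
        if w < 700:                      # visible: chlorophyll absorption
            refl.append(0.25 * math.exp(-cab / 40.0))
        else:                            # NIR: canopy scattering grows with LAI
            refl.append(0.5 * (1.0 - math.exp(-0.6 * lai)))
    return refl

def invert(measured, wavelengths):
    """Brute-force inversion: minimise RMSE between measured and modelled spectra."""
    best = None
    for lai10 in range(1, 81):           # LAI grid 0.1 .. 8.0
        for cab in range(5, 101, 5):     # Cab grid 5 .. 100 ug/cm2
            lai = lai10 / 10.0
            model = toy_forward(lai, cab, wavelengths)
            rmse = math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, model))
                             / len(measured))
            if best is None or rmse < best[0]:
                best = (rmse, lai, cab)
    return best[1], best[2]

wl = [550, 670, 800, 900]
measured = toy_forward(3.0, 40, wl)      # synthetic "measurement" with known truth
lai_hat, cab_hat = invert(measured, wl)
```

A real inversion replaces the grid search with a gradient-based or global optimiser and the toy model with the coupled PROSPECT+SAIL code, but the cost function is the same.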

Keywords: hyperspectral remote sensing, model inversion, vegetation responses, water stress

Procedia PDF Downloads 207
281 Adsorption of Congo Red from Aqueous Solution by Raw Clay: A Fixed Bed Column Study

Authors: A. Ghribi, M. Bagane

Abstract:

The discharge of dyes in industrial effluents is of great concern because their presence and accumulation have a toxic or carcinogenic effect on living species. The removal of such compounds at such low levels is a difficult problem. Physicochemical techniques such as coagulation, flocculation, ozonation, reverse osmosis and adsorption on activated carbon, manganese oxide, silica gel and clay are among the methods employed. The adsorption process is an effective and attractive proposition for the treatment of dye-contaminated wastewater. Activated carbon adsorption in fixed beds is a very common technology in the treatment of water and especially in decolouration processes. However, it is expensive, and the powdered form is difficult to separate from the aquatic system when it becomes exhausted or the effluent reaches the maximum allowable discharge level. The regeneration of exhausted activated carbon by chemical and thermal procedures is also expensive and results in loss of the sorbent. Dye molecules also have a very high affinity for clay surfaces and are readily adsorbed when added to a clay suspension. The elimination of organic dyes by clay has been studied by several researchers. The focus of this research was to evaluate the adsorption potential of raw clay for removing congo red from aqueous solutions using a laboratory fixed-bed column. The continuous sorption process was conducted in this study in order to simulate industrial conditions. The effect of process parameters, such as inlet flow rate, adsorbent bed height and initial adsorbate concentration, on the shape of the breakthrough curves was investigated. A glass column with an internal diameter of 1.5 cm and a height of 30 cm was used as the fixed-bed column. The pH of the feed solution was set at 7. Experiments were carried out at different bed heights (5-20 cm), influent flow rates (1.6-8 mL/min) and influent congo red concentrations (10-50 mg/L). 
The obtained results showed that the adsorption capacity increases with the bed depth and the initial concentration, and decreases at higher flow rates. Column regeneration was possible for four adsorption-desorption cycles. The column study demonstrates the excellent adsorption capacity of raw clay for the removal of congo red from aqueous solution. The uptake of congo red through a fixed-bed column was dependent on the bed depth, the influent congo red concentration and the flow rate.
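Breakthrough curves like those measured above are commonly summarised with simple sigmoid models such as the Yoon-Nelson equation, C/C0 = 1/(1 + exp(-k(t - tau))), where tau is the time to 50% breakthrough. A minimal sketch with hypothetical fitted constants, not the study's values:

```python
import math

def yoon_nelson(t, k, tau):
    """Yoon-Nelson breakthrough model: effluent/influent ratio C/C0 at time t.

    k   - rate constant (1/min)
    tau - time required for 50% breakthrough (min)
    """
    return 1.0 / (1.0 + math.exp(-k * (t - tau)))

# hypothetical fit for one run (e.g. C0 = 30 mg/L, flow 4 mL/min, bed 10 cm)
k, tau = 0.05, 120.0
curve = [(t, yoon_nelson(t, k, tau)) for t in range(0, 241, 60)]
```

Fitting k and tau for each bed height, flow rate and concentration quantifies the trends reported above (deeper beds shift tau later; higher flow rates shift it earlier).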

Keywords: adsorption, breakthrough curve, clay, congo red, fixed bed column, regeneration

Procedia PDF Downloads 311
280 The Effects of Shift Work on Neurobehavioral Performance: A Meta Analysis

Authors: Thomas Vlasak, Tanja Dujlociv, Alfred Barth

Abstract:

Shift work is an essential element of modern labor, ensuring ideal conditions of service for today's economy and society. Despite its beneficial properties, its impact on the neurobehavioral performance of exposed subjects remains controversial. This meta-analysis aims to provide a first summary of the association between shift work exposure and different cognitive functions. A literature search was performed via the databases PubMed, PsycINFO, PsycARTICLES, MedLine, PsycNET, and Scopus, including eligible studies until December 2020 that compared shift workers with non-shift workers on neurobehavioral performance tests. A random-effects model was carried out using Hedges' g as the meta-analytical effect size, with a restricted maximum-likelihood estimator, to summarize the mean differences between the exposure group and controls. The heterogeneity of effect sizes was addressed by a sensitivity analysis using funnel plots, Egger's tests, p-curve analysis, meta-regressions, and subgroup analysis. The meta-analysis included 18 studies, yielding a total sample of 18,802 participants and 37 effect sizes concerning six different neurobehavioral outcomes. The results showed significantly worse performance in shift workers compared to non-shift workers in the following cognitive functions, with g (95% CI): processing speed 0.16 (0.02 - 0.30), working memory 0.28 (0.51 - 0.50), psychomotor vigilance 0.21 (0.05 - 0.37), cognitive control 0.86 (0.45 - 1.27), and visual attention 0.19 (0.11 - 0.26). Neither significant moderating effects of publication year or study quality nor significant subgroup differences regarding type of shift or type of profession were indicated for the cognitive outcomes. These are the first meta-analytical findings associating shift work with decreased cognitive performance in processing speed, working memory, psychomotor vigilance, cognitive control, and visual attention.
Further studies should focus on a more homogeneous measurement of cognitive functions, a more precise assessment of shift-work experience, and occupation types that are underrepresented in the current literature (e.g., law enforcement). In occupations where shift work is fundamental (e.g., healthcare, industry, law enforcement), protective countermeasures should be promoted for workers.
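The random-effects pooling described above can be sketched minimally. The study used a restricted maximum-likelihood estimator of between-study variance; the simpler method-of-moments (DerSimonian-Laird) estimator is shown here instead, with made-up study data, purely as an illustration of the mechanics:

```python
import math

def pooled_effect_dl(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird).

    effects: per-study Hedges' g values; variances: their sampling variances.
    Returns the pooled g and an approximate 95% confidence interval.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sw
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * gi for wi, gi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

With per-outcome effect sizes and variances extracted from the primary studies, this yields summary estimates of the same form as the g (95% CI) values reported above.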

Keywords: meta-analysis, neurobehavioral performance, occupational psychology, shift work

Procedia PDF Downloads 98
279 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes, and it assists in early disease detection and customized treatment planning for each person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, helping them determine the number of beds or doctors they require given the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images. Patients were grouped by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently. It comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
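One of the techniques named above, association rule mining, can be sketched minimally. The transaction data, item names, and thresholds below are hypothetical, not the project's data:

```python
from collections import Counter
from itertools import combinations

def find_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Tiny association-rule miner over item pairs.

    Returns rules as (antecedent, consequent, support, confidence).
    """
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        items = set(t)
        item_counts.update(items)
        pair_counts.update(frozenset(p) for p in combinations(sorted(items), 2))
    rules = []
    for pair, cnt in pair_counts.items():
        if cnt / n < min_support:
            continue  # prune infrequent pairs
        a, b = tuple(pair)
        for x, y in ((a, b), (b, a)):
            conf = cnt / item_counts[x]  # P(y | x)
            if conf >= min_confidence:
                rules.append((x, y, cnt / n, conf))
    return rules

# Hypothetical treatment/response records
records = [["drugA", "responseGood"], ["drugA", "responseGood"],
           ["drugA", "responsePoor"], ["drugB", "responseGood"]]
rules = find_rules(records, min_support=0.5, min_confidence=0.6)
```

A production system would use a full Apriori or FP-growth implementation; this pairwise version only illustrates the support/confidence idea linking treatments to responses.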

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 52
278 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple, and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as Ubidecarenone or Ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, Q and E were determined alternately, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, over wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230-300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration set (x-block) to its corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, within the defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet, with excellent recoveries.
The ANN method was superior to the derivative technique, as the former determined both drugs under non-linear experimental conditions. It also offers speed, high accuracy, and savings in effort and cost, and requires little analyst intervention in its application. Although the ANN technique needed a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
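The zero-crossing first-derivative technique can be sketched numerically: take a central-difference derivative of the spectrum and read the D1 amplitude at the other analyte's zero-crossing wavelength. The synthetic Gaussian band below is illustrative only, not the actual spectra of Q or E:

```python
import math

def first_derivative(wavelengths, absorbances):
    """Central-difference first derivative (D1) of an absorption spectrum."""
    d1 = [(absorbances[i + 1] - absorbances[i - 1]) /
          (wavelengths[i + 1] - wavelengths[i - 1])
          for i in range(1, len(wavelengths) - 1)]
    return wavelengths[1:-1], d1

def amplitude_at(wavelengths, values, target_nm):
    """D1 amplitude at the wavelength closest to target_nm."""
    i = min(range(len(wavelengths)), key=lambda j: abs(wavelengths[j] - target_nm))
    return values[i]

# Synthetic Gaussian band centred at 260 nm (illustrative, not real Q/E data)
wl = list(range(230, 301))
spectrum = [math.exp(-((l - 260) ** 2) / 200.0) for l in wl]
wl_d, d1 = first_derivative(wl, spectrum)
# The D1 amplitude crosses zero at the band centre; amplitudes read at a
# second analyte's zero-crossing would be correlated to concentration.
```

In practice, the D1 amplitudes read at 285 nm (for Q) and 235 nm (for E) would be regressed against standard concentrations to build the linear calibration curves the abstract describes.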

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 429
277 Development of Mechanisms of Value Creation and Risk Management Organization in the Conditions of Transformation of the Economy of Russia

Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Eugenia V. Klicheva

Abstract:

Under modern conditions, the scientific treatment of problems in developing mechanisms of value creation and risk management acquires special relevance. The formation of economic knowledge has led to constant analysis of consumer behavior by all players in national and world markets. The main objectives of management systems in modern conditions are the development of effective mechanisms for analyzing demand, which is crucial for identifying the consumer-relevant characteristics of future production, and for managing the risks connected with the development of that production. The modern period of economic development is characterized by a high level of business globalization and rigid competition. At the same time, a considerable share of the cost of new products and services has a non-material, intellectual nature. In Russia, small innovative firms have developed most successfully in recent years. Through their unique technologies and new approaches to process management, which form the basis of their intellectual capital, such firms can show flexibility and succeed in the market. As a rule, such enterprises should have a highly flexible structure, avoiding rigid schemes of subordination and demanding essentially new incentives for involving personnel in innovative activity. Such structures, as well as a new approach to management, can be built on value-oriented management, which is directed toward a gradual change in the consciousness of personnel and the formation of groups of adherents engaged in solving shared innovative tasks. At the same time, value changes can gradually encompass not only the innovative firm's staff but also the structure of its corporate partners. The introduction of new technologies is a significant factor contributing to the development of new value imperatives and accelerating change in the organization's value systems.
This is because new technologies change the internal environment of the organization in such a way that the old system of values becomes inefficient under the new conditions. The introduction of new technologies often demands changes in the structure of employee interaction and training in new principles of work. During the introduction of new technologies and the accompanying change in the value system, the structure of managing the organization's values also changes. This is due to the need to involve more staff in justifying and consolidating the new value system and to incorporate their views into the motivational potential of the organization's new value system.

Keywords: value, risk, creation, problems, organization

Procedia PDF Downloads 266
276 Emergency Multidisciplinary Continuing Care Case Management

Authors: Mekroud Amel

Abstract:

Emergency departments are known for their heavy workload, the variety of pathologies treated, and the difficulties of management under a continuous influx of patients. This study examines the role of our department in the management of patients presenting with two or three mild-to-moderate organ failures involving several disciplines simultaneously, and the effect of this management on the skills and efficiency of our team. Borderline cases spanning two, three, or more disciplines, with instability of a vital function, were successfully managed in the emergency room; the therapeutic procedures adopted, their consequences for the quality and level of care delivered by our team, and their logistical and pedagogical consequences are presented. The consequences for the emergency teams were positive, and only in rare situations negative. Clinically, the cases involved the entanglement of hemodynamic distress (right, left, or global heart involvement, tamponade, low output with acute pulmonary edema, and/or shock) with respiratory distress (more or less profound hypoxemia, disorders of gas exchange related to bacterial or viral lung infection, pleurisy, pneumothorax, or bronchoconstrictive crisis);
with neurological disorders such as recent stroke or coma; and with metabolic disorders such as hyperkalemia, renal insufficiency, or severe ionic disorders, including accidents with vitamin K antagonists, with or without septate effusion of one or more serous membranes and with or without tamponade. This is a retrospective, single-center, descriptive study covering the period from 05/01/2022 to 10/31/2022. The purpose of our work was to search for a statistically significant link between the type of moderate-to-severe multivisceral pathology managed in the emergency room and the efficiency of the healthcare team, its level of care, and the optional care offered to patients. Statistical test used: the chi-square test, to assess the link between the resolution of serious multidisciplinary cases in the emergency room and the team's effectiveness in managing complicated cases. The management of the clinical cases most difficult for organ specialties gave the general-practitioner emergency teams a broad perspective and improved their efficiency in handling the emergencies received.
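The chi-square test the study relies on can be illustrated on a hypothetical 2x2 table (say, complex multidisciplinary case vs. simple case against successful vs. unsuccessful resolution); the counts below are invented, not the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.

    table: ((a, b), (c, d)) of observed counts, rows = groups,
    columns = outcomes. Larger values indicate a stronger association.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n  # under independence
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: (resolved, unresolved) for complex vs. simple cases
stat = chi_square_2x2(((10, 20), (20, 10)))
# Compare stat against the chi-square critical value with 1 degree of freedom
# (3.84 at the 0.05 level) to judge significance.
```

In a real analysis one would use a statistics package (e.g., `scipy.stats.chi2_contingency`), which also returns the p-value and handles larger tables.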

Keywords: emergency care teams, management of patients with dysfunction of more than one organ, learning curve, quality of care

Procedia PDF Downloads 63
275 Unpredictable Territorial Interiority: Learning the Spatiality from the Early Space Learners

Authors: M. Mirza Y. Harahap

Abstract:

This paper explores the interiority of children’s territorialisation in the context of domestic space by looking at their affective relations with their surroundings. Examining its spatiality, the research focuses on the interactions that develop between children and the things in their house, specifically those which leave traces indicating the very arena of their territory. As early learners whose minds and bodies are still developing, children are hypothetically distinct in the way they territorialise space. Rules, common sense, and other forms of common acceptance among adults might not be relevant to the way children territorialise space. Unpredictability, inappropriateness, and unimaginability hypothetically characterise their unique endeavour when territorialising space; the purpose might even be insignificant, expressing their unrestricted development. This indicates what the interiority of children’s territorialisation in a domestic space context actually is. It also implies a new way of seeing territory, since the act of territorialisation has a natural purpose: to claim space and regard it as one’s own. Aiming to disclose the above characteristics of territorialisation, this paper presents a qualitative study comprising a comprehensive analysis as follows: 1) collecting the various territorial traces left by the children’s activities within their respective houses; within this stage, the data are categorised by territorial strategy and tactic, resulting in an overall map of the children’s territorial interiority that expresses its focuses, range, and ways; 2) examining the interactions that occurred between the children and the spatial elements within the house.
Stressing the affective relations, this stage revealed the immaterial aspect of the children’s territorialisation and thus disclosed the unseen spatial dimension of territorialisation; and 3) synthesising the previous two stages. Correlating the results of the two stages helps us understand the children’s unpredictable, inappropriate, and unimaginable territorial interiority. It also helps us justify how children learn space through the act of territorialisation, its importance, and its position in the conception of interiority. The discussed relation between the children and their houses, covering both their physical and imaginary entities as part of the overall dwelling space, also gives us a better understanding of the specific spatial elements that are significant and undeniably important for children’s spatial learning process. This last finding in particular helps determine what kinds of spatial elements need to exist in a house, and thus serves design development purposes. Overall, the study broadens our mindset regarding territory, dwelling, interiority, and the overall conception of interior architecture, promising opportunities for further research within the field of interior architecture.

Keywords: children, interiority, relation, territory

Procedia PDF Downloads 122