Search results for: estimated model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2284


394 A Study of Life Expectancy in an Urban Set up of North-Eastern India under Dynamic Consideration Incorporating Cause Specific Mortality

Authors: Mompi Sharma, Labananda Choudhury, Anjana M. Saikia

Abstract:

Background: The period life table assumes that the mortality pattern observed in a given period will persist throughout the lives of the population. However, mortality rates have been observed to decline continuously. If the rates of change of the probabilities of death are incorporated into a life table, we obtain a dynamic life table. Although mortality has been declining in all parts of India, it is of interest whether these declines have appeared more strongly in urban areas of underdeveloped regions such as North-Eastern India. An attempt has therefore been made to study the mortality pattern and the life expectancy under a dynamic scenario in Guwahati, the biggest city of North-Eastern India. Further, if the probabilities of death change, their constituent cause-specific probabilities may also change. Since cardiovascular disease (CVD) is the leading cause of death in Guwahati, an attempt has also been made to formulate a dynamic cause-specific death ratio and probabilities of death due to CVD. Objectives: To construct a dynamic life table for Guwahati for the year 2011 based on the rates of change of probabilities of death over the previous 10 and 25 years (i.e., since 2001 and 1986), and to compute the corresponding dynamic cause-specific death ratio and probabilities of death due to CVD. Methodology and Data: The study uses the method proposed by Denton and Spencer (2011) to construct the dynamic life table for Guwahati. Death records for the years 1986, 2001 and 2011 are taken from the Office of Births and Deaths, Guwahati Municipal Corporation. Population data are taken from the 2001 and 2011 censuses of India; the population for 1986 has been estimated.
The cause-of-death ratio and probabilities of death due to CVD are also computed for the aforementioned years and then extended to the dynamic set-up for the year 2011 by considering the rates of change of those probabilities over the previous 10 and 25 years. Findings: The dynamic life expectancy at birth (LEB) for Guwahati is higher than the corresponding period-table value by 3.28 (5.65) years for males and 8.30 (6.37) years for females over the 10-year (25-year) horizon. Life expectancies under dynamic consideration in all other age groups are likewise higher than the usual life expectancies, which is consistent with the gradual decline in probabilities of death between 1986 and 2011. Further, a continuous decline has been observed in the death ratio due to CVD, along with the cause-specific probabilities of death, for both sexes. As a consequence, the dynamic cause-of-death probability due to CVD is lower than under the usual procedure. Conclusion: Since incorporating changing mortality rates into the period life table for Guwahati results in higher life expectancies and lower probabilities of death due to CVD, this approach is likely to reflect the mortality situation prevailing in the city more realistically.
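The dynamic adjustment described in this abstract can be sketched numerically. The following is a simplified illustration (not the Denton-Spencer method in full, and with made-up probabilities rather than the Guwahati data) of how projecting each age's probability of death by its observed annual rate of decline raises life expectancy at birth:

```python
# Illustrative dynamic life table sketch (simplified vs. Denton &
# Spencer 2011; probabilities are made up, not the Guwahati data).

def life_expectancy(qx):
    """Life expectancy at birth from single-year death probabilities,
    assuming deaths occur mid-interval (ax = 0.5)."""
    lx, person_years = 1.0, 0.0
    for q in qx:
        dx = lx * q                       # deaths in this interval
        person_years += lx - 0.5 * dx     # years lived in the interval
        lx -= dx
    return person_years

# Period probabilities of death observed n = 10 years apart.
qx_2001 = [0.050, 0.010, 0.020, 0.050, 0.150, 0.400, 1.0]
qx_2011 = [0.040, 0.008, 0.017, 0.045, 0.140, 0.380, 1.0]
n = 10

# Annual rate of change of each qx; a person aged x in 2011 reaches age
# x + k in year 2011 + k, so apply k further years of observed decline.
rates = [(q2 / q1) ** (1.0 / n) for q1, q2 in zip(qx_2001, qx_2011)]
qx_dynamic = [q * r ** k for k, (q, r) in enumerate(zip(qx_2011, rates))]

e0_period = life_expectancy(qx_2011)
e0_dynamic = life_expectancy(qx_dynamic)
assert e0_dynamic > e0_period   # declining mortality raises dynamic LEB
```

With declining mortality the projected cohort probabilities are smaller than the period ones, so the dynamic LEB exceeds the period LEB, as the abstract reports.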

Keywords: cause specific death ratio, cause specific probabilities of death, dynamic, life expectancy

Procedia PDF Downloads 220
393 Pooled Analysis of Three School-Based Obesity Interventions in a Metropolitan Area of Brazil

Authors: Rosely Sichieri, Bruna K. Hassan, Michele Sgambato, Barbara S. N. Souza, Rosangela A. Pereira, Edna M. Yokoo, Diana B. Cunha

Abstract:

Obesity is increasing at a fast rate in low- and middle-income countries, where few school-based obesity interventions have been conducted. Results of obesity prevention studies are still inconclusive, mainly due to underestimation of sample size in cluster-randomized trials and overestimation of changes in body mass index (BMI). The pooled analysis in the present study overcomes these design problems by analyzing 4,448 students (mean age 11.7 years) from three randomized behavioral school-based interventions conducted in public schools of the metropolitan area of Rio de Janeiro, Brazil. The three studies focused on encouraging students to change their drinking and eating habits over one school year, with monthly 1-h sessions in the classroom. Folders explaining the intervention program and suggesting family participation, such as reducing the purchase of sodas, were sent home. Classroom activities were delivered by research assistants in the first two interventions and by the regular teachers in the third, except for a culinary class aimed at developing cooking skills to increase healthy eating choices. The first intervention was conducted in 2005 with 1,140 fourth graders from 22 public schools; the second, with 644 fifth graders from 20 public schools in 2010; and the last one, with 2,743 fifth and sixth graders from 18 public schools in 2016. The result was a non-significant change in BMI after one school year of positive changes in dietary behaviors associated with obesity. Pooled intention-to-treat analysis using linear mixed models was used for the overall analysis and for subgroup analyses by BMI status, sex, and race. The estimated mean BMI changes were from 18.93 to 19.22 in the control group and from 18.89 to 19.19 in the intervention group, with a p-value for change over time of 0.94. Control and intervention groups were balanced at baseline.
Subgroup analyses were statistically and clinically non-significant, except in the non-overweight/obese group, where the intervention showed a 0.05 reduction in BMI compared with control. In conclusion, this large pooled analysis showed a very small effect on BMI, and only in normal-weight students. The results are in line with many school-based initiatives that have been promising in modifying behaviors associated with obesity but have had no impact on excessive weight gain. Changes in BMI may require large changes in energy balance that are hard to achieve in primary prevention at the school level.
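A minimal sketch of the logic behind the pooled comparison, using simulated data with a true null effect and analysing BMI change at the school (cluster) level; the paper itself fitted linear mixed models, which this simplified version does not reproduce:

```python
# Cluster-level comparison of BMI change between arms on simulated data
# (true effect is zero, matching the pooled null result in the study).
import random

random.seed(1)

def simulate_school(n_students=60):
    """Baseline and follow-up BMI for one school's students."""
    school_shift = random.gauss(0, 0.3)           # school-level clustering
    rows = []
    for _ in range(n_students):
        base = random.gauss(18.9, 2.0) + school_shift
        rows.append((base, base + random.gauss(0.3, 0.5)))  # ~0.3 gain/yr
    return rows

arms = ["control"] * 15 + ["intervention"] * 15   # cluster randomisation
schools = [(arm, simulate_school()) for arm in arms]

def mean_change(arm_name):
    """Average BMI change per school, then across schools, so the school
    (the unit of randomisation) is the unit of analysis."""
    per_school = [sum(b - a for a, b in rows) / len(rows)
                  for arm, rows in schools if arm == arm_name]
    return sum(per_school) / len(per_school)

effect = mean_change("intervention") - mean_change("control")
```

Averaging within schools first respects the clustered design; with no true effect, `effect` hovers near zero, mirroring the p = 0.94 result.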

Keywords: adolescents, obesity prevention, randomized controlled trials, school-based study

Procedia PDF Downloads 140
392 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

Recession has a profound negative effect on all stakeholders in an economy. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is unsuitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify the findings on real data for the Czech Republic and Germany. The authors present a family of discrete-choice probit models with parameters estimated by maximum likelihood. In its basic form, the probit models a univariate series of recessions and expansions in the economic cycle of a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict developments in subsequent periods. The bivariate extension utilizes information from a foreign economy by incorporating correlation of the error terms, thus modelling the dependencies between the two countries; bivariate models predict a bivariate time series of economic states in both economies and thus enhance predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal database, as this information is available in advance and does not undergo retroactive revisions.
Just as importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market trends which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country, one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, and three contained a dynamic component.
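A hedged sketch of the univariate dynamic probit building block described above, with illustrative coefficients rather than the paper's estimates:

```python
# Dynamic probit sketch: P(recession at t) depends on a lagged leading
# indicator (yield-curve spread) and the previous state. Coefficients
# b0, b1, gamma are illustrative, not estimated from the paper's data.
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def recession_prob(spread_lag, recession_lag, b0=-1.2, b1=-0.8, gamma=1.5):
    """P(y_t = 1 | x_{t-1}, y_{t-1}) = Phi(b0 + b1*spread + gamma*y_{t-1})."""
    return phi(b0 + b1 * spread_lag + gamma * recession_lag)

def log_likelihood(states, spreads, **params):
    """Bernoulli log-likelihood over t = 1..T-1 -- the objective that
    maximum likelihood estimation would maximise over the parameters."""
    ll = 0.0
    for t in range(1, len(states)):
        p = recession_prob(spreads[t - 1], states[t - 1], **params)
        ll += math.log(p if states[t] else 1.0 - p)
    return ll

# An inverted yield curve (negative spread) raises recession probability,
# and being in recession already raises it further (the dynamic term).
assert recession_prob(-0.5, 0) > recession_prob(1.0, 0)
assert recession_prob(-0.5, 1) > recession_prob(-0.5, 0)
```

The bivariate extension in the paper additionally correlates the error terms of two such equations (one per country), which this univariate sketch omits.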

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 231
391 An Observational Study Assessing the Baseline Communication Behaviors among Healthcare Professionals in an Inpatient Setting in Singapore

Authors: Pin Yu Chen, Puay Chuan Lee, Yu Jen Loo, Ju Xia Zhang, Deborah Teo, Jack Wei Chieh Tan, Biauw Chi Ong

Abstract:

Background: Synchronous communication, such as telephone calls, remains the standard communication method between nurses and other healthcare professionals in Singapore public hospitals despite advances in asynchronous technological platforms such as instant messaging. Although miscommunication is one of the most common causes of lapses in patient care, research characterizing baseline inter-professional healthcare communication in a hospital setting is scarce due to logistic difficulties. Objective: This study aims to characterize the frequency and patterns of communication behaviours among healthcare professionals. Methods: The one-week observational study was conducted from Monday through Sunday at the nursing station of a cardiovascular medicine and cardiothoracic surgery inpatient ward at the National Heart Centre Singapore. Subjects were shadowed by two physicians for sixteen hours a day, covering consecutive morning and afternoon nursing shifts. Communications were logged and characterized by type, duration, caller, and recipient. Results: A total of 1,023 communication events involving attempted use of the common telephones at the nursing station were logged over the week, corresponding to one event every 5.45 minutes (SD 6.98, range 0-56 minutes). Nurses initiated the highest proportion of outbound calls (38.7%) via the nursing station common phone. A total of 179 face-to-face communications (17.50%), 362 inbound calls (35.39%), 481 outbound calls (47.02%), and 1 emergency alert (0.10%) were captured. The average response time for task-oriented communications was 159 minutes (SD 387.6, range 86-231). Approximately 1 in 3 communications captured aimed to clarify patient-related information. The total time spent on synchronous communication events over one week, calculated from total inbound and outbound calls, was estimated at 7 hours.
Conclusion: The results of our study showed that there is a significant amount of time spent on inter-professional healthcare communications via synchronous channels. Integration of patient-related information and use of asynchronous communication channels may help to reduce the redundancy of communications and clarifications. Future studies should explore the use of asynchronous mobile platforms to address the inefficiencies observed in healthcare communications.

Keywords: healthcare communication, healthcare management, nursing, qualitative observational study

Procedia PDF Downloads 193
390 Modelling Flood Events in Botswana (Palapye) for Protecting Road Structures against Floods

Authors: Thabo M. Bafitlhile, Adewole Oladele

Abstract:

Botswana has long been affected by floods and is still experiencing such tragic events. Flooding occurs mostly in the North-West, North-East, and parts of the Central district due to the heavy rainfall experienced in these areas. Torrential rains have destroyed homes, roads, fields, livestock and livelihoods, and flooded dams. Palapye, in the Central district, has been experiencing floods ever since 1995, when its greatest flood on record occurred. Heavy storms result in floods and inundation, exacerbated by poor or absent drainage structures. Since floods are a part of nature, they have existed and will continue to exist, bringing further destruction. Floods also play a major role in the erosion and destruction of road structures. Already today, many culverts, trenches, and other drainage facilities lack the capacity to deal with the current frequency of extreme flows. Future changes in the pattern of hydro-climatic events will have implications for the design and maintenance costs of roads, and an increase in rainfall and severe weather events can raise the demand for emergency responses. Flood forecasting and warning is therefore a prerequisite for successful mitigation of flood damage. In flood-prone areas like Palapye, preventive measures should be taken to reduce the possible adverse effects of floods on the environment, including road structures. This paper therefore attempts to estimate the return periods associated with storms of different magnitudes from recorded historical rainfall depths using statistical methods. The method of annual maxima was used to select data sets for the rainfall analysis. The Type I extreme value (Gumbel), Log-Normal, and Log-Pearson Type III distributions were applied to the annual maximum series for the Palapye area to produce IDF curves.
The Kolmogorov-Smirnov and Chi-squared tests were used to confirm that the fitted distributions are appropriate for the location, i.e., that the data fit the distributions used to predict expected frequencies. This will be a beneficial tool for flood forecasting and water resource administration, as drainage can be properly designed on the basis of the estimated flood events, helping to reclaim and protect road structures from the adverse impacts of floods.
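As an illustration of the statistical method, a Gumbel (Type I extreme value) distribution can be fitted to an annual-maximum rainfall series by the method of moments and used to read off design depths for given return periods; the data below are invented, not the Palapye record:

```python
# Gumbel fit by the method of moments on an annual-maximum series
# (illustrative values, one per year; not the Palapye data).
import math

annual_max_mm = [62, 85, 71, 98, 55, 120, 77, 90, 66, 105,
                 81, 73, 140, 59, 95]

n = len(annual_max_mm)
mean = sum(annual_max_mm) / n
std = math.sqrt(sum((x - mean) ** 2 for x in annual_max_mm) / (n - 1))

beta = math.sqrt(6.0) * std / math.pi   # Gumbel scale parameter
mu = mean - 0.5772 * beta               # Gumbel location (Euler constant)

def design_depth(T):
    """Rainfall depth exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Longer return periods give larger design storms, as IDF curves require.
assert design_depth(100) > design_depth(50) > design_depth(10)
```

In practice the fitted distribution would then be checked against the data with the Kolmogorov-Smirnov or Chi-squared test, as the abstract describes.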

Keywords: drainage, estimate, evaluation, floods, flood forecasting

Procedia PDF Downloads 348
389 Characteristics of Pore Pressure and Effective Stress Changes in Sandstone Reservoir Due to Hydrocarbon Production

Authors: Kurniawan Adha, Wan Ismail Wan Yusoff, Luluan Almanna Lubis

Abstract:

Accurate pore pressure data make an important contribution to preventing hazardous events during oil and gas operations, and their availability also helps reduce operating costs. Suggested methods of pore pressure estimation are mostly complicated by the many assumptions and hypotheses used, while basic properties that may have a significant impact on the estimation model are somehow neglected. To date, most pore pressure determinations are estimated by data-model analysis and rarely include laboratory analysis, stratigraphic study or core-check measurement. This study developed a model that may be applied to investigate the changes in pore pressure and effective stress due to hydrocarbon production. In general, this paper focuses on the effect of the velocity model on pore pressure and effective stress changes due to hydrocarbon production, illustrated by changes in saturation. Core samples from the Miri field in Sarawak, Malaysia, where the formation consists of a sandstone reservoir, were used in this study. The study area is divided into sixteen (16) layers and encompasses six facies (A-F) from the outcrop used for the stratigraphic sequence model. The experimental work first involved data collection through a field study and the development of a stratigraphic sequence model based on the outcrop study. Porosity and permeability measurements were then performed after the samples were cut into 1.5-inch-diameter cores. Next, velocity was analyzed using SONIC OYO and AutoLab 500 instruments. Three (3) saturation scenarios were also run to represent the production history of the samples. Results from this study show the alteration of velocity for different saturations under different effective stress and pore pressure conditions. The water-saturated sample has the highest velocity, while the dry sample has the lowest value.
Compared with the oil-saturated samples, the water-saturated sample still shows the highest velocity, since water has a higher fluid density than oil. Furthermore, the water-saturated sample exhibits the highest velocity-derived parameters, such as Poisson's ratio and the ratio of P-wave to S-wave velocity (Vp/Vs). The results show that pore pressure values were reduced by the decrease in fluid content. Decreasing pore pressure may stiffen the elastic mineral frame, which tends to produce higher velocity. The alteration of pore pressure through changes in fluid content or saturation resulted in an alteration of velocity that trends proportionally with the effective stress.
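The velocity-derived parameters mentioned above can be computed directly from measured velocities; the values below are generic dry vs. water-saturated sandstone velocities chosen for illustration, not the Miri core measurements:

```python
# Velocity-derived parameters from P- and S-wave velocities; numbers are
# generic dry vs. brine-saturated sandstone values, not Miri data.

def poissons_ratio(vp, vs):
    """nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2))"""
    return (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))

dry = {"vp": 3200.0, "vs": 2100.0}   # m/s
wet = {"vp": 3600.0, "vs": 2050.0}   # fluid raises Vp, barely moves Vs

# Saturation raises both Vp/Vs and Poisson's ratio -- the signature the
# abstract reports for the water-saturated samples.
assert wet["vp"] / wet["vs"] > dry["vp"] / dry["vs"]
assert poissons_ratio(**wet) > poissons_ratio(**dry)
```

Because pore fluid stiffens the rock against compression but not shear, Vp rises with saturation while Vs barely changes, pushing both derived parameters up.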

Keywords: pore pressure, effective stress, production, Miri formation

Procedia PDF Downloads 268
388 The Effect of Sea Buckthorn (Hippophae rhamnoides L.) Berries on Some Quality Characteristics of Cooked Pork Sausages

Authors: Anna M. Salejda, Urszula Tril, Grażyna Krasnowska

Abstract:

The aim of this study was to analyze selected quality characteristics of cooked pork sausages manufactured with the addition of sea buckthorn (Hippophae rhamnoides L.) berry preparations. The stuffings of the model sausages consisted of pork, backfat, water, and additives such as curing salt and sodium isoascorbate. The functional additives used in the production process were two preparations obtained from dried sea buckthorn berries, in the form of a powder and a brew (water infusion). The berry powder was added in amounts of 1 and 3 g, while the water infusion replaced 50% and 100% of the ice water included in the meat-product formula. Control samples were produced without functional additives. The experimental stuffings were heat-treated in a water bath and stored for 4 weeks under chilled conditions (4±1ºC). The physical parameters of colour and texture profile, and the technological parameters of acidity, weight loss and water activity, were estimated. The effect of the sea buckthorn berry preparations on lipid oxidation during storage of the final products was determined by the TBARS method. The studies showed that the addition of sea buckthorn preparations to the meat-fatty batters significantly (P≤0.05) reduced the pH values of the sausage samples after thermal treatment. Moreover, the addition of berry powder caused significant differences (P≤0.05) in weight losses after the cooking process. The texture profile analysis indicated that the infusion prepared from dried sea buckthorn berries increased the springiness, gumminess and chewiness of the final meat products, while the highest amount of berry powder in the recipe decreased all measured texture parameters. The experimental preparations significantly decreased (P≤0.05) the lightness (L* parameter) of the meat products. At the same time, introducing 1 and 3 grams of berry powder into the meat-fatty batter increased the redness (a* parameter) of the samples under investigation.
A higher content of substances reacting with thiobarbituric acid was observed in the meat products produced without functional additives, indicating that sea buckthorn berry powder added to the meat-fatty batters provided greater protection against lipid oxidation in the cooked sausages.

Keywords: sea buckthorn, meat products, texture, color parameters, lipid oxidation

Procedia PDF Downloads 280
387 Familiarity with Flood and Engineering Solutions to Control It

Authors: Hamid Fallah

Abstract:

Flood is undoubtedly a natural disaster, and in practice it is considered the most devastating natural disaster in the world, both in terms of loss of life and of financial losses. From 1988 to 1997, about 390,000 people worldwide were killed by natural disasters, of which 58% were related to floods, 26% to earthquakes, and 16% to storms and other disasters. Total damages in these 10 years were about 700 billion dollars, of which 33%, 29% and 28% were related to floods, storms and earthquakes, respectively. The worrisome point has been the increasing trend of flood deaths and damages in the world in recent decades; growth of population and assets in flood plains, changes in hydro-systems and the destructive effects of human activities have been the main reasons for this increase. During rain and snowfall, some of the water is absorbed by the soil and plants, a percentage evaporates, and the rest flows away as runoff. Floods occur when the soil and plants cannot absorb the rainfall and, as a result, the natural river channel lacks the capacity to pass the generated runoff. On average, almost 30% of precipitation is converted into runoff, and this share increases with snowmelt. Recurring floods create an area around the river called the flood plain. River floods are often caused by heavy rains, in some cases accompanied by snowmelt. A flood that flows down a river with little or no warning is called a flash flood; casualties from these rapid floods, which occur in small watersheds, are generally higher than those of large river floods. Coastal areas are also subject to flooding caused by waves from strong storms on the ocean surface or by waves generated by undersea earthquakes. Floods not only damage property and endanger the lives of humans and animals, but also leave other effects.
Runoff from heavy rains causes soil erosion upstream and sedimentation problems downstream. The habitats of fish and other animals are often destroyed, and the high speed of the current increases the damage. Long-lasting floods stop traffic and prevent drainage and the economic use of land. Bridge supports, river banks, sewage outlets and other structures are damaged, and shipping and hydropower generation are disrupted. The economic losses of floods in the world are estimated at tens of billions of dollars annually.
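The roughly 30% of precipitation converted to runoff quoted above corresponds to a runoff coefficient C ≈ 0.3 in the rational method, Q = C·i·A, a standard first estimate of peak discharge for sizing drainage structures (a textbook formula, not taken from this abstract):

```python
# Rational-method sketch: Q = C * i * A with C ~ 0.3 matching the ~30%
# runoff share quoted above (a textbook formula, not from this paper).

def peak_discharge_m3s(c, intensity_mm_per_hr, area_km2):
    """Peak discharge in m^3/s: i converted mm/h -> m/s, A km^2 -> m^2."""
    return c * (intensity_mm_per_hr / 1000.0 / 3600.0) * (area_km2 * 1e6)

# A 50 mm/h design storm on a 2 km^2 catchment with C = 0.3:
q = peak_discharge_m3s(0.3, 50.0, 2.0)
```

The resulting discharge (here about 8.3 m³/s) is the kind of figure culverts and channels would be sized against.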

Keywords: flood, hydrological engineering, GIS, dam, small hydropower, suitability

Procedia PDF Downloads 44
386 Transition from Linear to Circular Business Models with Service Design Methodology

Authors: Minna-Maari Harmaala, Hanna Harilainen

Abstract:

Estimates of the economic value of transitioning to circular-economy models vary, but it has been put at $1 trillion of new business for the global economy. In Europe alone, estimates claim that adopting circular-economy principles could not only bring environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth, and that a circular system could improve competitiveness and unleash innovation. Yet most companies are not capturing these opportunities, and so even abundant circular opportunities remain uncaptured although they would seem inherently profitable. Service design, in broad terms, relates to developing an existing or new service or service concept with emphasis on the customer experience from the onset of the development process. It may even mean starting from scratch and co-creating the service concept entirely through customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement into the process of designing services that better resonate with customer needs. A business model is a depiction of how a company creates, delivers, and captures value, i.e., how it organizes its business; the process of business model development, adjustment or modification is also called business model innovation, which has become a part of business strategy. Our hypothesis is that, in addition to linear models still being easier to adopt and often having lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of whether customers will be willing and ready to adopt new circular business models.
In our research, we use a robust service design methodology to develop circular-economy solutions with two case-study companies. The aim of the process is not only to develop the service concepts and portfolio but to demonstrate that the willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to develop, test, and validate the new circular business models further. The results clearly indicate that among the customer groups there are specific customer personas that are willing to adopt circular solutions and, in fact, expect the companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferent customers, to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular-economy principles brings to the existing business model and how these can be integrated.

Keywords: business model innovation, circular economy, circular economy business models, service design

Procedia PDF Downloads 109
385 Long Term Survival after a First Transient Ischemic Attack in England: A Case-Control Study

Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski

Abstract:

Transient ischaemic attacks (TIAs) are warning signs of future strokes: TIA patients are at increased risk of stroke and cardiovascular events after a first episode. Most studies on TIA have focused on the occurrence of these ancillary events, while long-term mortality after TIA has received only limited attention. We undertook this study to determine the long-term hazards of all-cause mortality following a first episode of TIA using anonymised electronic health records (EHRs). We used a retrospective case-control design with electronic primary health care records from The Health Improvement Network (THIN) database. Patients born in or before 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017 were matched to three controls on age, sex and general medical practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying Weibull-Cox survival model which included both scale and shape effects and a random frailty effect of GP practice. 20,633 cases and 58,634 controls were included. Cases aged 39 to 60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared to matched controls (HR = 3.04, 95% CI 2.91-3.18). The HRs for cases aged 61-70, 71-76 and 77+ years were 1.98 (1.55-2.30), 1.79 (1.20-2.07) and 1.52 (1.15-1.97), respectively. Aspirin provided long-term survival benefits to cases: those aged 39-60 years on aspirin had HRs of 0.93 (0.84-1.00), 0.90 (0.82-0.98) and 0.88 (0.80-0.96) at 5, 10 and 15 years, respectively, compared to cases in the same age group who were not on antiplatelets, and similar beneficial effects of aspirin were observed in the other age groups. There were no significant survival benefits with other antiplatelet options, and no survival benefits of antiplatelet drugs were observed in controls.
Our study highlights the excess long-term risk of death among TIA patients and cautions that TIA should not be treated as a benign condition. It further suggests aspirin as the better option for secondary prevention in TIA patients, compared to the clopidogrel recommended by NICE guidelines. Management of risk factors and treatment strategies remain important challenges in reducing the burden of disease.
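A sketch of the proportional-hazards idea underlying the Weibull-Cox model, with illustrative shape, scale and coefficient values rather than the fitted THIN-study parameters; note that the paper's time-varying scale and shape effects relax exactly the constancy this toy version exhibits:

```python
# Proportional-hazards sketch: Weibull baseline hazard scaled by
# exp(beta * covariate). Shape, scale and beta are illustrative only.
import math

def weibull_hazard(t, shape=1.3, scale=80.0):
    """Baseline hazard of a Weibull(shape, scale) at time t > 0 (years)."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def hazard(t, tia=0, beta_tia=math.log(3.04)):
    """exp(beta) = 3.04 mimics the reported HR for cases aged 39-60."""
    return weibull_hazard(t) * math.exp(beta_tia * tia)

# Under plain proportional hazards the ratio is constant over time; the
# paper's time-varying scale and shape effects relax this constancy.
hr_5 = hazard(5.0, tia=1) / hazard(5.0, tia=0)
hr_15 = hazard(15.0, tia=1) / hazard(15.0, tia=0)
assert abs(hr_5 - 3.04) < 1e-9 and abs(hr_15 - 3.04) < 1e-9
```

Allowing covariates into both the scale and the shape of the Weibull, as the study does, lets the hazard ratio itself drift with follow-up time, which is how the aspirin HRs can differ at 5, 10 and 15 years.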

Keywords: dual antiplatelet therapy (DAPT), general practice, multiple imputation, The Health Improvement Network (THIN), hazard ratio (HR), Weibull-Cox model

Procedia PDF Downloads 125
384 Burden of Dengue in Northern India

Authors: Ashutosh Biswas, Poonam Coushic, Kalpana Baruah, Paras Singla, A. C. Dhariwal, Pawana Murthy

Abstract:

Aim: This study was conducted to estimate the burden of dengue in the capital region of India. Methodology: Seropositivity for dengue IgM antibody, NS1 antigen and IgG antibody was determined in samples from blood-bank donors who came to donate blood for patients admitted to hospital. Blood samples were collected throughout the year to estimate the seroprevalence of dengue within and outside the outbreak season. All subjects were asymptomatic at the time of blood donation. Results: A total of 1,558 donors were screened. On the basis of the inclusion/exclusion criteria, 1,531 subjects were enrolled; twenty-seven donors were excluded, of whom 6 were HIV-positive, 11 were positive for HBsAg and 10 for HCV. Mean age was 30.51 ± 7.75 years. Of the 1,531 subjects, 18 (1.18%) had a past history of typhoid fever, 28 (1.83%) of chikungunya fever, 9 (0.59%) of malaria and 43 (2.81%) of symptomatic dengue infection. About 2.22% (34) of subjects were sero-positive for NS1 Ag, with a peak point prevalence of 7.14% in the month of October, and about 5.49% (84) were sero-positive for IgM Ab, with a peak point prevalence of 14.29% in October. Sero-prevalence of IgG was detected in about 64.21% (983) of subjects. Conclusion: Acute asymptomatic dengue (NS1 Ag+ve) was observed in up to 7.14% of donors, all of whom had no symptoms at the time of sampling. This group poses a potential public health threat of transmitting dengue infection through blood transfusion (TTI) in the community, as NS1 Ag positivity indicates active viral infection.
A policy may therefore be implemented in blood banks to test for NS1 Ag, so that active dengue infection is detected and transfusion-transmitted dengue is prevented. Acute or subacute dengue infection (IgM Ab+ve) was observed in 5.49% of donors overall, rising to a peak point prevalence of 14.29% in October. About 64.21% of the population had been immunized by natural dengue infection (IgG Ab+ve) in this northern province of India, which might be helpful when planning the introduction of a dengue vaccine in the region. Blood samples in blood banks should be tested for dengue before transfusion, given that we estimated up to 7.14% NS1 Ag positivity in donors' samples, indicating the presence of dengue virus.
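The reported seroprevalences can be reproduced, with confidence intervals the abstract does not give, directly from the stated counts (34 NS1 Ag+, 84 IgM+, 983 IgG+ out of 1,531 enrolled donors); the Wilson interval used here is our choice, not the authors':

```python
# Seroprevalence with 95% Wilson confidence intervals, from the counts
# reported above (the interval choice is ours, not the authors').
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(
        p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

N = 1531
for marker, k in (("NS1 Ag", 34), ("IgM Ab", 84), ("IgG Ab", 983)):
    lo, hi = wilson_ci(k, N)
    print(f"{marker}: {k / N:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```

The point estimates match the 2.22%, 5.49% and 64.21% figures quoted in the abstract.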

Keywords: dengue burden, seroprevalence, asymptomatic dengue, dengue transmission through blood transfusion

Procedia PDF Downloads 126
383 Enhanced Photocatalytic Activities of TiO2/Ag2O Heterojunction Nanotubes Arrays Obtained by Electrochemical Method

Authors: Magdalena Diaka, Paweł Mazierski, Joanna Żebrowska, Michał Winiarski, Tomasz Klimczuk, Adriana Zaleska-Medynska

Abstract:

In recent years, TiO2 nanotubes have been widely studied due to their unique, highly ordered array structure, unidirectional charge transfer and higher specific surface area compared to conventional TiO2 powder. These photoactive materials, in the form of thin layers, can be activated by low-powered, low-cost irradiation sources (such as LEDs) to remove VOCs and microorganisms and to deodorize air streams. This is possible because they grow directly on a support material and have a high surface area, which guarantees enhanced photon absorption together with extensive adsorption of reactant molecules on the photocatalyst surface. TiO2 nanotubes also exhibit many other attractive properties, such as potential enhancement of electron percolation pathways, light conversion, and ion diffusion at the semiconductor-electrolyte interface. Pure TiO2 nanotubes have previously been used to remove organic compounds from the gas phase as well as in the water-splitting reaction. The major factors limiting the use of TiO2 nanotubes, which have not been fully overcome, are their relatively large band gap (3-3.2 eV) and the high recombination rate of photogenerated electron-hole pairs. Many different strategies have been proposed to solve this problem; titania nanostructures containing incorporated metal oxides like Ag2O show very promising new optical and photocatalytic properties, but there is still a very limited number of reports on the application of such TiO2/MxOy nanostructures. In the present work, we prepared TiO2/Ag2O nanotubes by anodization of Ti-Ag alloys containing 5, 10 and 15 wt.% Ag. The photocatalysts prepared in this way were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), luminescence spectroscopy and UV-Vis spectroscopy.
The activities of the new TiO2/Ag2O nanotubes were examined by photocatalytic degradation of toluene in the gas phase and of phenol in the aqueous phase, using a 1000 W xenon lamp (Oriel) and light-emitting diodes (LEDs) as irradiation sources. Additionally, the efficiency of bacteria (Pseudomonas aeruginosa) removal from the gas phase was estimated. The number of surviving bacteria was determined by the serial twofold dilution microtiter plate method in Tryptic Soy Broth medium (TSB, GibcoBRL).

Keywords: photocatalysis, antibacterial properties, titania nanotubes, new TiO2/MxOy nanostructures

Procedia PDF Downloads 276
382 Formulation and Test of a Model to Explain the Complexity of Road Accident Events in South Africa

Authors: Dimakatso Machetele, Kowiyou Yessoufou

Abstract:

Whilst several studies indicated that road accident events might be more complex than thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causality relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH (Generalized Auto-Regressive Conditional Heteroskedasticity) model to predict the future of road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a polynomial trend given by the equation y = -0.0114x²+1.2378x-2.2627 (R²=0.76), with y = death rate and x = year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes could be significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003) and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties could be linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for the number of fatal crashes, the analysis reveals that the total number of vehicles (P < 0.001), the numbers of registered (P < 0.001) and unregistered vehicles (P < 0.001), the population of the country (P < 0.001) and the total distance traveled by vehicles (P < 0.001) correlate significantly with the number of fatal crashes. 
However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to be roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties was also predicted to be roughly constant at 93,531 over time, although this number may reach 661,531 in the worst-case scenario. However, although the number of fatal crashes may decrease over time, it is forecasted to reach 11,241 fatal crashes within the next 10 years, with the worst-case scenario estimated at 19,034 within the same period. Finally, the number of fatalities is also predicted to be roughly constant at 14,739 but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing road accidents, casualties, fatal crashes, and deaths in South Africa.
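
The quadratic fatality-rate trend quoted above can be checked directly. A minimal sketch, assuming x is a year index rather than a calendar year (the abstract does not say which); the coefficients are copied verbatim from the fitted equation.

```python
# Evaluate the reported road-fatality trend and locate its peak.
# Assumption: x is a year index; coefficients taken verbatim from the abstract.
def death_rate(x):
    """Fatalities per 100,000 people: y = -0.0114x^2 + 1.2378x - 2.2627."""
    return -0.0114 * x ** 2 + 1.2378 * x - 2.2627

def peak_x():
    """Index at which the concave quadratic peaks: x* = -b / (2a)."""
    a, b = -0.0114, 1.2378
    return -b / (2 * a)

x_star = peak_x()               # ≈ 54.3
peak_rate = death_rate(x_star)  # ≈ 31.3 deaths per 100,000
```

Because the leading coefficient is negative, the fitted rate rises, peaks near the vertex, and then declines, which is consistent with the forecast of roughly constant-to-decreasing counts.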

Keywords: road accidents, South Africa, statistical modelling, trends

Procedia PDF Downloads 141
381 Other Cancers in Patients With Head and Neck Cancer

Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern

Abstract:

Introduction: Head and neck cancers (HNC) are often associated with the development of non-HNC primaries, as the risk factors that predispose patients to HNC are often risk factors for other cancers. Aim: We sought to evaluate whether there is an increased risk of smoking- and alcohol-related cancers, as well as other cancers, in HNC patients, and whether there is a difference between the rates of non-HNC primaries in Aboriginal compared with non-Aboriginal HNC patients. Methods: We performed a retrospective cohort analysis of 320 HNC patients from a single center in Western Australia, identifying 80 Aboriginal and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. We collected data on patient characteristics, tumour features, treatments, outcomes, and past and subsequent HNCs and non-HNC primaries. Results: In the overall study population, there were 86 patients (26.9%) with a metachronous or synchronous non-HNC primary. Non-HNC primaries were in fact significantly more common in the non-Aboriginal population than in the Aboriginal population (30% vs. 17.5%, p=0.02); however, half of these were patients with cutaneous squamous or basal cell carcinomas (cSCC/BCC) only. When cSCC/BCCs were excluded, non-Aboriginal patients had a similar rate to Aboriginal patients (16.7% vs. 15%, p=0.73). There were clearly more cSCC/BCCs in non-Aboriginal patients than in Aboriginal patients (16.7% vs. 2.5%, p=0.001) and more patients with melanoma (2.5% vs. 0%, p=NS). Rates of most cancers were similar between non-Aboriginal and Aboriginal patients, including prostate (2.9% vs. 3.8%), colorectal (2.9% vs. 2.5%), and kidney (1.2% vs. 1.2%), and these rates appeared comparable to Australian Age-Standardised Incidence Rates (ASIRs) in the general community. 
Oesophageal cancer occurred at double the rate in Aboriginal patients (3.8%) compared with non-Aboriginal patients (1.7%), far in excess of the ASIR, which corresponds to an estimated lifetime risk of 0.59% in the general population. Interestingly, lung cancer rates did not appear to be significantly increased in our cohort, with 2.5% of Aboriginal patients and 3.3% of non-Aboriginal patients having lung cancer, in line with the ASIR-estimated lifetime risk of 5% (by age 85). Interestingly, the rate of glioma in the non-Aboriginal population was higher than the ASIR, with 0.8% of non-Aboriginal patients developing glioma against an Australian average lifetime risk of 0.6% in the general population; as these are small numbers, this finding may well be due to chance. Unsurprisingly, second HNCs occurred at an increased incidence in our cohort, in 12.5% of Aboriginal patients and 11.2% of non-Aboriginal patients, compared to an ASIR of 17 cases per 100,000 persons, corresponding to an estimated lifetime risk of 1.70%. Conclusions: Overall, 26.9% of patients had a non-HNC primary. When cSCC/BCCs were excluded, Aboriginal and non-Aboriginal patients had similar rates of non-HNC primaries, although non-Aboriginal patients had a significantly higher rate of cSCC/BCCs. Aboriginal patients had double the rate of oesophageal primaries; however, this was not statistically significant, possibly due to small case numbers.
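
The 30% vs. 17.5% comparison above can be reproduced with a standard 2x2 test. A hedged sketch: a plain Pearson chi-square without continuity correction is used here; the authors' exact test may differ slightly, so the p-value will not match exactly, but the statistic lands in the significant range consistent with the reported p=0.02.

```python
# Pearson chi-square statistic for a 2x2 contingency table [[a, b], [c, d]].
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (1 degree of freedom, no continuity correction)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 72 of 240 non-Aboriginal vs. 14 of 80 Aboriginal patients with a non-HNC primary:
stat = chi_square_2x2(72, 168, 14, 66)
# stat ≈ 4.77, above the 5% critical value of 3.841 for 1 df, i.e. p < 0.05.
```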

Keywords: head and neck cancer, synchronous and metachronous primaries, other primaries, Aboriginal

Procedia PDF Downloads 51
380 Formulation and Characterization of Antimicrobial Herbal Mouthwash from Some Herbal Extracts for Treatment of Periodontal Diseases

Authors: Reenu Yadav, Abhay Asthana, S. K. Yadav

Abstract:

Purpose: The aim of the present work was to develop an oral gel for brushing with antimicrobial activity that will cure or protect against various periodontal diseases such as periodontitis, gingivitis, and pyorrhea. Methods: Plant materials were procured from local suppliers, extracted and standardized. Screening of antimicrobial activity was carried out with the disk diffusion method. The gel was formulated from dried extracts of Butea monosperma and Cordia obliqua. Gels were evaluated on various parameters and standardization of the formulation was performed. The release of drugs was studied at pH 6.8 using a mastication device. Total phenolic and flavonoid contents were estimated by the Folin-Ciocalteu and aluminium chloride methods, and stability studies were performed (40°C and RH 75% ± 5% for 90 days) to assess the effect of temperature and humidity on the concentration of phenolic and flavonoid contents. The results under accelerated stability conditions were compared with those of samples kept under controlled conditions; the control samples were kept at room temperature (25°C, 35% RH for 180 days). Results: The results are encouraging; the extracts possess significant antimicrobial activity at very low concentrations (15 µg/disc, 20 µg/disc and 15 µg/disc) against oral pathogenic bacteria. The formulation has optimal characteristics as well as a pleasant appearance, fragrance, texture, and taste, and is highly acceptable to the volunteers. The diffusion coefficient values ranged from 0.6655 to 0.9164. Since the R² values of the Korsmeyer-Peppas model were close to 1, drug release from the formulation follows matrix diffusion kinetics; hence, diffusion is the mechanism of drug release, and the formulation follows a non-Fickian transport mechanism. Most formulations released 50% of their contents within 25-30 minutes. Results obtained from the accelerated stability studies indicate a slight reduction in flavonoid and phenolic contents over long-term storage. 
When degradation was measured under ambient conditions, it was significantly lower than in the accelerated stability study. Conclusion: The plant extracts possess compounds with antimicrobial properties that can be used in dental care. The developed formulation will cure or protect against various periodontal diseases. Further development and evaluation of the oral gel, including the isolated compounds, at commercial scale, together with clinical and toxicological studies, are the future challenges.
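
The Korsmeyer-Peppas analysis mentioned above amounts to a log-log linear fit. An illustrative sketch: the release data below are hypothetical; only the model form, Mt/Minf = k·tⁿ, comes from the abstract. An exponent 0.45 < n < 0.89 is the usual indicator of non-Fickian (anomalous) transport.

```python
import math

def korsmeyer_peppas_fit(times, fractions):
    """Fit log(Mt/Minf) = log(k) + n*log(t) by ordinary least squares;
    return (k, n)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    mean_x, mean_y = sum(xs) / m, sum(ys) / m
    n = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    k = math.exp(mean_y - n * mean_x)
    return k, n

# Hypothetical release profile reaching ~50% near 30 min, as in the abstract;
# synthetic data generated with known k = 0.064, n = 0.6 so the fit is exact.
times = [5, 10, 15, 20, 25, 30]            # minutes
fracs = [0.064 * t ** 0.6 for t in times]  # fraction released
k, n = korsmeyer_peppas_fit(times, fracs)
```

With real, noisy release data the recovered n would carry uncertainty, and the R² of this regression is the statistic the abstract refers to.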

Keywords: herbal gel, dental care, ambient conditions, commercial scale

Procedia PDF Downloads 424
379 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm has an underestimation problem, which leads to estimation error in the covariance matrix and causes beam distortion, so that the output pattern cannot form a dot-shape beam; it also suffers from main-lobe deviation and a high sidelobe level in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace and the corresponding eigenvalues. 
Finally, the correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reduce the divergence of small eigenvalues in the noise subspace, and improve the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better performance.
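
The eigenvalue-correction step described above can be sketched in isolation. The correction law λ' = λ^(1/γ) below is an assumption: the abstract only states that small noise-subspace eigenvalues are corrected "exponentially" via a correction index to reduce their divergence.

```python
# Hedged sketch of exponential correction of noise-subspace eigenvalues.
# Assumption: correction law lambda' = lambda ** (1 / gamma), gamma > 1,
# which compresses the spread of the small eigenvalues.
def correct_noise_eigenvalues(eigvals, gamma):
    """Apply the assumed exponential correction to positive eigenvalues."""
    return [lam ** (1.0 / gamma) for lam in eigvals]

def spread(eigvals):
    """Ratio of largest to smallest eigenvalue (a divergence measure)."""
    return max(eigvals) / min(eigvals)

# Noise eigenvalues estimated from few snapshots diverge widely:
noise = [1.0, 0.25, 0.04]
corrected = correct_noise_eigenvalues(noise, gamma=2.0)
# spread drops from 25.0 to 5.0, stabilising the adaptive weights that are
# built from the inverse of the reconstructed covariance matrix.
```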

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 108
378 Estimating Affected Croplands and Potential Crop Yield Loss of an Individual Farmer Due to Floods

Authors: Shima Nabinejad, Holger Schüttrumpf

Abstract:

Farmers living in flood-prone areas such as coasts are exposed to storm surges, which are intensified by climate change. Crop cultivation is the most important economic activity of farmers, and during flooding, agricultural lands are subject to inundation. Additionally, overflowing saline water causes more severe damage than riverine flooding. Agricultural crops are more vulnerable to salinity than other land uses, so the economic damages may continue for a number of years even after flooding and affect farmers' decision-making for the following year. Therefore, it is essential to assess to what extent the agricultural areas are flooded and how large the associated flood damage to each individual farmer is. To address these questions, we integrated farmers' decision-making at the farm scale with flood risk management. The integrated model includes identification of hazard scenarios, failure analysis of structural measures, derivation of hydraulic parameters for the inundated areas and analysis of the economic damages experienced by each farmer. The present study has two aims: firstly, it investigates the flooded cropland and potential crop damages for the whole area; secondly, it compares them among farmers' fields for three flood scenarios, which differ in the breach locations of the flood protection structure. To achieve this goal, the spatial distribution of farmers' fields and cultivated crops was fed into the flood risk model, and a 100-year storm surge hydrograph was selected as the flood event. The study area was Pellworm Island, located in the German Wadden Sea National Park and surrounded by the North Sea. Due to the high salt content of North Sea water, crops cultivated in the agricultural areas of Pellworm Island are 100% destroyed by storm surges, which was taken into account in developing the depth-damage curve for the consequence analysis. 
As a result, inundated croplands and economic damages to crops were estimated for the whole island and further compared among six selected farmers under three flood scenarios. The results demonstrate the significance and flexibility of the proposed model for flood risk assessment of flood-prone areas by integrating flood risk management and decision-making.
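
Because saline storm-surge water destroys 100% of an inundated crop, the per-farmer damage aggregation implied above reduces to summing area times crop value over flooded fields. A minimal sketch; the field areas, prices, and depths below are hypothetical, not values from the study.

```python
# Per-farmer crop loss under the 100%-destruction depth-damage assumption:
# a field contributes its full area * value whenever its inundation depth > 0.
def farmer_crop_loss(fields):
    """Sum losses over a farmer's fields.
    Each field: (area_ha, crop_value_per_ha, inundation_depth_m)."""
    return sum(area * value
               for area, value, depth in fields
               if depth > 0.0)

fields = [(4.0, 1200.0, 0.8),   # flooded -> total loss
          (2.5, 1500.0, 0.0),   # dry -> no loss
          (1.0, 900.0, 0.3)]    # flooded -> total loss
loss = farmer_crop_loss(fields)  # 4*1200 + 1*900 = 5700.0
```

Running this per farmer and per breach scenario yields exactly the kind of farmer-level comparison the study reports.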

Keywords: crop damages, flood risk analysis, individual farmer, inundated cropland, Pellworm Island, storm surges

Procedia PDF Downloads 239
377 An Examination of Earnings Management by Publicly Listed Targets Ahead of Mergers and Acquisitions

Authors: T. Elrazaz

Abstract:

This paper examines accrual and real earnings management by publicly listed targets around mergers and acquisitions. Prior literature shows that earnings management around mergers and acquisitions can have a significant economic impact because of the associated wealth transfers among stakeholders. More importantly, acting on behalf of their shareholders or pursuing their self-interests, managers of both targets and acquirers may be equally motivated to manipulate earnings prior to an acquisition to generate higher gains for their shareholders or themselves. Building on the grounds of information asymmetry, agency conflicts, stewardship theory, and the revelation principle, this study addresses the question of whether takeover targets employ accrual and real earnings management in the periods prior to the announcement of Mergers and Acquisitions (M&A). Additionally, this study examines whether acquirers are able to detect targets' earnings management and, in response, adjust the acquisition premium paid in order not to face the risk of overpayment. This study uses an aggregate accruals approach to estimate accrual earnings management, proxied by estimated abnormal accruals. Additionally, real earnings management is proxied for by employing widely used models from the accounting and finance literature. The results of this study indicate that takeover targets manipulate their earnings using accruals in the second year with an earnings release prior to the announcement of the M&A. Moreover, when the sample of targets is partitioned according to the method of payment used in the deal, the results are restricted only to targets of stock-financed deals. These results are consistent with the argument that targets of cash-only or mixed-payment deals do not have the same strong motivations to manage their earnings as their stock-financed counterparts do, additionally supporting the findings of prior studies that the method of payment in takeovers is value-relevant. 
The findings of this study also indicate that takeover targets manipulate earnings upwards through cutting discretionary expenses the year prior to the acquisition while they do not do so by manipulating sales or production costs. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, the results are restricted only to targets of stock-financed deals, providing further robustness to the results derived under the accrual-based models. Finally, this study finds evidence suggesting that acquirers are fully aware of the accrual-based techniques employed by takeover targets and can unveil such manipulation practices. These results are robust to alternative accrual and real earnings management proxies, as well as controlling for the method of payment in the deal.
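
The abnormal-accruals proxy above is, mechanically, a regression residual: accruals unexplained by an expectation model of "normal" accruals. An illustrative sketch only; the single-regressor model and the numbers below are hypothetical, whereas published work (and presumably this study) uses richer models such as the modified Jones model with several asset-scaled regressors.

```python
# Abnormal accruals as OLS residuals from a (hypothetical) one-regressor
# expectation model of normal accruals: accruals = a + b * d_revenue + e.
def ols_residuals(x, y):
    """Residuals of a simple OLS regression y = a + b*x."""
    m = len(x)
    mean_x, mean_y = sum(x) / m, sum(y) / m
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    a = mean_y - b * mean_x
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Hypothetical firm-years: accruals and revenue changes scaled by lagged assets.
revenue_change = [0.10, 0.05, 0.20, 0.15]
total_accruals = [0.04, 0.01, 0.09, 0.12]
abnormal = ols_residuals(revenue_change, total_accruals)
# Unusually large positive residuals flag potential upward earnings management.
```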

Keywords: accrual earnings management, acquisition premium, real earnings management, takeover targets

Procedia PDF Downloads 96
376 Comparative Comparison (Cost-Benefit Analysis) of the Costs Caused by the Earthquake and Costs of Retrofitting Buildings in Iran

Authors: Iman Shabanzadeh

Abstract:

Earthquake is known as one of the most frequent natural hazards in Iran. Therefore, policy making to improve the strengthening of structures is one of the requirements of the approach to prevent and reduce the risk of the destructive effects of earthquakes. In order to choose the optimal policy in the face of earthquakes, this article tries to examine the cost of financial damages caused by earthquakes in the building sector and compare it with the costs of retrofitting. In this study, the results of adopting the scenario of "action after the earthquake" and the policy scenario of "strengthening structures before the earthquake" have been collected, calculated and finally analyzed by putting them together. Methodologically, data received from governorates and building retrofitting engineering companies have been used. The scope of the study is earthquakes occurred in the geographical area of Iran, and among them, eight earthquakes have been specifically studied: Miane, Ahar and Haris, Qator, Momor, Khorasan, Damghan and Shahroud, Gohran, Hormozgan and Ezgole. The main basis of the calculations is the data obtained from retrofitting companies regarding the cost per square meter of building retrofitting and the data of the governorate regarding the power of earthquake destruction, the realized costs for the reconstruction and construction of residential units. The estimated costs have been converted to the value of 2021 using the time value of money method to enable comparison and aggregation. The cost-benefit comparison of the two policies of action after the earthquake and retrofitting before the earthquake in the eight earthquakes investigated shows that the country has suffered five thousand billion Tomans of losses due to the lack of retrofitting of buildings against earthquakes. 
Based on data from Iran's Budget Law, this figure was approximately twice the 2021 budget of the Ministry of Roads and Urban Development and five times that of the Islamic Revolution Housing Foundation. The results show that the policy of retrofitting structures before an earthquake is significantly more optimal than the competing scenario. The comparison of the two policy scenarios examined in this study shows that the policy of retrofitting buildings before an earthquake, on the one hand, prevents huge losses and, on the other hand, by increasing the number of earthquake-resistant houses, reduces the amount of earthquake destruction. This is in addition to other positive effects of retrofitting, such as the reduction of mortality due to the earthquake resistance of buildings and the reduction of other economic and social effects caused by earthquakes. Together, these can prove the cost-effectiveness of the policy scenario of "strengthening structures before earthquakes" in Iran.
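
The time-value-of-money conversion used above to make costs from different earthquake years comparable is simple compounding. A sketch; the annual rate below is hypothetical, as the abstract does not report the rate the authors applied.

```python
# Compound a historical cost forward to its 2021 equivalent:
# value_2021 = cost * (1 + r) ** (2021 - year).
def to_2021_value(cost, year, annual_rate):
    """Convert a cost incurred in `year` to 2021 money at `annual_rate`."""
    return cost * (1.0 + annual_rate) ** (2021 - year)

# e.g. a 100-unit loss incurred in 2016, compounded at a hypothetical 20%/year:
value_2021 = to_2021_value(100.0, 2016, 0.20)  # 100 * 1.2**5 ≈ 248.8
```

Only after this conversion can losses from earthquakes decades apart be summed into a single aggregate figure.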

Keywords: disaster economy, earthquake economy, cost-benefit analysis, resilience

Procedia PDF Downloads 36
375 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter involved in the resilience assessment is the 'downtime'. Downtime is the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in the downtime estimation are usually handled using probabilistic methods, which necessitates acquiring large amounts of historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in a linguistic or a numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building's vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity, etc.). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient for estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3). 
In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to already-defined component repair times available in the literature. DT2 and DT3 are estimated using the REDi Guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at improving the current methodology to pass from the downtime to the resilience of buildings. This will provide a simple tool that can be used by the authorities for decision-making.
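
The final combination step can be sketched as follows. How the three components are combined is not spelled out in the abstract; a common assumption (not the authors' stated method) is that delays (DT2) elapse before repairs start, while utility disruption (DT3) runs in parallel with the repair work (DT1).

```python
# Hedged sketch: combine the three downtime components under the assumption
# that delays are sequential and utility restoration is parallel to repairs.
def total_downtime(dt1_repair, dt2_delays, dt3_utilities):
    """Total downtime in days: delays first, then the longer of repair
    and utility restoration governs."""
    return dt2_delays + max(dt1_repair, dt3_utilities)

# e.g. 90 days of repairs, 45 days of delays, 30 days without utilities:
dt = total_downtime(90.0, 45.0, 30.0)  # 45 + max(90, 30) = 135.0 days
```

Repeating the calculation with the repair times for each recovery state (re-occupancy, functional, full) yields the three state-specific downtimes the method reports.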

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 145
374 A Lower Dose of Topiramate with Enough Antiseizure Effect: A Realistic Therapeutic Range of Topiramate

Authors: Seolah Lee, Yoohyk Jang, Soyoung Lee, Kon Chu, Sang Kun Lee

Abstract:

Objective: The International League Against Epilepsy (ILAE) currently suggests a topiramate serum level range of 5-20 mg/L. However, numerous institutions have observed substantial drug response at lower levels. This study aims to investigate the correlation between topiramate serum levels, drug responsiveness, and adverse events to establish a more accurate and tailored therapeutic range. Methods: We retrospectively analyzed topiramate serum samples collected between January 2017 and January 2022 at Seoul National University Hospital. Clinical data, including serum levels, antiseizure regimens, seizure frequency, and adverse events, were collected. Patient responses were categorized as "insufficient" (reduction in seizure frequency <50%) or "sufficient" (reduction ≥50%). Within the "sufficient" group, further subdivisions included seizure-free and tolerable seizure subgroups. A population pharmacokinetic model estimated serum levels from spot measurements. ROC curve analysis determined the optimal serum level cut-off. Results: A total of 389 epilepsy patients, with 555 samples, were reviewed, with a mean dose of 178.4±117.9 mg/day and a mean serum level of 3.9±2.8 mg/L. Of the samples, only 5.6% (n=31) exhibited an insufficient response, with a mean serum level of 3.6±2.5 mg/L. In contrast, 94.4% (n=524) of samples demonstrated a sufficient response, with a mean serum level of 4.0±2.8 mg/L. This difference was not statistically significant (p = 0.45). Among the 78 reported adverse events, logistic regression analysis identified a significant association between ataxia and serum concentration (p = 0.04), with an optimal cut-off value of 6.5 mg/L. In the subgroup of patients receiving monotherapy, those in the tolerable seizure group exhibited a significantly higher serum level compared to the seizure-free group (4.8±2.0 mg/L vs 3.4±2.3 mg/L, p < 0.01). 
Notably, patients in the tolerable seizure group displayed a higher likelihood of progressing into drug-resistant epilepsy during follow-up visits compared to the seizure-free group. Significance: This study proposed an optimal therapeutic concentration for topiramate based on the patient's responsiveness to the drug and the incidence of adverse effects. We employed a population pharmacokinetic model and analyzed topiramate serum levels to recommend a serum level below 6.5 mg/L to mitigate the risk of ataxia-related side effects. Our findings also indicated that topiramate dose elevation is unnecessary for suboptimal responders, as the drug's effectiveness plateaus at minimal doses.
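
The ROC cut-off selection described above (reported optimum: 6.5 mg/L for ataxia) can be sketched with a standard criterion. The serum levels and outcomes below are synthetic, and the cut-off here is chosen by maximising the Youden index J = sensitivity + specificity - 1, a common choice, though the study's exact criterion is not stated.

```python
# Pick the threshold on a continuous marker that maximises the Youden index.
def youden_optimal_cutoff(levels, events):
    """Return the candidate threshold maximising TPR - FPR.
    `levels`: marker values; `events`: True if the adverse event occurred."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(levels)):
        tp = sum(1 for l, e in zip(levels, events) if l >= cut and e)
        fn = sum(1 for l, e in zip(levels, events) if l < cut and e)
        fp = sum(1 for l, e in zip(levels, events) if l >= cut and not e)
        tn = sum(1 for l, e in zip(levels, events) if l < cut and not e)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        if tpr - fpr > best_j:
            best_j, best_cut = tpr - fpr, cut
    return best_cut

# Synthetic cohort: ataxia clusters above 6.5 mg/L.
levels = [2.0, 3.5, 4.0, 5.0, 6.5, 7.0, 8.0, 9.0]
ataxia = [False, False, False, False, True, True, False, True]
cut = youden_optimal_cutoff(levels, ataxia)  # 6.5
```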

Keywords: topiramate, therapeutic range, low dose, antiseizure effect

Procedia PDF Downloads 39
373 A Comparative Analysis of an All-Optical Switch Using Chalcogenide Glass and Gallium Arsenide Based on Nonlinear Photonic Crystal

Authors: Priyanka Kumari Gupta, Punya Prasanna Paltani, Shrivishal Tripathi

Abstract:

This paper proposes a nonlinear photonic crystal ring resonator-based all-optical 2 × 2 switch. The nonlinear Kerr effect is used to evaluate the essential 2 × 2 components of the photonic crystal-based optical switch, including the bar and cross states. The photonic crystal comprises a two-dimensional square lattice of dielectric rods in an air background, and two different rod materials are compared in this study: first chalcogenide glass, then GaAs. For both materials, the operating wavelength, bandgap diagram, operating power intensities, and performance parameters of the optical switch, such as the extinction ratio, insertion loss, and cross-talk, have been estimated using the plane wave expansion and finite-difference time-domain methods. The chalcogenide glass material (Ag20As32Se48) has a high refractive index of 3.1, which is highly suitable for switching operations. This dielectric material is immersed in an air background and has a nonlinear Kerr coefficient of 9.1 × 10⁻¹⁷ m²/W. The resonance wavelength is at 1552 nm, with operating power intensities at the cross state and bar state of around 60 W/μm² and 690 W/μm². The extinction ratio, insertion loss, and cross-talk values for the chalcogenide glass at the cross state are 17.19 dB, 0.051 dB, and -17.14 dB, and at the bar state the values are 11.32 dB, 0.025 dB, and -11.35 dB, respectively. The gallium arsenide (GaAs) dielectric material has a high refractive index of 3.4 and is a direct-bandgap semiconductor widely preferred nowadays for switching operations. This dielectric material is immersed in an air background and has a nonlinear Kerr coefficient of 3.1 × 10⁻¹⁶ m²/W. The resonance wavelength is at 1558 nm, with operating power intensities at the cross state and bar state of around 110 W/μm² and 200 W/μm². 
The extinction ratio, insertion loss, and cross-talk values for GaAs at the cross state are found to be 3.36 dB, 2.436 dB, and -5.8 dB, and for the bar state the values are 15.60 dB, 0.985 dB, and -16.59 dB, respectively. This paper proposes an all-optical 2 × 2 switch based on a nonlinear photonic crystal using a ring resonator. The two-dimensional photonic crystal comprises a square lattice of dielectric rods in an air background, and the resonance wavelength lies within the photonic bandgap. The widely used material GaAs is also considered, and its performance is compared with that of the chalcogenide glass. The presented structure is potentially applicable in optical integrated circuits and information processing.
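
The dB figures of merit quoted above follow from standard textbook definitions. A sketch; the port powers below are hypothetical, chosen only to show the sign conventions (insertion loss small and positive, cross-talk negative).

```python
import math

# Standard switch figures of merit in decibels (textbook forms, not taken
# from the paper): powers are linear, normalised to unit input.
def extinction_ratio_db(p_on, p_off):
    """ER = 10*log10(P_on / P_off) at a given output port."""
    return 10.0 * math.log10(p_on / p_off)

def insertion_loss_db(p_in, p_out):
    """IL = -10*log10(P_out / P_in) through the desired path."""
    return -10.0 * math.log10(p_out / p_in)

def crosstalk_db(p_unwanted, p_wanted):
    """CT = 10*log10(P_unwanted / P_wanted); negative when leakage is small."""
    return 10.0 * math.log10(p_unwanted / p_wanted)

# Hypothetical port powers:
er = extinction_ratio_db(0.95, 0.02)   # ≈ 16.8 dB
il = insertion_loss_db(1.0, 0.95)      # ≈ 0.22 dB
ct = crosstalk_db(0.02, 0.95)          # ≈ -16.8 dB
```

Note that with these two-port definitions ER and CT are equal in magnitude and opposite in sign, matching the near-mirrored pairs (e.g. 17.19 dB and -17.14 dB) reported for the chalcogenide device.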

Keywords: photonic crystal, FDTD, ring resonator, optical switch

Procedia PDF Downloads 62
372 Network Analysis to Reveal Microbial Community Dynamics in the Coral Reef Ocean

Authors: Keigo Ide, Toru Maruyama, Michihiro Ito, Hiroyuki Fujimura, Yoshikatu Nakano, Shoichiro Suda, Sachiyo Aburatani, Haruko Takeyama

Abstract:

Understanding environmental systems is an important task. In recent years, the conservation of coral environments has been a focus of biodiversity efforts. Damage to coral reefs under environmental impacts has been observed worldwide. However, the causal relationship between coral damage and environmental impacts has not been clearly understood. On the other hand, the structure and diversity of the marine bacterial community may be relatively robust under a certain strength of environmental impact. To evaluate coral environment conditions, it is necessary to investigate the relationship between the marine bacterial composition in coral reefs and environmental factors. In this study, Time Scale Network Analysis was developed and applied to marine environmental data to investigate the relationships among coral, bacterial community compositions and environmental factors. Seawater samples were collected fifteen times from November 2014 to May 2016 at two locations, Ishikawabaru and South of Sesoko, on Sesoko Island, Okinawa. Physicochemical factors such as temperature, photosynthetically active radiation, dissolved oxygen, turbidity, pH, salinity, chlorophyll, dissolved organic matter and depth were measured in the coral reef area. The metagenome and metatranscriptome in the seawater of the coral reef were analyzed as biological factors. Metagenome data were used to clarify the marine bacterial community composition, and functional gene composition was estimated from the metatranscriptome. To infer the relationships between physicochemical and biological factors, cross-correlation analysis was applied to the time-scale data. Although cross-correlation coefficients usually carry time-precedence information, they also include indirect interactions between the variables. To elucidate the direct regulations between the factors, partial correlation coefficients were combined with cross-correlation. 
This analysis was performed over all parameters: the bacterial composition, the functional gene composition and the physicochemical factors. As a result, the time scale network analysis revealed direct regulation of seawater temperature by photosynthetically active radiation. In addition, the concentration of dissolved oxygen regulated the chlorophyll value. These plausible regulatory relationships between environmental factors reveal part of the mechanisms at work in the coral reef area.
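The combination of lagged cross-correlation with partial correlation described in the abstract can be sketched as follows. This is a minimal illustration under assumed variable names (`par` for radiation, `temp` for temperature), not the authors' implementation; a real analysis would scan many lags across all variable pairs.

```python
import numpy as np

def lagged_crosscorr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out a confounder z,
    to separate direct regulation from indirect interactions."""
    z = np.column_stack([np.ones_like(z), z])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
par = rng.normal(size=200)                           # e.g. radiation series
temp = np.roll(par, 1) + 0.1 * rng.normal(size=200)  # lagged response + noise
print(lagged_crosscorr(par, temp, 1))                # high at lag 1
```

A strong lagged cross-correlation that survives partialling out the other measured factors is the kind of evidence the abstract treats as a direct regulation.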

Keywords: coral environment, marine microbiology, network analysis, omics data analysis

Procedia PDF Downloads 237
371 Study of Polyphenol Profile and Antioxidant Capacity in Italian Ancient Apple Varieties by Liquid Chromatography

Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana

Abstract:

Safeguarding, studying and enhancing biodiversity play an important and indispensable role in re-launching agriculture. Ancient local varieties are therefore a precious resource for genetic and health improvement. In order to protect biodiversity through the recovery and valorization of autochthonous varieties, in this study we analyzed 12 samples of four ancient apple cultivars representative of Friuli Venezia Giulia, selected by local farmers engaged in a project for the recovery of ancient apple cultivars. The aim of this study is to evaluate the polyphenolic profile and the antioxidant capacity that characterize the organoleptic and functional qualities of this fruit species, in addition to its beneficial properties for health. In particular, for each variety the following compounds were analyzed, both in the skin and in the pulp: gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, ferulic acid, rutin, phlorizin, phloretin and quercetin, to highlight any differences between the edible parts of the apple. The analysis of individual phenolic compounds was performed by High Performance Liquid Chromatography (HPLC) coupled with a diode array UV detector (DAD); the antioxidant capacity was estimated using an in vitro assay based on a free radical scavenging method, and the total phenolic content was determined using the Folin-Ciocalteu method. The results show that catechins are the most abundant polyphenols, reaching 140-200 μg/g in the pulp and 400-500 μg/g in the skin, with a prevalence of epicatechin. Catechins and phlorizin, a dihydrochalcone typical of apples, are always present in larger quantities in the peel. Total phenolic content was positively correlated with antioxidant activity in apple pulp (r² = 0.850) and peel (r² = 0.820). Comparing the results, differences emerged between the varieties analyzed and between the edible parts (pulp and peel) of the apple.
In particular, the apple peel is richer in polyphenolic compounds than the pulp, and flavonols are present exclusively in the peel. In conclusion, polyphenols, being antioxidant substances, confirm the benefits of fruit in the diet, especially in the prevention and treatment of degenerative diseases. They also proved to be a good marker for the characterization of different apple cultivars. The study likewise highlights the importance of protecting biodiversity in agriculture through the exploitation of native products and ancient, now-forgotten apple varieties.
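The reported r² values come from a simple linear correlation between total phenolic content and antioxidant capacity. A sketch with purely illustrative numbers (not the study's data):

```python
import numpy as np

# Hypothetical total-phenolic and antioxidant-capacity values for a
# handful of samples; the figures are illustrative only.
phenolics = np.array([120.0, 150.0, 180.0, 210.0, 260.0, 300.0])  # μg/g
antiox    = np.array([0.55, 0.63, 0.74, 0.82, 0.97, 1.10])        # assay units

r = np.corrcoef(phenolics, antiox)[0, 1]  # Pearson correlation
r2 = r ** 2                               # coefficient of determination
print(f"r = {r:.3f}, r^2 = {r2:.3f}")
```

An r² near 0.85, as reported for the pulp, means about 85% of the variance in antioxidant activity is explained by total phenolic content.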

Keywords: apple, biodiversity, polyphenols, antioxidant activity, HPLC-DAD, characterization

Procedia PDF Downloads 120
370 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation

Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau

Abstract:

In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space–time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package ‘lori’ (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.
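The R package 'lori' implements the full penalized low-rank Poisson model; as a rough illustration of the underlying idea only, the sketch below fits a main-effects log-linear Poisson model (log mu_ij = c + a_i + b_j, i.e. site and time effects) with a ridge penalty by gradient descent and fills missing counts with the fitted means. Every name and optimization setting here is an assumption for illustration, not the package's API, and the space-time interaction term of the real method is omitted.

```python
import numpy as np

def impute_counts(Y, lam=0.1, lr=0.01, iters=2000):
    """Impute missing entries (np.nan) of a site-by-year count matrix with a
    ridge-penalized log-linear Poisson model: log mu_ij = c + a_i + b_j."""
    Y = np.asarray(Y, float)
    mask = ~np.isnan(Y)
    m, n = Y.shape
    c, a, b = np.log(np.nanmean(Y) + 1.0), np.zeros(m), np.zeros(n)
    for _ in range(iters):
        mu = np.exp(c + a[:, None] + b[None, :])
        # Poisson NLL gradient w.r.t. the linear predictor, on observed cells
        g = np.where(mask, mu - np.nan_to_num(Y), 0.0)
        c -= lr * g.sum() / mask.sum()
        a -= lr * (g.sum(axis=1) / n + lam * a)
        b -= lr * (g.sum(axis=0) / m + lam * b)
    mu = np.exp(c + a[:, None] + b[None, :])
    return np.where(mask, Y, mu)  # keep observed counts, fill the gaps
```

On a matrix with multiplicative row/column structure, the missing cell is filled with a value consistent with that structure, which is the core intuition behind model-based imputation of count surveys.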

Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa

Procedia PDF Downloads 127
369 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on the detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor product quality and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location parameters, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of a Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, estimators robust to contaminations that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale and the M-estimator of scale with logistic psi-function to estimate the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions to estimate the process location parameter.
Phase I efficiency of the proposed estimators and Phase II performance of the Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. We find that robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and that Xbar charts constructed from robust estimators have higher power in detecting disturbances than conventional methods. Additionally, using individuals charts to screen outlier subgroups, and employing different combinations of dispersion and location estimators on subgroups and individual observations, are found to improve the performance of Xbar charts.
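As one concrete example of the robust estimators listed above, a Hodges-Lehmann location estimate paired with a MAD-based scale can replace the sample mean and standard deviation when computing Xbar limits. This is a simplified sketch under assumed function names; the study also examines Qn, Harrell-Davis and M-estimators, which are not reproduced here.

```python
import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann estimator: median of all pairwise averages (Walsh averages)."""
    x = np.asarray(x, float)
    i, j = np.triu_indices(len(x))  # pairs with i <= j, self-pairs included
    return np.median((x[i] + x[j]) / 2.0)

def mad_scale(x):
    """Median absolute deviation, scaled to be consistent for normal data."""
    x = np.asarray(x, float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def xbar_limits(subgroups, loc_est=hodges_lehmann, scale_est=mad_scale, k=3.0):
    """Xbar chart limits from an (m, n) array of m rational subgroups of size n."""
    data = np.asarray(subgroups, float)
    n = data.shape[1]                    # subgroup size
    center = loc_est(data.ravel())
    sigma = scale_est(data.ravel())
    half = k * sigma / np.sqrt(n)        # 3-sigma limits for subgroup means
    return center - half, center, center + half
```

Passing `np.mean` and the sample standard deviation as `loc_est`/`scale_est` recovers the conventional chart, which makes robust and classical limits easy to compare on contaminated Phase I data.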

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 171
368 Response of Caldeira De Tróia Saltmarsh to Sea Level Rise, Sado Estuary, Portugal

Authors: A. G. Cunha, M. Inácio, M. C. Freitas, C. Antunes, T. Silva, C. Andrade, V. Lopes

Abstract:

Saltmarshes are essential ecosystems from both an ecological and a biological point of view. Furthermore, they constitute an important social niche, providing valuable economic and protection functions. Thus, understanding their rates and patterns of sedimentation is critical for functional management and rehabilitation, especially under a sea level rise (SLR) scenario. The Sado estuary is located 40 km south of Lisbon. It is a bar-built estuary, separated from the sea by a large sand spit, the Tróia barrier. Caldeira de Tróia is located on the free edge of this barrier and encompasses a salt marsh of ca. 21,000 m². Sediment cores were collected in the high and low marshes and in the mudflat area of the north bank of Caldeira de Tróia. From the low-marsh core, fifteen samples were chosen for ²¹⁰Pb and ¹³⁷Cs determination at the University of Geneva. The cores from the high marsh and the mudflat are still being analyzed. A sedimentation rate of 2.96 mm/year was derived from ²¹⁰Pb using the Constant Flux Constant Sedimentation model. The ¹³⁷Cs profile shows a peak in activity (1963) between 15.50 and 18.50 cm, giving a 3.1 mm/year sedimentation rate for the past 53 years. The adopted sea level rise scenario was based on a model with an initial SLR rate of 2.1 mm/year in 2000 and an acceleration of 0.08 mm/year². Based on the harmonic analysis of the Setubal-Tróia tide gauge data of 2005, a tide model was estimated and used to build tide tables for the period 2000-2016. From these tables, the mean water levels were determined for the same time span. A digital terrain model (DTM) was created from LIDAR scanning with 2 m horizontal resolution (APA-DGT, 2011) and validated with altimetric data obtained with DGPS-RTK. The response model calculates a new elevation for each pixel of the DTM for 2050 and 2100, based on the sedimentation rate specific to each environment.
At this stage, theoretical values were chosen for the high marsh and the mudflat (respectively, equal to and double the low-marsh rate of 2.92 mm/year). These values will be rectified once sedimentation rates are determined for the other environments. In both projections the total surface of the marsh decreases: by 2% in 2050 and by 61% in 2100. Additionally, the high-marsh coverage diminishes significantly, indicating a regression in terms of maturity.
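The ¹³⁷Cs rate and the SLR scenario above reduce to simple arithmetic. The sketch below uses the abstract's figures: taking the midpoint of the peak interval gives roughly 3.2 mm/year, close to the reported 3.1 mm/year (the exact value depends on the depth adopted for the 1963 peak). One common reading of the SLR scenario, constant acceleration of the rise rate, is also shown; treat it as an assumption about the model, not a statement of the authors' exact formulation.

```python
# 137Cs dating: depth of the 1963 fallout peak over the elapsed time.
peak_top_cm, peak_bottom_cm = 15.50, 18.50
peak_mid_cm = (peak_top_cm + peak_bottom_cm) / 2  # 17.0 cm
years = 2016 - 1963                               # 53 years to sampling
rate_mm_per_yr = peak_mid_cm * 10 / years
print(f"sedimentation rate ~ {rate_mm_per_yr:.1f} mm/year")

def slr_mm(year, r0=2.1, a=0.08, t0=2000):
    """Cumulative sea-level rise since t0, assuming the rate r0 (mm/yr)
    in 2000 grows linearly with acceleration a (mm/yr^2)."""
    t = year - t0
    return r0 * t + 0.5 * a * t ** 2

print(f"SLR by 2100 ~ {slr_mm(2100):.0f} mm")  # 610 mm under these assumptions
```

Under this reading, the projected rise by 2100 comfortably exceeds the accretion implied by the measured sedimentation rates, which is consistent with the projected loss of marsh surface.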

Keywords: ¹³⁷Cs, ²¹⁰Pb, saltmarsh, sea level rise, response model

Procedia PDF Downloads 231
367 Glycosaminoglycan, a Cartilage Erosion Marker in Synovial Fluid of Osteoarthritis Patients Strongly Correlates with WOMAC Function Subscale

Authors: Priya Kulkarni, Soumya Koppikar, Narendrakumar Wagh, Dhanshri Ingle, Onkar Lande, Abhay Harsulkar

Abstract:

Cartilage is an extracellular matrix rich in aggrecan, which imparts great tensile strength, stiffness and resilience. Disruption of cartilage metabolism leading to progressive degeneration is a characteristic feature of osteoarthritis (OA). The process involves enzymatic depolymerisation of cartilage-specific proteoglycan, releasing free glycosaminoglycan (GAG). GAG released into the synovial fluid (SF) of the knee joint serves as a direct measure of cartilage loss; its use is limited, however, by the invasive nature of SF sampling. The Western Ontario and McMaster Universities Arthritis Index (WOMAC) is widely used for assessing pain, stiffness and physical function in OA patients. The scale comprises three subscales, namely pain, stiffness and physical function, and is intended to measure the patient's perspective of disease severity as well as the efficacy of the prescribed treatment. Twenty SF samples obtained from OA patients were analysed for their GAG content using a DMMB-based assay. The LK 1.0 vernacular version was used to administer the WOMAC scale. The results were evaluated for statistical significance using SAS University software (Edition 1.0). All OA patients showed higher GAG values than the control value of 78.4±30.1 µg/ml (obtained from our non-OA patients). The average WOMAC score was 51.3, while the pain, stiffness and function subscales averaged 9.7, 3.9 and 37.7, respectively. Interestingly, a strong statistical correlation was established between the WOMAC function subscale and GAG (p = 0.0102). This subscale is based on day-to-day activities such as stair use, bending, walking, getting in and out of a car, and rising from bed. The pain and stiffness subscales, however, did not correlate with any of the studied markers, underscoring the atypical inflammation of OA pathology. While knee pain showed poor correlation with GAG, it is also often noted that radiography is insensitive to cartilage degenerative changes; thus OA remains undiagnosed for long periods.
Moreover, the active cartilage degradation phase remains elusive to both patient and clinician. Through the analysis of a large number of OA patients, we have established a close association between Kellgren-Lawrence grades and increased cartilage loss. A direct attempt to correlate WOMAC and the radiographic progression of OA with various biomarkers has not been made so far. We found a good correlation between GAG levels in SF and the function subscale.
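Associations like the one reported between GAG and the WOMAC function subscale are typically tested with a rank correlation. The form of Spearman's rho can be sketched as below with purely hypothetical GAG and WOMAC function values; note that this simplified version does not average the ranks of tied values.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Simplified sketch that assumes no tied values."""
    rx = np.argsort(np.argsort(x))  # rank of each observation
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical values, for illustration only (not the study's data).
gag_ug_ml = [95, 110, 130, 150, 170, 200, 240]
womac_fn  = [20, 24, 29, 31, 38, 41, 50]
print(spearman_rho(gag_ug_ml, womac_fn))  # 1.0 for perfectly monotone data
```

Because it depends only on ranks, the statistic is insensitive to the skewed distributions that biomarker concentrations such as GAG often show.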

Keywords: cartilage, Glycosaminoglycan, synovial fluid, western ontario and McMaster Universities Arthritis Index

Procedia PDF Downloads 425
366 Functionalization of Carbon-Coated Iron Nanoparticles with Fluorescent Protein

Authors: A. G. Pershina, P. S. Postnikov, M. E. Trusova, D. O. Burlakova, A. E. Sazonov

Abstract:

The invention of magnetic-fluorescent nanocomposites is a rapidly developing area of research. Their attractiveness lies in the ability to manage and monitor such nanocomposites simultaneously by two independent methods based on different physical principles. These nanocomposites are applied to solve various essential scientific and biomedical problems. The aim of this research is to develop a principled approach to the design of nanobiohybrid structures with magnetic and fluorescent properties. The surface of carbon-coated iron nanoparticles (Fe@C) was covalently modified with 4-carboxybenzenediazonium tosylate. Recombinant fluorescent protein TagGFP2 (Eurogen) was obtained in E. coli (Rosetta DE3) by standard laboratory techniques. Immobilization of TagGFP2 on the nanoparticle surface was achieved by carbodiimide activation. The amount of COOH groups on the nanoparticle surface was estimated by elemental analysis (Elementar Vario Macro) and TGA analysis (SDT Q600, TA Instruments). The obtained nanocomposites were analyzed by FTIR spectroscopy (Nicolet Thermo 5700) and fluorescence microscopy (AxioImager M1, Carl Zeiss). The amount of protein immobilized on the modified nanoparticle surface was determined by fluorimetry (Cary Eclipse) and spectrophotometry (Unico 2800) using previously obtained calibration plots. In the FTIR spectra of the modified nanoparticles, the absorption band of the –COOH group around 1700 cm⁻¹ and bands in the region of 450-850 cm⁻¹ caused by bending vibrations of the benzene ring were observed. The calculated quantity of active groups on the surface was 0.1 mmol/g of material. Carbodiimide activation of the COOH groups on the nanoparticle surface leads to covalent immobilization of the TagGFP2 fluorescent protein (0.2 nmol/mg). The success of the immobilization was confirmed by FTIR spectroscopy.
Protein characteristic absorption bands in the region of 1500-1600 cm⁻¹ (amide I) were present in the FTIR spectrum of the nanocomposite. Fluorescence microscopy showed that the Fe@C-TagGFP2 nanocomposite possesses fluorescent properties, confirming that the TagGFP2 protein retains its conformation after immobilization on the nanoparticle surface. The magnetic-fluorescent nanocomposite was thus obtained through a unique design solution: fluorescent protein molecules were fixed to the surface of superparamagnetic carbon-coated iron nanoparticles using original diazonium salts.

Keywords: carbon-coated iron nanoparticles, diazonium salts, fluorescent protein, immobilization

Procedia PDF Downloads 324
365 Development and Validation of a Semi-Quantitative Food Frequency Questionnaire for Use in Urban and Rural Communities of Rwanda

Authors: Phenias Nsabimana, Jérôme W. Some, Hilda Vasanthakaalam, Stefaan De Henauw, Souheila Abbeddou

Abstract:

Tools for dietary assessment in adults are limited in low- and middle-income settings. The objective of this study was to develop a semi-quantitative food frequency questionnaire (FFQ) for use in urban and rural Rwanda and to validate it against the multiple-pass 24-h recall tool. A total of 212 adults (154 females and 58 males) aged 18-49 years, including 105 urban and 107 rural residents from the four regions of Rwanda, were recruited for the present study. The multiple-pass 24-h recall technique was used to collect dietary data in both urban and rural areas in four rounds on different days (one weekday and one weekend day), separated by periods of three months, from November 2020 to October 2021. Details of all foods and beverages consumed over the 24-h period of the day prior to the interview were collected during face-to-face interviews. A list of foods, beverages and commonly consumed recipes was developed by the study researchers and ten research assistants from the different regions of Rwanda. Non-standard recipes were collected when the information was available. A single semi-quantitative FFQ was also developed in the same group discussions prior to the beginning of data collection. The FFQ was administered at the beginning and the end of the data collection period. Data were collected digitally. The amount of energy and macronutrients contributed by each food, recipe and beverage will be computed from the nutrient compositions reported in food composition tables and the weights consumed. Median energy and nutrient intakes from the FFQ and the 24-h recalls, and the median differences (24-h recall minus FFQ), will be calculated. Kappa, Spearman, Wilcoxon and Bland-Altman plot statistics will be used to evaluate the agreement between the nutrient and energy intakes estimated by the two methods. Differences will be tested for significance, and all analyses will be performed with STATA 11.
Data collection was completed in November 2021. Data cleaning is ongoing, and data analysis is expected to be completed by July 2022. A developed and validated semi-quantitative FFQ will then be available for dietary assessment. It will help researchers collect reliable data to support policy makers in planning appropriate dietary change interventions in Rwanda.
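Of the planned agreement statistics, the Bland-Altman limits of agreement are the most direct to compute. A minimal sketch with hypothetical energy intakes; the variable names and figures are illustrative, not study data.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman statistics: mean bias and 95% limits of agreement
    for paired measurements from two methods."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()                 # systematic difference between methods
    sd = diff.std(ddof=1)              # spread of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical energy intakes (kcal/day): 24-h recalls vs the FFQ.
recall = [1850, 2100, 1720, 2400, 1980, 2250]
ffq    = [1900, 2050, 1800, 2300, 2050, 2200]
bias, lo, hi = bland_altman(recall, ffq)
print(f"bias = {bias:.0f} kcal, LoA = ({lo:.0f}, {hi:.0f})")
```

A bias near zero with narrow limits of agreement would support using the FFQ in place of the more burdensome repeated 24-h recalls.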

Keywords: food frequency questionnaire, reproducibility, 24-H recall questionnaire, validation

Procedia PDF Downloads 119