Search results for: exponential smoothing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 470

80 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management

Authors: Michael McCann

Abstract:

Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects in order to keep control of resources. To conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post ‘good’ performances and safeguard their position. On the other, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers will use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, as well as the relative tenures of independent directors and Chief Executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if they had discretionary accruals in the bottom quartile (downwards) or top quartile (upwards) of the distributed values for the sample. Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow which is the major influence on the probability of earnings management: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company, industry and situation-specific factors.
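
A minimal sketch of the two-step design described above (quartile classification of discretionary accruals, then a logistic regression) is given below, on synthetic data with hypothetical variable names; it is not the authors' estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 51 * 11                                   # 51 firms x 11 annual periods, as in the study
panel = pd.DataFrame({
    "disc_accruals": rng.normal(0, 1, n),     # discretionary accruals (synthetic)
    "indep_dir_tenure": rng.uniform(1, 12, n),
    "free_cash_flow": rng.normal(0.05, 0.03, n),
})

# Firm-years in the top or bottom quartile of discretionary accruals are
# classified as conducting (upward or downward) earnings management.
q1, q3 = panel["disc_accruals"].quantile([0.25, 0.75])
panel["em_flag"] = ((panel["disc_accruals"] <= q1) |
                    (panel["disc_accruals"] >= q3)).astype(int)

# Logistic regression of the earnings-management flag on tenure and free cash flow.
X = sm.add_constant(panel[["indep_dir_tenure", "free_cash_flow"]])
logit = sm.Logit(panel["em_flag"], X).fit(disp=0)
print(logit.get_margeff().summary())          # marginal effects on P(earnings management)
```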

Keywords: corporate governance, boards of directors, agency theory, earnings management

Procedia PDF Downloads 205
79 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco

Abstract:

Today, a consistent segment of the world's population lives in urban areas, and this proportion will vastly increase in the next decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate over the following years. The analysis of various types of data sets and their derived applications has incredible potential across different crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource to assist city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of different new layers of information would definitely enhance planners' capabilities to comprehend urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues in greater depth. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining a youth economic discomfort index. This statistical composite index provides insights regarding the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly display that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground of the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed data-driven decisions for urban planning.
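
A minimal sketch of the composite-index step follows, assuming min-max normalisation and equal weights; the zones and indicators below are invented placeholders, not the study's variables.

```python
import pandas as pd

# Hypothetical per-zone indicators of youth economic discomfort.
zones = pd.DataFrame({
    "zone": ["Centro", "EUR", "Ostia"],
    "youth_unemployment": [0.18, 0.11, 0.15],   # share of 18-29-year-olds unemployed
    "rent_to_income":     [0.52, 0.38, 0.41],
    "neet_rate":          [0.22, 0.12, 0.17],
})

indicators = ["youth_unemployment", "rent_to_income", "neet_rate"]
# Min-max normalise each indicator so that they are comparable, then average.
norm = (zones[indicators] - zones[indicators].min()) / \
       (zones[indicators].max() - zones[indicators].min())
zones["discomfort_index"] = norm.mean(axis=1)   # equal weights as a baseline
print(zones[["zone", "discomfort_index"]].sort_values("discomfort_index",
                                                      ascending=False))
```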

Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index

Procedia PDF Downloads 112
78 Nonstationary Modeling of Extreme Precipitation in the Wei River Basin, China

Authors: Yiyuan Tao

Abstract:

Under the impact of global warming together with the intensification of human activities, hydrological regimes may be altered, and the traditional assumption of stationarity may no longer be satisfied. However, most current design standards for water infrastructure are still based on the hypothesis of stationarity, which may inevitably result in severe biases. Many critical impacts of climate on ecosystems, society, and the economy are controlled by extreme events rather than mean values. Therefore, it is of great significance to identify non-stationarity in precipitation extremes and to model precipitation extremes in a nonstationary framework. The Wei River Basin (WRB), located in a continental monsoon climate zone in China, is selected as a case study. Six extreme precipitation indices were employed to investigate the changing patterns and stationarity of precipitation extremes in the WRB. To identify whether precipitation extremes are stationary, the Mann-Kendall trend test and the Pettitt test, which detects the occurrence of abrupt changes, are adopted in this study. The extreme precipitation index series are fitted with nonstationary distributions selected from six widely used distribution functions (Gumbel, lognormal, Weibull, gamma, generalized gamma and exponential) by means of the time-varying moments framework of generalized additive models for location, scale and shape (GAMLSS), in which the distribution parameters are defined as functions of time. The results indicate that: (1) trends were not significant for the WRB as a whole, but significant positive/negative trends were still observed at some stations; abrupt changes in consecutive wet days (CWD) mainly occurred in 1985, and the assumption of stationarity is invalid for some stations; (2) for the nonstationary extreme precipitation index series with significant positive/negative trends, the GAMLSS models capture the temporal variations of the indices well and perform better than the stationary model. Finally, the differences between the quantiles of the nonstationary and stationary models are analyzed, which highlights the importance of nonstationary modeling of precipitation extremes in the WRB.
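
As a hedged illustration of the GAMLSS idea, the sketch below fits a Gumbel distribution whose location parameter varies linearly with time by maximum likelihood, on synthetic annual maxima; the full GAMLSS machinery (smooth terms, models for scale and shape) is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
years = np.arange(1960, 2020)
t = years - years.min()
# Synthetic annual precipitation maxima with a weak upward trend in location.
x = gumbel_r.rvs(loc=30 + 0.1 * t, scale=8, random_state=rng)

def nll(params):
    mu0, mu1, log_sigma = params              # location linear in time, log-scale
    return -gumbel_r.logpdf(x, loc=mu0 + mu1 * t,
                            scale=np.exp(log_sigma)).sum()

fit = minimize(nll, x0=[x.mean(), 0.0, np.log(x.std())], method="Nelder-Mead")
mu0, mu1, log_sigma = fit.x
print(f"location trend: {mu1:.3f} per year, scale: {np.exp(log_sigma):.2f}")
```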

Keywords: extreme precipitation, GAMLSS, non-stationary, Wei River Basin

Procedia PDF Downloads 100
77 Bacteriophage Lysis of Physiologically Stressed Listeria monocytogenes in a Simulated Seafood Processing Environment

Authors: Geevika J. Ganegama Arachchi, Steve H. Flint, Lynn McIntyre, Cristina D. Cruz, Beatrice M. Dias-Wanigasekera, Craig Billington, J. Andrew Hudson, Anthony N. Mutukumira

Abstract:

In seafood processing plants, Listeria monocytogenes (L. monocytogenes) likely exists in a metabolically stressed state due to the nutrient-deficient environment; processing treatments such as heating, curing, drying, and freezing; and exposure to detergents and disinfectants. Stressed L. monocytogenes cells have been shown to be as pathogenic as unstressed cells. This study investigated the lytic efficacy of three phages (LiMN4L, LiMN4p, and LiMN17), previously characterized as virulent, against physiologically stressed cells of three seafood-borne L. monocytogenes strains (19CO9, 19DO3, and 19EO3). Physiologically compromised cells of the L. monocytogenes strains were prepared by aging cultures in Trypticase Soy Broth at 15±1°C for 72 h; heat-injuring cultures at 54±1-55±1°C for 40-60 min; starving cultures in Milli-Q water incubated at 25±1°C in darkness for three weeks; and salt-stressing cultures by incubation in 9% (w/v) NaCl at 15±1°C for 72 h. Low concentrations of physiologically compromised cells of the three L. monocytogenes strains were challenged in vitro with high titres of the three phages in separate experiments using Fish Broth medium (aqueous fish extract) at 15°C in order to mimic the environment of a seafood processing plant. Each phage, when present at ≈9 log10 PFU/ml, reduced late exponential phase cells of L. monocytogenes suspended in fish protein broth at ≈2-3 log10 CFU/ml to a non-detectable level (<10 CFU/ml). Each phage, when present at ≈8.5 log10 PFU/ml, reduced both heat-injured cells present at 2.5-3.6 log10 CFU/ml and starved cells, which showed a coccoid shape, present at ≈2-3 log10 CFU/ml to <10 CFU/ml after 30 min. Phages also reduced salt-stressed cells present at ≈3 log10 CFU/ml by >2 log10. L. monocytogenes (≈8 log10 CFU/ml) was reduced to below the detection limit (1 CFU/ml) by three successive phage infections over 16 h, indicating that the emergence of spontaneous phage resistance was infrequent. The three virulent phages showed high decontamination potential for physiologically stressed L. monocytogenes strains in seafood processing environments.

Keywords: physiologically stressed L. monocytogenes, heat injured, seafood processing environment, virulent phage

Procedia PDF Downloads 114
76 Photovoltaic Solar Energy in Public Buildings: A Showcase for Society

Authors: Eliane Ferreira da Silva

Abstract:

This paper aims to mobilize and sensitize public administration leaders to good practices and to encourage investment in photovoltaic (PV) systems in Brazil. It presents a case study methodology for dimensioning a PV system on the roofs of the public buildings of the Esplanade of the Ministries, Brasilia, the country's capital, with predefined resources, starting from the Sustainable Esplanade Project (SEP) and the exponential growth of photovoltaic solar energy in the world, and making a comparison with the solar power plant of the Ministry of Mines and Energy (MME), active since 6/10/2016. To do so, it was necessary to evaluate the energy efficiency of the buildings from January 2016 to April 2017 (16 months), identifying opportunities to reduce electricity expenses through adjustment of contracted demand, the tariff framework, and correction of existing active energy. The instrument used to collect electricity bill data was the e-SIC citizen information system. The study considered, in addition to the technical and operational aspects, the historical, cultural, architectural and climatic aspects involved, through several actors. Having identified the expense reductions, the study addressed the following aspects: Case 1, the economic feasibility of exchanging common lamps for LED lamps; and Case 2, the economic feasibility of implementing a grid-connected photovoltaic solar system. For Case 2, PV*SOL Premium software was used to simulate several possibilities of photovoltaic panels, analyzing the best performance according to local characteristics such as solar orientation, latitude, and annual average solar radiation. An ideal photovoltaic solar system was simulated, with due calculation of its yield, to compensate the energy expenditure of the building - or part of it - through the alternative source in question. The study develops a methodology for the public administration, as a major consumer of electricity, to act in a responsible, supervisory and incentivizing way in reducing energy waste, and consequently reducing greenhouse gases.
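
For orientation, a back-of-envelope sketch of the two feasibility cases is given below; every number in it (lamp counts, wattages, tariff, irradiation, efficiencies) is an assumed placeholder, not a value from the study.

```python
# Case 1: LED retrofit payback (all figures assumed for illustration).
n_lamps, old_w, led_w = 1000, 40, 18          # lamp count and wattages
hours_year, tariff = 2500, 0.15               # operating h/yr, USD/kWh
savings = n_lamps * (old_w - led_w) / 1000 * hours_year * tariff  # USD/yr
capex_led = n_lamps * 8.0                     # USD per lamp installed
print(f"LED payback: {capex_led / savings:.1f} years")

# Case 2: rough annual yield of a grid-connected rooftop PV system.
area_m2, eff, pr = 500, 0.18, 0.80            # roof area, module efficiency, performance ratio
ghi = 5.8 * 365                               # kWh/m2/yr, approximate for Brasilia
e_pv = area_m2 * eff * pr * ghi               # annual energy yield, kWh
print(f"Estimated PV yield: {e_pv / 1000:.0f} MWh/year")
```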

Keywords: energy efficiency, esplanade of ministries, photovoltaic solar energy, public buildings, sustainable building

Procedia PDF Downloads 111
75 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme values in a set of observations can occur due to unusual circumstances during observation. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The maximum-likelihood (ML) estimates of the Gumbel parameters cannot be determined exactly in closed form, so a numerical approach is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the change in parameter values at each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is further modified by replacing the Hessian with an approximation that is updated from the gradients at each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function; this requires the gradient vector and the Hessian matrix (or its approximation). This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The estimates indicate that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall that occurred decreased.
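
A minimal sketch of the approach follows: maximum-likelihood estimation of the Gumbel parameters via BFGS, using synthetic block maxima in place of the Purworejo rainfall data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
x = gumbel_r.rvs(loc=50, scale=12, size=200, random_state=rng)  # synthetic maxima

def neg_log_lik(params):
    mu, log_beta = params                     # log-parametrise so beta > 0
    return -gumbel_r.logpdf(x, loc=mu, scale=np.exp(log_beta)).sum()

# BFGS builds an approximation of the inverse Hessian from successive
# gradient differences, so no explicit second derivatives are needed.
res = minimize(neg_log_lik, x0=[x.mean(), np.log(x.std())], method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(f"mu = {mu_hat:.2f}, beta = {beta_hat:.2f}")
```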

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 300
74 Management of Acute Appendicitis with Preference on Delayed Primary Suturing of Surgical Incision

Authors: N. A. D. P. Niwunhella, W. G. R. C. K. Sirisena

Abstract:

Appendicitis is one of the most frequently encountered abdominal emergencies worldwide. Proper clinical diagnosis and appendicectomy with minimal postoperative complications are therefore priorities. The aim of this study was to ascertain the overall management of acute appendicitis in Sri Lanka, with special reference to delayed primary suturing of the surgical site, comparing outcomes with other local and international studies. Data were collected prospectively from 155 patients who underwent appendicectomy following clinical and radiological diagnosis with ultrasonography. Histological assessment was done for all specimens. All perforated appendices were managed with delayed primary closure. Patients were followed up for 28 days to assess complications. The mean age at presentation was 27 years; the mean pre-operative waiting time following admission was 24 hours; the average hospital stay was 72 hours; the accuracy of the clinical diagnosis of appendicitis, as confirmed by histology, was 87.1%; the postoperative wound infection rate was 8.3%, and among these cases 5% had perforated appendices; 4 patients had postoperative complications managed without re-opening. There was no fistula formation or mortality. The current study was compared with previously published data on the management of acute appendicitis in Sri Lanka versus the United Kingdom (UK). The diagnosis in the current study was equally accurate, but postoperative complications were significantly reduced (current study, 9.6%; compared Sri Lankan study, 16.4%; compared UK study, 14.1%). In recent years, there has been an exponential rise in the use of computerised tomography (CT) imaging in the assessment of patients with suspected acute appendicitis. Even so, the diagnostic accuracy achieved without CT and the treatment outcomes of acute appendicitis in this study match other local studies as well as the UK data; CT usage has therefore not significantly increased the diagnostic accuracy for acute appendicitis. In particular, delayed primary closure may have reduced the postoperative wound infection rate for perforated appendices; we therefore suggest this approach for further evaluation as a safe and effective practice in other hospitals worldwide.

Keywords: acute appendicitis, computerised tomography, diagnostic accuracy, delayed primary closure

Procedia PDF Downloads 130
73 A Preliminary Randomized Controlled Trial of Pure L-Ascorbic Acid Using Needle-Free and Micro-Needle Mesotherapy in Anti-Aging Treatment

Authors: M. Zasada, A. Markiewicz, A. Erkiert-Polguj, E. Budzisz

Abstract:

The epidermis is a keratinized stratified squamous epithelium covered by a hydro-lipid barrier. Therefore, active substances should be able to penetrate through this hydro-lipid coating. L-ascorbic acid is a vitamin which plays an important role in stimulating fibroblasts to produce collagen type I and in lightening hyperpigmentation. Vitamin C is a water-soluble antioxidant which protects the skin from oxidative damage and rejuvenates photoaged skin. No-needle mesotherapy is a non-invasive rejuvenation technique based on electric pulses, electroporation, and ultrasound; these physical factors result in deeper penetration of cosmetics. It is important to increase the penetration of L-ascorbic acid, thereby increasing the spectrum of its activity. The aim of the work was to assess the effectiveness of pure L-ascorbic acid in anti-aging therapy using needle-free and micro-needling mesotherapy. The study was performed on a group of 35 healthy volunteers in accordance with the Declaration of Helsinki of 1964 and with the agreement of the Ethics Commission no. RNN/281/16/KE 2017. Women were randomized to a mesotherapy or a control group. The control group applied topically 2.5 ml of serum containing 20% L-ascorbic acid with hydrate from strawberries every 10 days for a period of 9 weeks. In the mesotherapy group, no-needle mesotherapy was performed on the left half of the face and micro-needling on the right, with the same serum. The pH of the serum was 3.5-4, and the serum was prepared directly before the facial treatment. The skin parameters were measured at the beginning and before each treatment. The forehead skin was measured using a Cutometer® (skin elasticity and firmness), a Corneometer® (skin hydration), and a Mexameter® (skin tone). Photographs were also taken with the Fotomedicus system. Additionally, the volunteers completed a questionnaire. The serum was tested for microbiological purity and for stability after opening. During the study, all of the volunteers were under the care of a dermatologist. Regular application of the serum improved the skin parameters: after 4 and 8 weeks, respectively, improvements in hydration and elasticity were seen (Corneometer®, Cutometer® results). Moreover, the number of hyperpigmented spots decreased (Mexameter®). After 8 weeks, the volunteers reported that the tested product had smoothing and moisturizing properties, and subjective opinions indicated significant improvement in skin color and elasticity. The product containing L-ascorbic acid used with intercellular penetration promoters demonstrates higher anti-aging efficiency than the control. In vivo studies confirmed the effectiveness of the serum and the impact of the active substance on skin firmness and elasticity, the degree of hydration and skin tone. Mesotherapy with pure L-ascorbic acid provides better diffusion of the active substance through the skin.

Keywords: anti-aging, l-ascorbic acid, mesotherapy, promoters

Procedia PDF Downloads 247
72 Optimising Post-Process Heat Treatments of Selective Laser Melting-Produced Ti-6Al-4V Parts to Achieve Superior Mechanical Properties

Authors: Gerrit Ter Haar, Thorsten Becker, Deborah Blaine

Abstract:

The Additive Manufacturing (AM) process of Selective Laser Melting (SLM) has seen exponential growth in sales and development over the past fifteen years. Whereas the capability of SLM was initially limited to rapid prototyping, progress in research and development (R&D) has made SLM capable of producing fully functional parts. The technology is still at an early stage, and technical knowledge of the vast number of variables influencing final part quality is limited. Ongoing research and development of the sensitive printing process and post-processes is of utmost importance in order to qualify SLM parts to meet international standards. Quality concerns in Ti-6Al-4V manufactured through SLM have been identified, including high residual stresses, part porosity, low ductility and anisotropic mechanical properties. Whereas significant quality improvements have been made through optimizing printing parameters, research indicates as-produced part ductility to be a major limiting factor when compared to its wrought counterpart. This study aims at achieving an in-depth understanding of the underlying links between the microstructure of SLM-produced Ti-6Al-4V and its mechanical properties. Knowledge of the microstructural transformation kinetics of Ti-6Al-4V allows for the optimization of post-process heat treatments, thereby achieving the process route required to manufacture high-quality SLM-produced Ti-6Al-4V parts. The experimental methods used to evaluate the kinetics of microstructural transformation of SLM Ti-6Al-4V are optical microscopy and electron backscatter diffraction. Results show that a low-temperature heat treatment is capable of transforming the as-produced martensitic microstructure into a dual-phase microstructure exhibiting both high strength and improved ductility. Furthermore, isotropy of mechanical properties can be achieved through certain annealing routes. Mechanical properties identical to those of wrought Ti-6Al-4V can, therefore, be achieved through an optimized process route.

Keywords: EBSD analysis, heat treatments, microstructural characterisation, selective laser melting, tensile behaviour, Ti-6Al-4V

Procedia PDF Downloads 393
71 An Investigative Study into Good Governance in the Non-Profit Sector in South Africa: A Systems Approach Perspective

Authors: Frederick M. Dumisani Xaba, Nokuthula G. Khanyile

Abstract:

There is a growing demand for greater accountability, transparency and ethical conduct based on sound governance principles in the developing world. Funders, donors and sponsors are increasingly demanding more transparency, better value for money and adherence to good governance standards. The drive towards improved governance measures is largely influenced by the need to ‘plug the leaks’, deal with malfeasance, engender greater levels of accountability and good governance, and ultimately attract further funding or investment. This is the case with Non-Profit Organizations (NPOs) in South Africa in general, and in the province of KwaZulu-Natal in particular. The paper draws on good governance theory, stakeholder theory and systems thinking to critically examine the requirements for good governance in the NPO sector from a theoretical and legislative point of view, and to systematically examine the contours of governance currently found among NPOs. It does so through a rigorous examination of vignettes of governance cases among selected NPOs based in KwaZulu-Natal. The study used qualitative and quantitative research methodologies through document analysis, literature review, semi-structured interviews, focus groups and statistical analysis from various primary and secondary sources. It found some cases of good governance, but also frightening levels of poor governance. There was exponential growth in NPOs registered during the period under review; equally, there was an increase in cases of non-compliance with good governance practices. NPOs operate in an increasingly complex environment, with contestation for influence and access to resources, and stakeholder management is poorly conceptualized and executed. Recognizing that the NPO sector operates in an environment characterized by complexity, constant change, unpredictability, contestation, diversity and the divergent views of different stakeholders, there is a need to apply legislative and systems thinking approaches to strengthen governance to withstand this turbulence, through a capacity development model that recognizes these contextual and environmental challenges.

Keywords: good governance, non-profit organizations, stakeholder theory, systems theory

Procedia PDF Downloads 104
70 On the Survival of Individuals with Type 2 Diabetes Mellitus in the United Kingdom: A Retrospective Case-Control Study

Authors: Njabulo Ncube, Elena Kulinskaya, Nicholas Steel, Dmitry Pshezhetskiy

Abstract:

Life expectancy in the United Kingdom (UK) has been nearly constant since 2010, particularly for individuals aged 65 years and older. This trend has also been noted in several other countries. The slowdown in the increase of life expectancy was concurrent with an increase in the number of deaths caused by non-communicable diseases. Of particular concern is the worldwide exponential increase in the number of diabetes-related deaths. Previous studies have reported increased mortality hazards among diabetics compared to non-diabetics, and differing effects of antidiabetic drugs on mortality hazards. This study aimed to estimate the all-cause mortality hazards and related life expectancies among type 2 diabetes (T2DM) patients in the UK using a time-variant Gompertz-Cox model with frailty. The study also aimed to understand the major causes of the change in life expectancy growth in the last decade. A total of 221,182 individuals (30.8% T2DM, 57.6% male) aged 50 years and above, born between 1930 and 1960 inclusive and diagnosed between 2000 and 2016, were selected from The Health Improvement Network (THIN) database of UK primary care data and followed up to 31 December 2016. About 13.4% of participants died during the follow-up period. The overall all-cause mortality hazard ratio of T2DM compared to non-diabetic controls was 1.467 (1.381-1.558) for those diagnosed between 50 and 59 years and 1.38 (1.307-1.457) for those diagnosed between 60 and 74 years. The estimated life expectancies of T2DM individuals without further comorbidities diagnosed at the age of 60 years were 2.43 (1930-1939 birth cohort), 2.53 (1940-1949 birth cohort) and 3.28 (1950-1960 birth cohort) years less than those of non-diabetic controls. However, the 1950-1960 birth cohort had a steeper hazard function than the 1940-1949 birth cohort for both T2DM and non-diabetic individuals. In conclusion, mortality hazards for people with T2DM continue to be higher than for non-diabetics. The steeper mortality hazard slope for the 1950-1960 birth cohort might indicate the sub-population contributing to the slowdown in the growth of life expectancy.
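
A simplified sketch of the survival model is given below: a Gompertz proportional-hazards likelihood with right censoring, fitted to simulated data (the frailty term and the time-variant extensions are omitted for brevity).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 2000
x = rng.integers(0, 2, n)                  # 1 = T2DM, 0 = non-diabetic control
a, b, beta = 0.01, 0.08, np.log(1.45)      # true simulation values (illustrative)

# Draw Gompertz survival times by inverting S(t) = exp(-(a/b)(e^{bt}-1) e^{beta x}).
u = rng.uniform(size=n)
t = np.log(1 - b * np.log(u) / (a * np.exp(beta * x))) / b
c = rng.uniform(5, 25, n)                  # administrative censoring times
time, event = np.minimum(t, c), (t <= c).astype(float)

def nll(p):
    log_a, b_, bx = p
    if b_ <= 0:
        return np.inf
    a_ = np.exp(log_a)
    h = a_ * np.exp(b_ * time) * np.exp(bx * x)               # hazard
    H = a_ / b_ * (np.exp(b_ * time) - 1) * np.exp(bx * x)    # cumulative hazard
    return -(event * np.log(h) - H).sum()

res = minimize(nll, x0=[np.log(0.02), 0.05, 0.0], method="Nelder-Mead")
print(f"estimated hazard ratio for T2DM: {np.exp(res.x[2]):.2f}")  # ~1.45
```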

Keywords: T2DM, Gompertz-Cox model with frailty, all-cause mortality, life expectancy

Procedia PDF Downloads 104
69 Dynamic Modeling of the Impact of Chlorine on Aquatic Species in Urban Lake Ecosystem

Authors: Zhiqiang Yan, Chen Fan, Yafei Wang, Beicheng Xia

Abstract:

Urban lakes play an invaluable role in urban water systems, providing flood control, water supply, and public recreation. However, over 38% of urban lakes in China have suffered from severe eutrophication. Chlorine, which can remarkably inhibit the growth of phytoplankton in eutrophic waters, has been widely used in agriculture, aquaculture and industry in the recent past. However, little has been reported regarding the effects of chlorine on the lake ecosystem, especially on the main aquatic species. To investigate the ecological response of the main aquatic species and system stability to chlorine interference in shallow urban lakes, a mini system dynamics model was developed based on the competition and predation among the main aquatic species and the total phosphorus cycle. The main species of submerged macrophyte, phytoplankton, zooplankton, benthos and Spirogyra, together with total phosphorus in water and sediment, were used as model variables, while the interference of chlorine with phytoplankton was represented by an exponential attenuation equation. Furthermore, eco-exergy, expressing the development degree of the ecosystem, was used to quantify the complexity of the shallow urban lake. The model was validated using data collected in the Lotus Lake in Guangzhou from 1 October 2015 to 31 January 2016. The correlation coefficient (R), root mean square error-observations standard deviation ratio (RSR) and index of agreement (IOA) were calculated to evaluate the accuracy and reliability of the model. The simulated values showed good qualitative agreement with the measured values of all components. The model results showed that chlorine had a notable inhibitory effect on Microcystis aeruginosa, Brachionus plicatilis, Diaphanosoma brachyurum Liévin and Mesocyclops leuckarti (Claus). The outbreak of Spirogyra spp. inhibited the growth of Vallisneria natans (Lour.) Hara, leading to a gradual decrease in eco-exergy and the breakdown of the ecosystem's internal equilibria. This study gives important insight into using chlorine to achieve eutrophication control and into understanding the underlying mechanisms.
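
A toy version of the model's chlorine term is sketched below: logistic phytoplankton growth with an inhibition proportional to a chlorine dose that attenuates exponentially. All parameter values are illustrative, not calibrated to the Lotus Lake data.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.5, 10.0          # phytoplankton growth rate (1/day), carrying capacity (mg/L)
C0, k = 2.0, 0.3          # initial chlorine dose (mg/L), decay constant (1/day)
alpha = 0.8               # inhibition strength per unit chlorine

def rhs(t, y):
    P = y[0]
    C = C0 * np.exp(-k * t)                  # exponential attenuation of chlorine
    return [r * P * (1 - P / K) - alpha * C * P]

sol = solve_ivp(rhs, (0, 60), [1.0], t_eval=np.linspace(0, 60, 121))
print(f"phytoplankton after 60 days: {sol.y[0, -1]:.2f} mg/L")
```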

Keywords: system dynamic model, urban lake, chlorine, eco-exergy

Procedia PDF Downloads 209
68 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study

Authors: Mohamed H. Khalil

Abstract:

Water is one of the most crucial elements influencing human lives and development. Notably, over the last few years, GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced ‘GIS-Web Based System’. This system is efficiently designed to tangibly assist in and optimize the complementation and integration of data between the Call Center, Operation and Maintenance, and Laboratory departments. The core of this system is a unified ‘Data Model’ for all the spatial and tabular data of the corresponding departments. The system is professionally built to provide advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing capabilities, enhanced data retrieval, integrated workflow, different access levels, and correlative information record/tracking. This cost-effective system contributes significantly not only to the completeness of the base map (93%) and the water network (87%) in high-level-of-detail GIS format and to enhancement of customer service performance, but also to reducing operating costs and day-to-day operations (~5-10%). In addition, the proposed system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and Laboratory), which allows a better understanding and analysis of complex situations. Furthermore, the system is tangibly reflected in: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.); (ii) improved effectiveness of the different water departments; (iii) efficient, deep, advanced analysis; (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annual); (v) tangible planning synthesizing spatial and tabular data; and finally, (vi) a scalable decision support system. It is worth highlighting that the proposed future plan (second phase) of this system encompasses scalability extending to integration with the Billing and SCADA departments. This scalability will add advanced functionalities to the existing ones to allow further sustainable contributions.

Keywords: GIS Web-Based, base-map, water network, decision support system

Procedia PDF Downloads 63
67 Sustainable Development Change within Our Environs

Authors: Akinwale Adeyinka

Abstract:

Critical natural resources such as clean groundwater, fertile topsoil, and biodiversity are diminishing at an exponential rate, orders of magnitude above that at which they can be regenerated. With over 6 billion people on earth and almost a quarter million added each day, the scale of human activity and environmental impact is unprecedented. Soaring human population growth over the past century has created a visible challenge to earth's life support systems. In addition, the world faces an onslaught of other environmental threats, including degenerated global climate change, global warming, intensified acid rain, stratospheric ozone depletion and health-threatening pollution. Overpopulation and the use of deleterious technologies combine to increase the scale of human activities to a level that underlies all these problems. These intensifying trends cannot continue indefinitely; hopefully, through increased understanding and valuation of ecosystems and their services, earth's basic life-support system will be protected for the future. In fact, human civilization is now the dominant cause of change in the global environment. Now that our relationship to the earth has changed so utterly, we have to see that change and understand its implications. There are actually two aspects to this challenge. The first is to realize that our power to harm the earth can indeed have global and even permanent effects. The second is to realize that the only way to understand our new role as a co-architect of nature is to see ourselves as part of a complex system that does not operate according to the same simple rules of cause and effect we are used to. Understanding the physical and biological dimensions of the earth system is therefore an important precondition for making sensible policy to protect our environment, because we believe sustainable development is a matter of reconciling respect for the environment, social equity and economic profitability. We also strongly believe that environmental protection is naturally about reducing air and water pollution, but that it also includes improving the environmental performance of existing processes. That is why we should always keep at the heart of our business the idea that the environmental problem is not so much our effect on the environment as our relationship with the environment. We should always think of being environmentally friendly in our operations.

Keywords: stratospheric ozone depletion, climate change, global warming, social equity, economic profitability

Procedia PDF Downloads 320
66 Spectroscopic Studies and Reddish Luminescence Enhancement with the Increase in Concentration of Europium Ions in Oxy-Fluoroborate Glasses

Authors: Mahamuda Sk, Srinivasa Rao Allam, Vijaya Prakash G.

Abstract:

Oxy-fluoroborate glasses of composition 60B2O3-10BaF2-10CaF2-15CaF2-(5-x)Al2O3-xEu2O3, where x = 0.1, 0.5, 1.0 and 2.0 mol%, doped with different concentrations of Eu3+ ions, were prepared by the conventional melt quenching technique and characterized through absorption, photoluminescence (PL), decay, color chromaticity and confocal measurements. The absorption spectra of all the glasses consist of six peaks corresponding to the transitions 7F0→5D2, 7F0→5D1, 7F1→5D1, 7F1→5D0, 7F0→7F6 and 7F1→7F6, respectively. The experimental oscillator strengths, with and without thermal corrections, were evaluated using the absorption spectra. Judd-Ofelt (JO) intensity parameters (Ω2 and Ω4) were evaluated from the photoluminescence spectra of all the glasses. PL spectra of all the glasses were recorded at excitation wavelengths of 395 nm (conventional excitation source) and 410 nm (diode laser) to observe the intensity variation in the PL spectra. All the spectra consist of five emission peaks corresponding to the transitions 5D0→7FJ (J = 0, 1, 2, 3 and 4). Surprisingly, no concentration quenching is observed in the PL spectra. Among all the glasses, the glass with 2.0 mol% Eu3+ ion concentration possesses the maximum intensity for the 5D0→7F2 transition (612 nm) in the bright red region. The JO parameters derived from the photoluminescence spectra were used to evaluate the essential radiative properties, such as transition probability (A), radiative lifetime (τR), branching ratio (βR) and peak stimulated emission cross-section (σse), for the 5D0→7FJ (J = 0, 1, 2, 3 and 4) transitions of the Eu3+ ions. The decay rates of the 5D0 fluorescent level of Eu3+ ions in the title glasses are found to be single exponential for all the studied Eu3+ ion concentrations. A marginal increase in the lifetime of the 5D0 level is noticed as the Eu3+ ion concentration increases from 0.1 mol% to 2.0 mol%. Among all the glasses, the glass with 2.0 mol% Eu3+ ion concentration possesses the maximum values of branching ratio, stimulated emission cross-section and quantum efficiency for the 5D0→7F2 transition (612 nm) in the bright red region. The color chromaticity coordinates were also evaluated to confirm the reddish luminescence from these glasses; these coordinates fall exactly in the bright red region. Confocal images were also recorded to confirm the reddish luminescence. From all the results obtained in the present study, it is suggested that the glass with 2.0 mol% Eu3+ ion concentration is suitable for emitting bright red laser light.
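
To make the single-exponential decay analysis concrete, the sketch below extracts a fluorescence lifetime by fitting I(t) = I0·exp(-t/τ) to a decay curve; the data are synthetic stand-ins for the measured 5D0 decay traces.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 200)                       # time after excitation (ms)
tau_true = 1.8
I = 100 * np.exp(-t / tau_true) + rng.normal(0, 0.5, t.size)  # noisy decay trace

def decay(t, I0, tau):
    return I0 * np.exp(-t / tau)                  # single-exponential model

popt, _ = curve_fit(decay, t, I, p0=[I.max(), 1.0])
print(f"fitted lifetime tau = {popt[1]:.2f} ms")  # ~1.8 ms
```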

Keywords: Europium, Judd-Ofelt parameters, laser, luminescence

Procedia PDF Downloads 217
65 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The conflation is defined as the single distribution resulting from the normalized product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
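
Conflation has a convenient closed form for exponentials: the normalised product of two exponential densities with rates l1 and l2 is again exponential with rate l1 + l2. The sketch below verifies this numerically; the rates are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy.stats import expon
from scipy.integrate import trapezoid

l1, l2 = 0.5, 1.5                    # recovery rates: severe vs. nuisance events
x = np.linspace(0, 20, 4001)
product = expon.pdf(x, scale=1 / l1) * expon.pdf(x, scale=1 / l2)
conflated = product / trapezoid(product, x)       # normalise to a valid pdf

mean_recovery = trapezoid(x * conflated, x)
print(f"conflated mean recovery time: {mean_recovery:.3f}  "
      f"(analytic 1/(l1+l2) = {1 / (l1 + l2):.3f})")
```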

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 69
64 Climate Changes Impact on Artificial Wetlands

Authors: Carla Idely Palencia-Aguilar

Abstract:

Artificial wetlands play an important role in the Guasca Municipality in Colombia, not only because they are used for agroindustry, but also because more than 45 species were found there, some of which are endemic or migratory birds. Remote sensing was used to determine the changes in the area occupied by water in the artificial wetlands by means of Aster and MODIS images for different time periods. Evapotranspiration was also determined by three methods: the Surface Energy Balance System (SEBS, Su) algorithm, the Surface Energy Balance (SEBAL, Bastiaanssen) algorithm, and FAO potential evapotranspiration. Empirical equations were also developed to determine the relationship between the Normalized Difference Vegetation Index (NDVI) and net radiation, ambient temperature and rain, with an R2 of 0.83. Groundwater level fluctuations were studied on a daily basis as well. Data from a piezometer placed next to the wetland were fitted to rain changes (from two weather stations located in the proximity of the wetlands) by means of multiple regression and time series analysis; the R2 between the calculated and measured values was higher than 0.98. Information from nearby weather stations provided the input for ordinary kriging as well as for the Digital Elevation Model (DEM) developed using PCI software. Standard models (exponential, spherical, circular, Gaussian, linear) describing spatial variation were tested. Ordinary cokriging of the height and rain variables was also tested to determine whether the accuracy of the interpolation would increase. The results showed no significant differences: the mean result of the spherical function for the rain samples after ordinary kriging was 58.06 with a standard deviation of 18.06, while cokriging (using a spherical function for the rain variable, a power function for the height variable, and a spherical function for the cross variable rain-height) gave a mean of 57.58 and a standard deviation of 18.36. Threats of eutrophication were also studied, given the unawareness of neighbours and government deficiencies. Water quality was determined over the years, with different parameters studied to determine the chemical characteristics of the water. In addition, 600 pesticides were screened by gas and liquid chromatography. The results showed that coliforms, nitrogen, phosphorus and prochloraz were the most significant contaminants.
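
As an illustration of the ordinary kriging step, the sketch below interpolates rainfall from a handful of invented station records with a spherical variogram, using the community PyKrige package (the authors used PCI software, so this is a stand-in, not their workflow).

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Invented station coordinates (km) and rainfall observations (mm).
x = np.array([1.0, 3.2, 4.5, 6.1, 7.8])
y = np.array([2.1, 5.0, 1.2, 6.3, 3.4])
rain = np.array([58.0, 72.5, 49.3, 80.1, 61.7])

# Fit a spherical variogram and krige onto a regular grid.
ok = OrdinaryKriging(x, y, rain, variogram_model="spherical")
gridx = np.linspace(0, 9, 50)
gridy = np.linspace(0, 7, 50)
z_interp, var = ok.execute("grid", gridx, gridy)  # predictions and kriging variance

print(f"grid mean: {z_interp.mean():.1f} mm, max kriging variance: {var.max():.1f}")
```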

Keywords: DEM, evapotranspiration, geostatistics, NDVI

Procedia PDF Downloads 95
63 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem that involves constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally restricts the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue. Even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to synthesize grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets, considering grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to synthesize grade uncertainty into the strategic mine schedule and to produce a more profitable and risk-based production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model between the augmented Lagrangian relaxation (ALR) method and a metaheuristic algorithm, Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is employed to update the Lagrange coefficients. Besides, a machine learning method called Random Forest is applied to estimate the gold grade in the mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results show that the proposed versions are considerably improved in comparison with the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-sub-gradient approaches. To indicate the applicability of the model, a case study of an open-pit gold mining operation is implemented. The framework displays the capability to minimize risk and to improve the expected net present value and financial profitability of the LTPSOP, and it can control geological risk more effectively than the traditional procedure that ignores grade uncertainty.
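
A stripped-down sketch of the stochastic objective follows: the expected NPV minus a penalty on deviations from the production target, evaluated over equally probable orebody realizations. The optimizer itself (ALR + HHO) is not reproduced, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_real, n_blocks = 20, 500                   # 20 realizations, as in the study
grades = rng.lognormal(mean=0.0, sigma=0.4, size=(n_real, n_blocks))  # g/t per block

price, cost, target = 55.0, 30.0, 300.0      # $/g, $/block mined, target g per period
penalty = 2.0                                # $ per gram of deviation from target

def objective(mined):
    """Risk-adjusted objective for a 0/1 block-selection vector."""
    metal = grades[:, mined == 1].sum(axis=1)        # metal per realization
    npv = metal * price - mined.sum() * cost
    dev = np.abs(metal - target)                     # deviation from target
    return (npv - penalty * dev).mean()              # expectation over realizations

mined = (rng.uniform(size=n_blocks) < 0.5).astype(int)   # a candidate schedule
print(f"risk-adjusted objective: {objective(mined):.0f} $")
```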

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 79
62 Optimization of Maintenance of PV Module Arrays Based on Asset Management Strategies: Case of Study

Authors: L. Alejandro Cárdenas, Fernando Herrera, David Nova, Juan Ballesteros

Abstract:

This paper presents a methodology to optimize the maintenance of grid-connected photovoltaic systems, considering the cleaning and module replacement periods based on an asset management strategy. The methodology is based on the analysis of the energy production of the PV plant, the energy feed-in tariff, and the cost of cleaning and replacing the PV modules, with the overall revenue received being the optimization variable. The methodology is evaluated as a case study of a 5.6 kWp solar PV plant located on the Bogotá campus of the Universidad Nacional de Colombia. The asset management strategy implemented consists of assessing the PV modules through visual inspection, energy performance analysis, pollution, and degradation. In the visual inspection of the plant, the general condition of the modules and the structure is assessed, identifying dust deposition, visible fractures, and water accumulation on the bottom. The energy performance analysis is performed with the energy production reported by the monitoring systems, compared with the values estimated in simulation. The pollution analysis is performed using the soiling rate due to dust accumulation, which can be modelled as a black box with an exponential function dependent on historical pollution values. The soiling rate is calculated with data collected over two years of energy generation at a photovoltaic plant on the campus of the Universidad Nacional de Colombia. Additionally, the temperature degradation of the PV modules is assessed by estimating the cell temperature from parameters such as ambient temperature and wind speed. The medium-term energy decrease of the PV modules is assessed within the asset management strategy by calculating a health index to determine the replacement period of the modules due to degradation. This study proposes a tool for decision-making related to the maintenance of photovoltaic systems, in view of the projected increase in the installation of solar photovoltaic systems in power systems associated with the commitments made in the Paris Agreement for the reduction of CO2 emissions. In the Colombian context, it is estimated that by 2030, 12% of the installed power capacity will be solar PV.
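
A toy version of the cleaning-period trade-off is sketched below: revenue lost to an exponentially saturating soiling loss versus the cost of each cleaning. All parameters are assumptions, not the Bogotá campus measurements.

```python
import numpy as np

p_kw, tariff = 5.6, 0.12            # plant size (kWp), feed-in tariff ($/kWh)
sun_h = 4.5                         # equivalent sun hours per day
s_max, k = 0.12, 0.05               # max soiling loss fraction, soiling rate (1/day)
clean_cost = 5.0                    # $ per cleaning of this small plant

def daily_net(period_days):
    """Average net revenue per day over one cleaning cycle of the given length."""
    t = np.arange(period_days)
    soiling = s_max * (1 - np.exp(-k * t))        # loss fraction vs. days since cleaning
    revenue = (p_kw * sun_h * tariff * (1 - soiling)).sum()
    return (revenue - clean_cost) / period_days

periods = np.arange(5, 121)
best = periods[np.argmax([daily_net(p) for p in periods])]
print(f"best cleaning interval: {best} days")
```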

Keywords: asset management, PV module, optimization, maintenance

Procedia PDF Downloads 18
61 Electroactivity of Clostridium saccharoperbutylacetonicum 1-4N during Carbon Dioxide Reduction in a Bioelectrosynthesis System

Authors: Carlos A. Garcia-Mogollon, Juan C. Quintero-Diaz, Claudio Avignone-Rossa

Abstract:

Clostridium saccharoperbutylacetonicum 1-4N (Csb 1-4N) is an industrial reference strain for acetone-butanol-ethanol (ABE) fermentation. Csb 1-4N is a solventogenic clostridium and H₂ producer with a metabolic profile that makes it a good candidate for bioelectrosynthesis systems (BES). The aim of this study was to evaluate the electroactivity of Csb 1-4N by the cyclic voltammetry (CV) technique. The BES fermentation started in a tryptone-yeast extract (TY) medium with trace elements and vitamins, a complex nitrogen source (CNS), and bicarbonate (NaHCO₃, 4 g/L) as carbon source, run at -600 mV vs. Ag/AgCl with 200 µM NADH added. Six BES batches were performed with different media compositions, with and without NADH, CNS, HCO₃⁻ and applied potential. The CV was performed in a three-electrode system: a platinum-slice working electrode (WE), a nickel counter electrode (CE) and an Ag/AgCl reference electrode (RE). CVs were run in a potential range of -0.7 V to 0.7 V vs. Ag/AgCl at a scan rate of 10 mV/s. CVs were recorded using different NaHCO₃ concentrations (0.25, 0.5, 1.0 and 4 g/L). BES fermentation samples were centrifuged (3000 rpm, 5 min, 4°C), and the supernatant (7 mL) was used. CVs were obtained for cell-free supernatant of the Csb 1-4N BES culture at 0 h, 24 h and 48 h. The electrochemical analysis was carried out with a PalmSens 4.0 potentiostat/galvanostat controlled with the PStrace 5.7 software, and CV curves were characterized by reduction and oxidation currents and reduction and oxidation peaks. The CVs obtained for the NaHCO₃ solutions showed that the reduction and oxidation currents decreased as the NaHCO₃ concentration was decreased. All reduction and oxidation currents decreased until exponential growth stopped (24 h), independently of the initial cathodic current, except in the medium with trace elements, vitamins and NaHCO₃, in which the reduction current was about halved at 24 h and continued decreasing at 48 h. In this medium, Csb 1-4N did not grow, but the pH increased, indicating that NaHCO₃ was reduced as the reduction current decreased. In general, at 48 h the reduction currents did not present important changes between the different media in BES cultures. Peak intensities (Ip) did not present important variations, except that Ipa and Ipc in the BES culture with NaHCO₃ and NADH added were higher than the peaks in the other cultures. Based on these results, the cathodic and anodic current changes were induced by NaHCO₃ reduction reactions during Csb 1-4N metabolic activity in the different BES experiments.

Keywords: clostridium saccharoperbutylacetonicum 1-4N, bioelectrosynthesis, carbon dioxide fixation, cyclic voltammetry

Procedia PDF Downloads 114
60 Hydrothermal Aging Behavior of Continuous Carbon Fiber Reinforced Polyamide 6 Composites

Authors: Jifeng Zhang, Yongpeng Lei

Abstract:

Continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites are potential candidates for application in the automotive industry due to their high specific strength and stiffness. However, PA6 resin is sensitive to moisture in hydrothermal environments, and CF/PA6 composites may undergo several physical and chemical changes, such as plasticization, swelling, and hydrolysis, which induce a reduction of mechanical properties. So far, little research has been reported on the effects of hydrothermal aging on the mechanical properties of continuous CF/PA6 composites. This study deals with the effects of hydrothermal aging on the moisture absorption and mechanical properties of polyamide 6 (PA6) and continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites immersed in distilled water at 30°C, 50°C, 70°C, and 90°C. Degradation of mechanical performance was monitored as a function of water absorption content and aging temperature. The experimental results reveal that under the same aging conditions, the PA6 resin absorbs more water than the CF/PA6 composite, while the water diffusion coefficient of the CF/PA6 composite is higher than that of the PA6 resin because of interfacial diffusion channels. During mechanical degradation, an exponential reduction in tensile strength and elastic modulus is observed in the PA6 resin as aging temperature and water absorption content increase. The degradation trend of the flexural properties of CF/PA6 follows that of the tensile properties of the PA6 resin. Moreover, water content plays a decisive role in mechanical degradation compared with aging temperature. In contrast, the hydrothermal environment has a mild effect on the tensile properties of the CF/PA6 composites. The elongation at break of the PA6 resin and of CF/PA6 reaches its highest value when their water contents reach 6% and 4%, respectively. Dynamic mechanical analysis (DMA) and scanning electron microscopy (SEM) were also used to explain the mechanism of the changes in mechanical properties. After exposure to the hydrothermal environment, the Tg (glass transition temperature) of the samples decreases dramatically with increasing water content. This reduction can be ascribed to the plasticization effect of water. For the unaged specimens, the fiber surfaces are coated with resin and the main fracture mode is fiber breakage, indicating good adhesion between fiber and matrix. However, as the absorbed water content increases, the fracture mode transforms to fiber pullout. Finally, based on the Arrhenius methodology, a predictive model relating temperature and water content is presented to estimate the retention of mechanical properties of PA6 and CF/PA6.
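
A minimal sketch of an Arrhenius-style retention model follows, assuming first-order property loss with a rate k(T) = A·exp(-Ea/(R·T)); the constants are illustrative, not the fitted values of the study.

```python
import numpy as np

A, Ea, R = 5.0e6, 60_000.0, 8.314      # pre-factor (1/day), activation energy (J/mol), gas constant

def retention(t_days, temp_c):
    """Fraction of initial strength retained after t_days at temp_c."""
    k = A * np.exp(-Ea / (R * (temp_c + 273.15)))   # Arrhenius rate
    return np.exp(-k * t_days)                       # first-order decay

for T in (30, 50, 70, 90):             # the study's four immersion temperatures
    print(f"{T} C: retention after 60 days = {retention(60, T):.2f}")
```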

Keywords: continuous carbon fiber reinforced polyamide 6 composite, hydrothermal aging, Arrhenius methodology, interface

Procedia PDF Downloads 104
59 Occasional Word-Formation in Postfeminist Fiction: Cognitive Approach

Authors: Kateryna Nykytchenko

Abstract:

Modern fiction and non-fiction writers commonly use their own lexical and stylistic devices to capture a reader's attention and bring certain thoughts and feelings to the reader. Among such devices is one of the neologic notions: individual authorial formations, i.e. occasionalisms or nonce words. To a significant extent, a host of examples of new words occurs in the chick lit genre, which has experienced exponential growth in recent years. Chick lit is a new-millennial postfeminist fiction which focuses primarily on twenty- to thirty-something middle-class women. It brings into focus the image of 'a new woman' of the 21st century, who is always fallible and funny. This paper aims to investigate the different types of occasional word-formation which reflect cognitive mechanisms conveying women's perception of the world. The chick lit novels of the Irish author Marian Keyes present a genuinely innovative mixture of forms, both literary and non-literary, which is displayed in different types of occasional word-formation processes such as blending, compounding, creative respelling, etc. Crossing existing mental and linguistic boundaries and adapting herself to new and overlapping linguistic spaces, the chick lit author creates new words which demonstrate the development and progress of language and the relationship between language, thought and new reality, ultimately resulting in hybrid word-formation (e.g. affixation or pseudo-borrowing). Moreover, this article attempts to present the main characteristics of the chick lit genre with the help of Marian Keyes's novels and their influence on occasionalisms. There has been a lack of research concerning the cognitive nature of occasionalisms. The current paper intends to account for occasional word-formation as a set of interconnected cognitive mechanisms, operations and procedures melded together to create a new word. The results of the generalized analysis solidify the argument that the kind of new knowledge an occasionalism manifests is inextricably linked with the cognitive procedure underlying it, which results in a corresponding type of word-formation process. In addition, the findings of the study reveal that the necessity of creating occasionalisms in postmodern fiction arises from the need to write in a new way, keeping up with a perpetually developing world and thus with the evolution of the speaker herself and her perception of the world.

Keywords: Chick Lit, occasionalism, occasional word-formation, cognitive linguistics

Procedia PDF Downloads 161
58 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming, and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components imposes specific requirements on installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field in the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL), and achieves the effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent differences in mode-field distribution. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, particular defects and errors that can realistically occur, such as eccentricity, connector shift, or dust, were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
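
Encircled flux is the cumulative fraction of near-field power contained within a given radius of the core center; launch-condition standards such as IEC 61280-4-1 constrain its value at specific radii. A minimal sketch of how EF could be computed from a near-field camera image follows; the Gaussian test image, pixel pitch, and check radii are our illustrative assumptions, not the authors' measurement setup.

```python
import numpy as np

def encircled_flux(intensity, dx_um):
    """Encircled flux EF(r): cumulative fraction of near-field power within
    radius r of the power-weighted core center. `intensity` is a 2-D
    near-field image; dx_um is the pixel pitch in micrometres."""
    ny, nx = intensity.shape
    y, x = np.mgrid[0:ny, 0:nx]
    total = intensity.sum()
    cy, cx = (y * intensity).sum() / total, (x * intensity).sum() / total
    r = np.hypot(y - cy, x - cx).ravel() * dx_um
    order = np.argsort(r)
    cum = np.cumsum(intensity.ravel()[order])
    return r[order], cum / cum[-1]

# Illustrative Gaussian-like launch into a 50 um core (OM3/OM4 geometry)
yy, xx = np.mgrid[-64:64, -64:64]
img = np.exp(-(xx**2 + yy**2) / (2.0 * 20.0**2))
radii, ef = encircled_flux(img, dx_um=0.5)
# EF templates constrain the curve at given radii (e.g. 4.5 um and 19 um
# for 50 um fiber at 850 nm); here we simply report those two values.
for r_chk in (4.5, 19.0):
    print(f"EF({r_chk} um) = {ef[np.searchsorted(radii, r_chk)]:.3f}")
```

A launch that is too centered fails the inner-radius bound, while one that underfills the core fails the outer one, which is why EF is the natural statistic for the connector-shift and eccentricity defects studied here.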

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 353
57 Analyzing the Commentator Network Within the French YouTube Environment

Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes

Abstract:

To the best of our knowledge, YouTube is the largest video hosting platform in the world. A large number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns; for a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bipartite network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following from this question, we ask how these features can be used to predict an agent's choice to comment on one video rather than another, considering the characteristics of the commentators, videos, topics, channels, and recommendations. We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented on. The interest lies in discovering features which have not classically been considered markers of popularity on the platform. A quick view of our data set shows that 96% of commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of the 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video. The creation of a link is explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From this set of 391,588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1,000 subscribers and more than 4,000 hours of viewing time during the last twelve months). In the end, we have a data set of 128,462 videos across 4,093 channels. Based on these videos, we have a data set of 1,032,771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment each, and a maximum of 584 comments.
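
ERGMs are usually estimated with the R statnet/ergm suite; for a dyad-independent specification like the one described here, where link probability depends only on node covariates, estimation is equivalent to a logistic regression over all commentator-video pairs. The Python sketch below illustrates that reduced form on synthetic data; the network sizes, covariates, and effect size are illustrative assumptions, not the study's data.

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_videos, n_users = 200, 500   # illustrative sizes, far below the real data

# Hypothetical video covariates: duration (min), log views, log likes.
X_vid = np.column_stack([rng.uniform(1, 60, n_videos),
                         rng.normal(10, 2, n_videos),
                         rng.normal(6, 2, n_videos)])

# Build the bipartite commentator-video graph; popularity (log views) tilts
# each user's choice of videos - the effect the model should recover.
p = np.exp(0.4 * X_vid[:, 1]); p /= p.sum()
B = nx.Graph()
B.add_nodes_from((f"v{i}" for i in range(n_videos)), bipartite=0)
B.add_nodes_from((f"u{j}" for j in range(n_users)), bipartite=1)
for j in range(n_users):
    for i in rng.choice(n_videos, size=3, replace=False, p=p):
        B.add_edge(f"u{j}", f"v{i}")

# Dyad-independent approximation of the ERGM: a logistic regression of the
# presence/absence of every (user, video) link on the video covariates.
y = np.array([[B.has_edge(f"u{j}", f"v{i}") for i in range(n_videos)]
              for j in range(n_users)]).ravel()
X = np.tile(X_vid, (n_users, 1))             # row order matches y
coefs = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
print("estimated effects (duration, log views, log likes):", coefs)
```

Richer ERGM terms (degree distributions, shared-partner statistics) break this dyad-independence and require the full MCMC machinery, which is one reason the model class is computationally demanding at the scale reported here.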

Keywords: YouTube, social networks, economics, consumer behaviour

Procedia PDF Downloads 50
56 Aptamers: A Potential Strategy for COVID-19 Treatment

Authors: Mohamad Ammar Ayass, Natalya Griko, Victor Pashkov, Wanying Cao, Kevin Zhu, Jin Zhang, Lina Abi Mosleh

Abstract:

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the causative agent of coronavirus disease 2019 (COVID-19). Early evidence pointed to the angiotensin-converting enzyme 2 (ACE-2) receptor expressed on the epithelial cells of the lung as the main entry point of SARS-CoV-2 into cells. Viral entry is mediated by the binding of the receptor binding domain (RBD) of the spike protein, expressed on the surface of the virus, to the ACE-2 receptor. As the number of SARS-CoV-2 variants continues to increase, mutations arising in the RBD may render RBD-targeted neutralizing antibodies ineffective. To address this limitation, the objective of this study is to develop a combination of aptamers that target different regions of the RBD, preventing the binding of the spike protein to the ACE-2 receptor and subsequent viral entry and replication. A safe and innovative biomedical tool was developed to inhibit viral infection and reduce the harms of COVID-19. In the present study, DNA aptamers were developed against a recombinant trimeric S protein using Systematic Evolution of Ligands by Exponential Enrichment (SELEX). Negative selection was introduced at round 7 to select for aptamers that bind specifically to the RBD. A series of 9 aptamers (ADI2010, ADI2011, ADI201L, ADI203L, ADI205L, ADIR68, ADIR74, ADIR80, ADIR83) was selected and characterized, showing high binding affinity and specificity to the RBD of the spike protein. Aptamers ADI25, ADI2009, and ADI203L were able to bind and pull down endogenous spike protein expressed on the surface of the SARS-CoV-2 virus in COVID-19-positive patient samples, as determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The LC-MS/MS data confirmed that the aptamers can bind to the RBD of the spike protein. Furthermore, the results indicated that the combination of the 9 best aptamers inhibited the binding of the purified trimeric spike protein to the ACE-2 receptor found on the surface of Vero E6 cells. In the same experiment, the combined aptamers displayed a better neutralizing effect than antibodies. The data suggest that the selected aptamers could be used therapeutically to neutralize SARS-CoV-2 by inhibiting the interaction between the RBD and the ACE-2 receptor, preventing viral entry into target cells and thereby blocking viral replication.

Keywords: aptamer, ACE-2 receptor, binding inhibitor, COVID-19, spike protein, SARS-CoV-2, treatment

Procedia PDF Downloads 169
55 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are fragmented; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data, and their indispensability in meeting the scientific and engineering challenges of the twenty-first century. It shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also discusses solutions to optimization problems and the benefits of Big Data for computational biology, illustrating the current state of the art and future generations of HPC computing with Big Data.
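
The all-to-all comparison pattern mentioned above is embarrassingly parallel: the n(n-1)/2 pairwise computations are independent, so they can be fanned out across cores on one machine or nodes on a cluster. A minimal single-machine sketch follows; the random sequences and naive identity score are stand-ins for real genomic data and a real alignment kernel such as Smith-Waterman.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations
import random

def identity(pair):
    """Fraction of matching positions between two equal-length sequences;
    a naive stand-in for a real alignment kernel."""
    a, b = pair
    return sum(x == y for x, y in zip(a, b)) / len(a)

def random_seq(length, rng):
    return "".join(rng.choice("ACGT") for _ in range(length))

if __name__ == "__main__":
    rng = random.Random(0)
    seqs = [random_seq(1000, rng) for _ in range(64)]
    pairs = list(combinations(seqs, 2))   # all-to-all: n*(n-1)/2 comparisons
    # The comparisons are independent, so they fan out across CPU cores; an
    # HPC cluster applies the same decomposition across nodes at far larger n.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(identity, pairs, chunksize=64))
    print(f"{len(pairs)} comparisons, "
          f"mean identity {sum(scores) / len(scores):.3f}")
```

The quadratic growth in pairs is exactly why polynomial-time algorithms still saturate HPC resources once sequence databases reach millions of entries.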

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 344
54 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding

Authors: Wenya Shu, Ilinca Stanciulescu

Abstract:

Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from van der Waals forces. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to account for the CNT-matrix interactions, but it complicates mesh generation and leads to high computational costs. Homogenized models, which smear the fibers into the surrounding matrix and treat the material as homogeneous, have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained from mixing rules severely overestimates the stiffness of the composite, even compared with the result of a CZM with an artificially very strong interface. A mechanical model that can take into account interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential, and polynomial, are considered. These studies indicate that the chosen shapes of the CZM constitutive laws do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites.
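
The bilinear traction-separation law assumed above is the simplest cohesive description: traction rises linearly to a peak, then softens linearly to zero at complete debonding, and the fracture energy equals the area under the resulting triangle. A short sketch of that law follows; the parameter values are illustrative, not the paper's calibration.

```python
import numpy as np

def bilinear_traction(delta, delta0, delta_f, t_max):
    """Bilinear cohesive law: traction rises linearly to the peak t_max at
    separation delta0, then softens linearly to zero at delta_f (debonded)."""
    delta = np.asarray(delta, dtype=float)
    rising = t_max * delta / delta0
    softening = t_max * (delta_f - delta) / (delta_f - delta0)
    t = np.where(delta <= delta0, rising, softening)
    return np.clip(t, 0.0, None) * (delta < delta_f)

# Illustrative parameters (not the paper's calibration): 50 MPa peak traction,
# damage initiation at 1 nm separation, complete debonding at 10 nm.
d0, df, tmax = 1e-9, 10e-9, 50e6
d = np.linspace(0.0, 12e-9, 7)
print(bilinear_traction(d, d0, df, tmax) / 1e6, "MPa")
# Fracture energy = area under the triangle = 0.5 * t_max * delta_f
print("G_c =", 0.5 * tmax * df, "J/m^2")
```

Because only the peak traction and fracture energy control the debonding response to first order, the exponential and polynomial variants give similar results, consistent with the abstract's observation that the law's shape matters little.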

Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding

Procedia PDF Downloads 109
53 An Autonomous Space Debris-Removal System for Effective Space Missions

Authors: Shriya Chawla, Vinayak Malhotra

Abstract:

Space exploration has seen an exponential rise in the past two decades. The world has started probing alternatives for efficient and resourceful sustenance, along with the utilization of advanced technology, viz., satellites. Space propulsion forms the core of space exploration. Of all the issues encountered, space debris has increasingly threatened space exploration and propulsion. These activities have resulted in the presence of disastrous space debris fragments orbiting the Earth at speeds of several kilometres per second. Debris is well known as a potential source of damage to future missions, with immense losses of resources and even lives at stake, and huge amounts of money are invested in active research on the problem. Appreciable work has been done in the past on active space debris-removal technologies such as harpoons, nets, and drag sails, with the primary emphasis laid on confined removal. Recently, the RemoveDEBRIS spacecraft was used to demonstrate capture technologies; Airbus designed and planned the debris-catching net experiment carried aboard the spacecraft, which represents the largest payload deployed from the space station. However, the magnitude of the issue suggests that active space debris-removal technologies, such as harpoons and nets, still would not be enough, necessitating a better and more effective space debris removal system. Techniques based on diverting the path of the debris or of the spacecraft to avert damage have seen minimal usage owing to limited prediction capability. The present work focuses on an active hybrid space debris removal system, motivated by the need for safer and more efficient space missions. The specific objectives of the work are 1) to thoroughly analyse existing and conventional debris removal techniques, their working principles, effectiveness, and limitations under varying conditions, and 2) to understand the role of key controlling parameters in the coupled operation of debris capture and removal. The system utilizes the latest autonomous technology available, with an adaptable structural design for operation under varying conditions. The design covers the advantages of most of the existing technologies while removing their disadvantages, and is likely to enhance the probability of effective space debris removal. At present, a systematic theoretical study is being carried out to thoroughly observe the effects of pseudo-random debris occurrences and to arrive at an optimal design with better features and control.

Keywords: space exploration, debris removal, spacecraft, space accidents

Procedia PDF Downloads 140
52 Investigation of a New Approach 'AGM' to Solve Complicated Nonlinear Partial Differential Equations in All Engineering Fields and Basic Science

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji

Abstract:

In this work, our aims are accuracy, capability, and power in solving complicated nonlinear partial differential equations. Our purpose is to enhance the ability to solve nonlinear differential equations in basic science and engineering fields, and similar problems, with a simple and innovative approach. As is well known, most engineering systems behave nonlinearly in practice (especially in basic science and engineering fields), and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; for example, fluid and gas wave problems often cannot be solved numerically because no boundary conditions are available. Accordingly, we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and simple solutions; this is demonstrated by comparing the achieved solutions with a numerical method (fourth-order Runge-Kutta). Eventually, AGM may prove a significant advance for researchers, professors, and students worldwide, because with the AGM coding system and its software, complicated linear and nonlinear partial differential equations can be solved analytically with little difficulty. The advantages and abilities of this method (AGM) are as follows: (a) Nonlinear differential equations (ODEs, PDEs) are directly solvable by this method. (b) With this method, most of the time, equations can be solved under any number of boundary or initial conditions without any nondimensionalization procedure. (c) AGM is always convergent for the given boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic terms appearing in the nonlinear differential equation need no Taylor expansion under AGM, which yields high solution precision. (e) AGM is very flexible in its coding system and can easily solve a variety of nonlinear differential equations with acceptable accuracy. (f) One of the important advantages of this method is the analytical solution, with high accuracy, of problems such as partial differential equations of vibration in solids and waves in water and gas, with minimal initial and boundary conditions. (g) It is very important to present a general and simple approach for solving most highly nonlinear differential equation problems in the engineering sciences, especially civil engineering, and to compare the output with a numerical method (fourth-order Runge-Kutta) and with exact solutions.
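
The abstract benchmarks AGM against the classical fourth-order Runge-Kutta scheme. The sketch below implements that numerical baseline and checks it on a nonlinear ODE with a known exact solution; the test equation and step count are our illustrative choices, not drawn from the paper, and the AGM procedure itself is not reproduced here.

```python
import numpy as np

def rk4(f, y0, t):
    """Classical fourth-order Runge-Kutta integrator, the numerical
    benchmark the abstract compares AGM solutions against."""
    y = np.empty((len(t), np.size(y0)))
    y[0] = y0
    for n in range(len(t) - 1):
        h = t[n + 1] - t[n]
        k1 = f(t[n], y[n])
        k2 = f(t[n] + h / 2, y[n] + h * k1 / 2)
        k3 = f(t[n] + h / 2, y[n] + h * k2 / 2)
        k4 = f(t[n] + h, y[n] + h * k3)
        y[n + 1] = y[n] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

# Nonlinear test ODE with a known closed form: y' = -y^2, y(0) = 1,
# whose exact solution is y(t) = 1 / (1 + t).
t = np.linspace(0.0, 5.0, 51)
num = rk4(lambda t_, y_: -y_**2, np.array([1.0]), t)[:, 0]
print(f"max |error| vs exact solution: {np.max(np.abs(num - 1 / (1 + t))):.2e}")
```

Any candidate analytical method can be validated the same way: integrate the equation numerically on a fine grid and report the maximum deviation of the closed-form solution from the numerical reference.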

Keywords: new approach, AGM, sets of coupled nonlinear differential equations, exact solutions, numerical

Procedia PDF Downloads 435
51 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia

Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu

Abstract:

Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real-data application focused on determining predictors of HIV patient survival. Methods: Parametric survival models based on the exponential, Weibull, log-normal, log-logistic, Gompertz, and generalized gamma distributions were considered. A simulation study was carried out with two different algorithms, using informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under highly active antiretroviral therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to the Kaplan-Meier survival estimates for the two sex groups, females showed better survival times than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after three years of antiretroviral therapy follow-up the average CD4 cell count was 305.74, which is quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type, and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model are smaller than in the classical one. Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by incorporating expert opinion and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could thus be reduced through timely antiretroviral therapy with special attention to the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patients' survival.
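
On the classical side, candidate parametric survival models of this kind can be fitted and compared by information criteria in a few lines; the lifelines library is one option in Python. A minimal sketch on synthetic follow-up data follows (apart from the cohort size, none of the numbers come from the study, and the Bayesian arm would instead refit the chosen model with priors in a tool such as PyMC or Stan):

```python
import numpy as np
from lifelines import (KaplanMeierFitter, ExponentialFitter, WeibullFitter,
                       LogNormalFitter, LogLogisticFitter)

rng = np.random.default_rng(1)
n = 320  # same cohort size as the study; the data themselves are synthetic

# Hypothetical follow-up: log-normal event times in months, administratively
# censored at the end of an eleven-year observation window (132 months).
true_t = rng.lognormal(mean=np.log(79.0), sigma=0.9, size=n)
event_observed = true_t <= 132.0
durations = np.minimum(true_t, 132.0)

km = KaplanMeierFitter().fit(durations, event_observed)
print(f"Kaplan-Meier median survival: {km.median_survival_time_:.1f} months")

# Classical parametric candidates compared by AIC, mirroring the frequentist
# arm of the analysis; lower AIC indicates a better-fitting distribution.
for F in (ExponentialFitter, WeibullFitter, LogNormalFitter, LogLogisticFitter):
    m = F().fit(durations, event_observed)
    print(f"{F.__name__:>17}: AIC = {m.AIC_:.1f}")
```

With the data generated from a log-normal law, the log-normal fitter should win the AIC comparison, mirroring the study's preference for the log-normal model over its competitors.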

Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models

Procedia PDF Downloads 165