Search results for: gas distribution network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9247


5377 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study

Authors: Mohamed H. Khalil

Abstract:

Water is one of the most crucial elements influencing human lives and development. Notably, over the last few years, GIS has played a significant role in optimizing water management systems, especially after the exponential development of this sector. In this context, the Egyptian government initiated an advanced ‘GIS-Web Based System’. This system is designed to assist and optimize the integration of data between the Call Center, Operation and Maintenance, and laboratory departments. The core of this system is a unified ‘Data Model’ for all the spatial and tabular data of the corresponding departments. The system provides advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing, enhanced data retrieval, integrated workflow, different access levels, and correlative information recording/tracking. Notably, this cost-effective system contributes significantly not only to the completeness of the base-map (93%) and of the water network (87%) in a highly detailed GIS format and to improved customer-service performance, but also to reducing day-to-day operating costs (~5-10%). In addition, the proposed system facilitates data exchange between the different departments (Call Center, Operation and Maintenance, and laboratory), which allows a better understanding and analysis of complex situations. Furthermore, the system has had a tangible effect on: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.); (ii) improved effectiveness of the different water departments; (iii) efficient in-depth analysis; (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annual); (v) tangible planning synthesizing spatial and tabular data; and finally, (vi) a scalable decision support system. It is worth highlighting that the proposed future plan (second phase) of this system encompasses scalability that will extend to integration with the Billing and SCADA departments. This scalability will add advanced functionalities to the existing ones to allow further sustainable contributions.

Keywords: GIS Web-Based, base-map, water network, decision support system

Procedia PDF Downloads 88
5376 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar

Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo

Abstract:

The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is developed by physical and statistical models. Ground data is collected by solar radiation measurement stations. The ground data is of high quality; however, it is limited to distributed point locations, with high installation and maintenance costs for the ground stations. On the other hand, satellite solar radiation data is continuous and available throughout geographical locations, but it is relatively less accurate than ground data. To utilize the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either dataset alone. The popular satellite database, the National Solar Radiation Data Base, NSRDB (PSM V3 model, spatial resolution: 4 km), was chosen here for merging with ground-measured solar radiation in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI. The workflow of the algorithm is based on a combination of regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriging residuals obtained after fitting the semi-variogram model were added to the NSRDB values predicted by the regression model to obtain the final predicted values. The NRMSE values obtained after merging are respectively 1.84%, 1.28%, and 1.81% for October, November, and December 2019. One more explanatory variable, ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here to generate calibrated maps using the regression and kriging models, and further to use the calibrated model to generate solar radiation maps from the explanatory variable alone when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
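
For illustration, the regression-plus-residual-kriging idea described above can be sketched as follows; this is a minimal Python stand-in, not the authors' ArcGIS Empirical Bayesian Kriging Regression Prediction workflow, and the station coordinates, GHI values and the scikit-learn/pykrige calls are assumptions used only to show the structure of the merge.

```python
# Minimal regression-kriging sketch (not the authors' ArcGIS EBK workflow).
# Assumes hypothetical arrays of station coordinates, ground GHI and
# collocated NSRDB GHI; requires numpy, scikit-learn and pykrige.
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 13), rng.uniform(0, 100, 13)   # 13 stations (km)
ghi_sat = rng.uniform(4.5, 6.5, 13)                        # NSRDB monthly mean GHI
ghi_ground = ghi_sat + rng.normal(0, 0.15, 13)             # ground observations

# 1) OLS regression of ground GHI on satellite GHI
ols = LinearRegression().fit(ghi_sat.reshape(-1, 1), ghi_ground)
residuals = ghi_ground - ols.predict(ghi_sat.reshape(-1, 1))

# 2) Ordinary kriging of the regression residuals
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")

# 3) Merged estimate at new locations = regression prediction + kriged residual
xp, yp = np.array([25.0, 60.0]), np.array([40.0, 75.0])
sat_new = np.array([5.2, 5.8])                              # satellite GHI there
res_krig, _ = ok.execute("points", xp, yp)
ghi_merged = ols.predict(sat_new.reshape(-1, 1)) + res_krig
print(ghi_merged)   # NRMSE against held-out ground data would follow
```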

Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB

Procedia PDF Downloads 86
5375 A Corpus-Based Analysis of "MeToo" Discourse in South Korea: Coverage Representation in Korean Newspapers

Authors: Sun-Hee Lee, Amanda Kraley

Abstract:

The “MeToo” movement is a social movement against sexual abuse and harassment. Though the hashtag went viral in 2017 following different cultural flashpoints in different countries, the initial response was quiet in South Korea. This radically changed in January 2018, when a high-ranking senior prosecutor, Seo Ji-hyun, gave a televised interview discussing being sexually assaulted by a colleague. Acknowledging public anger, particularly among women, over the long-existing problems of sexual harassment and abuse, the South Korean media have focused on several high-profile cases. Analyzing the media representation of these cases is a window into the evolving South Korean discourse around “MeToo.” This study presents a linguistic analysis of “MeToo” discourse in South Korea by utilizing a corpus-based approach. The term corpus (pl. corpora) refers to electronic language data, that is, any collection of recorded instances of spoken or written language. A “MeToo” corpus has been collected for this language analysis by extracting newspaper articles containing the keyword “MeToo” from BIGKinds, a big data analysis service, and Nexis Uni, an online academic database search engine. The corpus analysis explores how Korean media represent accusers and the accused, victims and perpetrators. The extracted data include 5,885 articles from four broadsheet newspapers (Chosun, JoongAng, Hangyore, and Kyunghyang) and 88 articles from two Korea-based English newspapers (Korea Times and Korea Herald) between January 2017 and November 2020. The analysis includes basic keyword frequency and network analysis and adds refined examinations of select corpus samples through naming strategies, semantic relations, and pragmatic properties. Along with the exponential increase in the number of articles containing the keyword “MeToo” from 104 articles in 2017 to 3,546 articles in 2018, the network and keyword analysis highlights ‘US,’ ‘Harvey Weinstein,’ and ‘Hollywood’ as keywords for 2017, with articles in 2018 highlighting ‘Seo Ji-hyun,’ ‘politics,’ ‘President Moon,’ ‘An Ui-jeong,’ ‘Lee Yoon-taek’ (the names of perpetrators), and ‘(Korean) society.’ This outcome demonstrates the shift of media focus from international affairs to domestic cases. Another crucial finding is that the word ‘defamation’ is widely distributed in the “MeToo” corpus. This relates to the South Korean legal system, in which a person who defames another by publicly alleging information detrimental to their reputation, whether factual or fabricated, is punishable by law (Article 307 of the Criminal Act of Korea). If the defamation occurs on the internet, it is subject to aggravated punishment under the Act on Promotion of Information and Communications Network Utilization and Information Protection. These laws, in particular, have been used against accusers who have publicly come forward in the wake of “MeToo” in South Korea, adding an extra dimension of risk. This corpus analysis of “MeToo” newspaper articles contributes to the analysis of the media representation of the “MeToo” movement and sheds light on the shifting landscape of gender relations in the public sphere in South Korea.
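
As a minimal illustration of the keyword-frequency step of such a corpus analysis, the sketch below counts token frequencies in a plain-text export; the file name and tokenisation are hypothetical, and the study's BIGKinds/Nexis Uni extraction and Korean-language processing are not reproduced.

```python
# Minimal keyword-frequency sketch over a newspaper corpus export.
# The file path and crude tokenisation are hypothetical placeholders.
from collections import Counter
import re

with open("metoo_articles_2017_2020.txt", encoding="utf-8") as fh:
    text = fh.read().lower()

tokens = re.findall(r"[#\w']+", text)       # crude tokenisation
freq = Counter(tokens)

for word, count in freq.most_common(20):    # top keywords, e.g. 'metoo', 'defamation'
    print(f"{word}\t{count}")
```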

Keywords: corpus linguistics, MeToo, newspapers, South Korea

Procedia PDF Downloads 218
5374 Strategic Risk Issues for Film Distributors of Hindi Film Industry in Mumbai: A Grounded Theory Approach

Authors: Rashmi Dyondi, Shishir K. Jha

Abstract:

The purpose of the paper is to address the strategic risk issues surrounding Hindi film distribution in Mumbai for a film distributor, who acts as an entrepreneur when launching a product (movie) in the market (film territory). The paper undertakes a fundamental review of films and risk in the Hindi film industry and applies the Grounded Theory technique to understand the complex phenomenon of risk-taking behavior of film distributors (both independent and studios) in Mumbai. Rich in-depth interviews with distributors are coded to develop core categories through constant comparison, leading to conceptualization of the phenomena of interest. This paper is a first-of-its-kind attempt to understand the risk behavior of a distributor, which is akin to entrepreneurial risk behavior under conditions of uncertainty. Unlike the extensive scholarly work on the dynamics of the Hollywood motion picture industry, the Hindi film industry has been an under-researched area until now. In particular, how film distributors perceive risk remains unexplored for the Hindi film industry. Films are unique experience products, and the film distributor acts as an entrepreneur assuming high risks given the uncertainty in the motion picture business. With the entry of mighty corporate studios and astronomical film budgets posing serious business threats to independent distributors, there is a need for an in-depth qualitative enquiry (applying the grounded theory technique) to unravel the definition of risk for the independent distributors in Mumbai vis-à-vis the corporate studios. The need for good content was a challenge common to both groups in the present state of the industry; however, corporate studios, with their distinct ideologies, focus on their own productions and financial power, faced a different set of challenges from the independents (such as achieving sustainability in business). Softer issues like market goodwill and relations with producers, honesty in business dealings, and transparency came out as clear markers for the success of independents in the long run. The findings from the qualitative analysis stress the different elements of risk and challenges as perceived by the two groups of distributors in the Hindi film industry and provide a future research agenda for empirical investigation of the determinants of box-office success of Hindi films distributed in Mumbai.

Keywords: entrepreneurial risk behavior, film distribution strategy, Hindi film industry, risk

Procedia PDF Downloads 311
5373 Water Ingress into Underground Mine Voids in the Central Rand Goldfields Area, South Africa-Fluid Induced Seismicity

Authors: Artur Cichowicz

Abstract:

The last active mine in the Central Rand Goldfields area (50 km x 15 km) ceased operations in 2008. This resulted in the closure of the pumping stations, which previously maintained the underground water level in the mining voids. As a direct consequence of the water being allowed to flood the mine voids, seismic activity has increased directly beneath the populated area of Johannesburg. Monitoring of seismicity in the area has been on-going for over five years using a network of 17 strong ground motion sensors. The objective of the project is to improve strategies for mine closure. The evolution of the seismicity pattern was investigated in detail. Special attention was given to seismic source parameters such as magnitude, scalar seismic moment and static stress drop. Most events are located within historical mine boundaries. The seismicity pattern shows a strong relationship between the presence of the mining void and high levels of seismicity; no seismicity migration patterns were observed outside the areas of old mining. Seven years after the pumping stopped, the evolution of the seismicity indicates that the area is not yet in equilibrium. The level of seismicity in the area does not appear to be decreasing over time, since the number of strong events, with Mw magnitudes above 2, is still as high as it was when monitoring began over five years ago. The average rate of seismic deformation is 1.6x10^13 Nm/year. Constant seismic deformation was not observed over the last 5 years: the deviation from the average is in the order of 6x10^13 Nm/year, which is significant. The variation of cumulative seismic moment indicates that a constant deformation rate model is not suitable. Over the most recent five-year period, the total cumulative seismic moment released in the Central Rand Basin was 9.0x10^14 Nm. This is equivalent to one earthquake of magnitude 3.9, which is significantly less than what was experienced during the mining operation. Characterization of seismicity triggered by a rising water level in the area can be achieved through the estimation of source parameters. Static stress drop heavily influences ground motion amplitude, which plays an important role in risk assessments of potential seismic hazards in inhabited areas. The observed static stress drop in this study varied from 0.05 MPa to 10 MPa. It was found that large static stress drops could be associated with both small and large events. The temporal evolution of the inter-event time provides an understanding of the physical mechanisms of earthquake interaction. Changes in the characteristics of the inter-event time are produced when a stress change is applied to a group of faults in the region. Results from this study indicate that the fluid-induced source has a shorter inter-event time in comparison to a random distribution. This behaviour corresponds to a clustering of events, in which short recurrence times tend to be close to each other, forming clusters of events.
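
The abstract's conversion of cumulative seismic moment to an equivalent magnitude follows the standard moment-magnitude relation; the short sketch below reproduces the quoted figure.

```python
# Moment magnitude from scalar seismic moment, Mw = (2/3) * (log10(M0) - 9.1),
# the standard Hanks-Kanamori relation with M0 in N*m. Reproduces the
# abstract's equivalence of 9.0e14 N*m to roughly Mw 3.9.
import math

def moment_magnitude(m0_newton_metre: float) -> float:
    return (2.0 / 3.0) * (math.log10(m0_newton_metre) - 9.1)

print(round(moment_magnitude(9.0e14), 1))   # -> 3.9
```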

Keywords: inter-event time, fluid induced seismicity, mine closure, spectral parameters of seismic source

Procedia PDF Downloads 282
5372 poly(N-Isopropylacrylamide)-Polyvinyl Alcohol Semi-Interpenetrating Network Hydrogel for Wound Dressing

Authors: Zi-Yan Liao, Shan-Yu Zhang, Ya-Xian Lin, Ya-Lun Lee, Shih-Chuan Huang, Hong-Ru Lin

Abstract:

Traditional wound dressings, such as gauze and bandages, easily adhere to the tissue fluid exuded from the wound, causing secondary damage to the wound during removal. Taking this as the starting point, this study develops a hydrogel dressing that does not cause secondary damage to the wound when it is removed and, at the same time, creates an environment conducive to wound healing. The temperature-sensitive material N-isopropylacrylamide (NIPAAm) was used as the substrate. Because of its low mechanical properties, the hydrogel would break when pulled during human activities, so polyvinyl alcohol (PVA) was interpenetrated into it to enhance the mechanical properties, and a semi-interpenetrating network (semi-IPN) composed of poly(N-isopropylacrylamide) (PNIPAAm) and polyvinyl alcohol (PVA) was prepared by free radical polymerization. PNIPAAm was cross-linked with N,N'-methylenebisacrylamide (NMBA) in an ice bath in the presence of linear PVA, and tetramethylethylenediamine (TEMED) was added as a promoter to speed up gel formation. The polymerization stage was carried out at 16°C for 17 hours; after gel formation, the gel was washed with distilled water for three days, with the water changed several times in between, to complete the preparation of the semi-IPN hydrogel. Finally, various tests were used to analyze the effects of different ratios of PNIPAAm and PVA on the semi-IPN hydrogels. In the swelling test, it was found that the maximum swelling ratio can reach about 50% in a 21°C environment, and the higher the ratio of PVA, the more water can be absorbed. The saturated moisture content test results show that the more PVA is added, the higher the saturated water content. The water vapor transmission rate test results show that the value for the semi-IPN hydrogel is about 57 g/m²/24 hr and is not strongly related to the proportion of PVA. The LCST test found that, compared with the PNIPAAm hydrogel, the semi-IPN hydrogel possesses the same lower critical solution temperature (30-35°C). The semi-IPN hydrogel prepared in this study responds well to temperature and has thermosensitive characteristics. It is expected that, after improvement, it can be used in the treatment of surface wounds, overcoming the shortcomings of traditional dressings.

Keywords: hydrogel, N-isopropylacrylamide, polyvinyl alcohol, hydrogel wound dressing, semi-interpenetrating polymer network

Procedia PDF Downloads 75
5371 Detection Characteristics of the Random and Deterministic Signals in Antenna Arrays

Authors: Olesya Bolkhovskaya, Alexey Davydov, Alexander Maltsev

Abstract:

In this paper, approaches to incoherent signal detection in a multi-element antenna array are researched and modeled. Two types of useful signals with unknown wavefront were considered: the first is deterministic (a Barker code), the second is random (Gaussian distribution). The derivation of the sufficient statistics took into account the linearity of the antenna array. The performance characteristics and detection curves are modeled and compared for different useful signal parameters and for different numbers of antenna array elements. Under some additional conditions, the results of this research can be applied to digital communications systems.
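
As a rough illustration of incoherent detection across an array, the Monte Carlo sketch below compares a quadrature (noncoherent) matched filter for a Barker-13 code with a simple energy detector under an unknown per-element phase; the array size, SNR and thresholds are illustrative assumptions, not the paper's derived sufficient statistics or configuration.

```python
# Rough Monte Carlo sketch of incoherent detection in a small antenna array:
# a quadrature (noncoherent) matched filter for a Barker-13 code versus a
# simple energy detector, both with an unknown per-element phase (wavefront).
# Array size, SNR and thresholds are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)
n_elem, snr_db, trials = 8, -10.0, 20000
amp = 10 ** (snr_db / 20)

def rates(signal_present: bool):
    hits_mf = hits_ed = 0
    for _ in range(trials):
        phase = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_elem, 1)))   # unknown wavefront
        noise = (rng.normal(size=(n_elem, 13))
                 + 1j * rng.normal(size=(n_elem, 13))) / np.sqrt(2)
        x = noise + (amp * phase * barker13 if signal_present else 0.0)
        hits_mf += np.sum(np.abs(x @ barker13) ** 2) > 180.0   # noncoherent matched filter
        hits_ed += np.sum(np.abs(x) ** 2) > 120.0              # energy detector
    return hits_mf / trials, hits_ed / trials

print("false-alarm rates (MF, ED):", rates(False))
print("detection rates   (MF, ED):", rates(True))
```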

Keywords: antenna array, detection curves, performance characteristics, quadrature processing, signal detection

Procedia PDF Downloads 401
5370 Societal Resilience Assessment in the Context of Critical Infrastructure Protection

Authors: Hannah Rosenqvist, Fanny Guay

Abstract:

Critical infrastructure protection has been an important topic for several years. Programmes such as the European Programme for Critical Infrastructure Protection (EPCIP), the Critical Infrastructure Warning Information Network (CIWIN) and the European Reference Network for Critical Infrastructure Protection (ERNCIP) have been the pillars of the work done since 2006. However, measuring critical infrastructure resilience has not been an easy task. This has to do with the fact that the concept of resilience has several definitions and is applied in different domains such as engineering and social sciences. Since June 2015, the EU project IMPROVER has been focusing on developing a methodology for implementing a combination of societal, organizational and technological resilience concepts, in the hope of increasing critical infrastructure resilience. For this paper, we performed research on how to include societal resilience as a form of measurement in the context of critical infrastructure resilience. Because one of the main purposes of critical infrastructure (CI) is to deliver services to society, we believe that societal resilience is an important factor that should be considered when assessing overall CI resilience. We found that existing methods for CI resilience assessment focus mainly on technical aspects and that it was therefore necessary to develop a resilience model that takes social factors into account. The model developed within the project IMPROVER aims to include the community’s expectations of infrastructure operators as well as information sharing with the public and planning processes. By considering such aspects, the IMPROVER framework not only helps operators to increase the resilience of their infrastructures on the technical or organizational side, but aims to strengthen community resilience as a whole. This will further be achieved by taking interdependencies between critical infrastructures into consideration. The knowledge gained during this project will enrich current European policies and practices for improved disaster risk management. The framework for societal resilience analysis is based on three dimensions of societal resilience: coping capacity, adaptive capacity and transformative capacity, capacities that have been recognized throughout a widespread literature review in the field. A set of indicators has been defined that describes a community’s maturity within these resilience dimensions. Further, the indicators are categorized into six community assets that need to be accessible and utilized in such a way that they allow responding to changes and unforeseen circumstances. We conclude that the societal resilience model developed within the project IMPROVER can give critical infrastructure operators a good indication of the level of societal resilience.

Keywords: community resilience, critical infrastructure protection, critical infrastructure resilience, societal resilience

Procedia PDF Downloads 225
5369 The Structure and Development of a Wing Tip Vortex under the Effect of Synthetic Jet Actuation

Authors: Marouen Dghim, Mohsen Ferchichi

Abstract:

The effect of synthetic jet actuation on the roll-up and development of a wing tip vortex downstream of a square-tipped rectangular wing was investigated experimentally using hotwire anemometry. The wing is equipped with a hollow cavity designed to generate high-aspect-ratio synthetic jets blowing at an angle with respect to the spanwise direction. The structure of the wing tip vortex under the effect of fluidic actuation was examined at a chord Reynolds number Re_c=8×10^4. An extensive qualitative study of the effect of actuation on the spanwise pressure distribution at c/4 was carried out using pressure scanner measurements in order to determine the optimal actuation parameters, namely the blowing momentum coefficient, Cμ, and the non-dimensionalized actuation frequency, F^+. This qualitative study showed that the optimal actuation frequencies of the synthetic jet lie within the range amplified by both long- and short-wave instabilities, where the spanwise pressure coefficients exhibited a considerable decrease of up to 60%. The vortex appeared larger and more diffuse than the natural vortex. Operating the synthetic jet seemed to introduce unsteadiness and turbulence into the vortex core. Based on the ‘a priori’ selected optimal parameters, results of the hotwire wake survey indicated that the actuation achieved a reduction and broadening of the axial velocity deficit. A decrease in the peak tangential velocity, associated with an increase in the vortex core radius, was reported as a result of the accelerated radial transport of angular momentum. The peak vorticity level near the core was also found to be largely diffused as a direct result of the increased turbulent mixing within the vortex. The wing tip vortex exhibited a reduced strength and a diffused core as a direct result of increased turbulent mixing due to the presence of small-scale turbulent vortices within its core. It is believed that the increased turbulence within the vortex due to the synthetic jet control was the main mechanism behind the decreased strength and increased size of the wing tip vortex as it evolves downstream. A comparison with a ‘non-optimal’ case is included to demonstrate the effectiveness of selecting the appropriate control parameters. The synthetic jet will be operated at various actuation configurations, and an extensive parametric study is planned to determine the optimal actuation parameters.

Keywords: flow control, hotwire anemometry, synthetic jet, wing tip vortex

Procedia PDF Downloads 432
5368 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations at near-reference quality. The model EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are used to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size and provides information on particle surface distribution as well as dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet. Backup copies of the measurement data are stored directly in the device. The rinsing air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed 100% in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration. This highly reliable instrument is an indispensable tool for many users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.

Keywords: aerosol research, aerial observation, fence line monitoring, wild fire detection

Procedia PDF Downloads 147
5367 Characterisation of Meteorological Drought at Sub-Catchment Scale in Afghanistan Using Time-Series Climate Data

Authors: Yun Chen, David Penton, Fazlul Karim, Santosh Aryal, Shahriar Wahid, Peter Taylor, Susan M. Cuddy

Abstract:

Droughts have severely affected Afghanistan over the last four decades, leading to critical food shortages in which two-thirds of the country’s population are in a food crisis. Long years of conflict have lowered the country’s ability to deal with hazards such as drought, which can rapidly escalate into disasters. Understanding the spatial and temporal distribution of droughts is needed to respond effectively to disasters and plan for future occurrences. This study used the Standardized Precipitation Evapotranspiration Index (SPEI) at monthly, seasonal, and annual temporal scales to map the spatiotemporal change dynamics of drought characteristics (distribution, frequency, duration, and severity) in Afghanistan. SPEI indices were mapped for river basins, disaggregated into 189 sub-catchments, using monthly precipitation and potential evapotranspiration derived from temperature station observations from 1980 to 2017. The results show that these multi-dimensional drought characteristics vary across years, change among sub-catchments, and differ across temporal scales. During the 38 years, the driest decade and period are the 2000s and 1999–2002, respectively. The 2000–01 water year is the driest, with the whole country experiencing ‘severe’ to ‘extreme’ drought, more than 53% (87 sub-catchments) suffering the worst drought in history, and about 58% (94 sub-catchments) having ‘very frequent’ drought (7 to 8 months) or ‘extremely frequent’ drought (9 to 10 months). The estimated seasonal duration and severity present significant variations across the study area and throughout the study period. The nation also suffered from recurring droughts of varying length and intensity in 2004, 2006, 2008, and, most recently, 2011. There is a trend towards increasing drought with longer duration and higher severity extending over sub-catchments from the southeast to the north and central regions. These datasets and maps help to fill the knowledge gap on detailed sub-catchment scale meteorological drought characteristics in Afghanistan. The study findings improve our understanding of the influences of climate change on drought dynamics and can guide catchment planning for reliable adaptation to and mitigation against future droughts.
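
A simplified sketch of the SPEI idea follows: the climatic water balance P − PET is accumulated over a k-month window and then standardised month by month. Operational SPEI fits a log-logistic distribution; the empirical normal-score transform and the synthetic data below are stand-ins for illustration only.

```python
# Simplified SPEI-style illustration: aggregate the climatic water balance
# D = P - PET over a k-month window, then standardise each calendar month.
# Operational SPEI fits a log-logistic distribution; an empirical
# normal-score transform stands in for it here, and the data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
months = 12 * 38                                   # 1980-2017
precip = rng.gamma(2.0, 15.0, months)              # mm/month (synthetic)
pet = rng.normal(60.0, 10.0, months)               # mm/month (synthetic)

k = 3                                              # 3-month (seasonal) scale
d = precip - pet
d_k = np.convolve(d, np.ones(k), mode="valid")     # k-month accumulated balance

spei = np.full_like(d_k, np.nan)
month_index = (np.arange(len(d_k)) + k - 1) % 12   # calendar month of each window end
for m in range(12):
    sel = month_index == m
    ranks = stats.rankdata(d_k[sel])
    spei[sel] = stats.norm.ppf(ranks / (sel.sum() + 1))   # empirical standardisation

print("share of months at severe-or-worse drought (SPEI <= -1.5):",
      np.mean(spei <= -1.5).round(3))
```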

Keywords: SPEI, precipitation, evapotranspiration, climate extremes

Procedia PDF Downloads 89
5366 Nonlinear Heat Transfer in a Spiral Fin with a Period Base Temperature

Authors: Kuo-Teng Tsai, You-Min Huang

Abstract:

In this study, the problem of a spiral fin with a periodic base temperature is analyzed using the Adomian decomposition method. The Adomian decomposition method is a useful and practical method for solving the nonlinear energy equations associated with heat radiation. The periodic base temperature oscillates around a mean value. The results, including the temperature distribution and the heat flux from the spiral fin base, can be calculated directly. The effects of the dimensionless variables on the temperature variations and on the total energy transferred from the spiral fin base are also discussed.
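
For reference, the generic Adomian decomposition scheme for an equation of the form Lu + Ru + Nu = g is sketched below; the spiral-fin energy equation itself is not given in the abstract, so the operators are left symbolic.

```latex
% Generic Adomian decomposition scheme for Lu + Ru + Nu = g, where L is the
% highest-order (invertible) linear operator, R the remaining linear part and
% N the nonlinear (e.g. radiation) term; the fin-specific operators are not
% given in the abstract and are left symbolic here.
\begin{align}
  u &= \sum_{n=0}^{\infty} u_n, \qquad
  Nu = \sum_{n=0}^{\infty} A_n, \qquad
  A_n = \frac{1}{n!}\frac{d^n}{d\lambda^n}
        \Big[N\Big(\sum_{k=0}^{\infty}\lambda^k u_k\Big)\Big]_{\lambda=0},\\
  u_0 &= \Phi + L^{-1}g, \qquad
  u_{n+1} = -L^{-1}\big(R\,u_n + A_n\big), \quad n \ge 0 .
\end{align}
```

Here Φ collects the terms arising from the initial/boundary (base-temperature) conditions, and the partial sums of the u_n give the temperature distribution directly.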

Keywords: spiral fin, period, adomian decomposition method, nonlinear

Procedia PDF Downloads 523
5365 Soil Quality Response to Long-Term Intensive Resources Management and Soil Texture

Authors: Dalia Feiziene, Virginijus Feiza, Agne Putramentaite, Jonas Volungevicius, Kristina Amaleviciute, Sarunas Antanaitis

Abstract:

Investigations on soil conservation are among the most important topics in modern agronomy. Soil management practices have a great influence on soil physico-chemical quality and GHG emission. Research objective: to reveal the sensitivity and vitality of soils with different texture to long-term anthropogenisation on Cambisol in Central Lithuania and to compare them with non-anthropogenised soil resources. Methods: two long-term field experiments (loam on loam; sandy loam on loam) with different management intensity were estimated. Disturbed and undisturbed soil samples were collected from the 5-10, 15-20 and 30-35 cm depths. Soil available P and K contents were determined by ammonium lactate extraction, total N by the dry combustion method, SOC content by the Tyurin titrimetric (classical) method, and texture by the pipette method. In undisturbed core samples, soil pore volume distribution and plant available water (PAW) content were determined. A closed chamber method was applied to quantify soil respiration (SR). Results: long-term resources management changed soil quality. In soil with loam texture, within the 0-10, 10-20 and 30-35 cm soil layers, significantly higher PAW, SOC and mesoporosity (MsP) were found under no-tillage (NT) than under conventional tillage (CT). However, total porosity (TP) under NT was significantly higher only in the 0-10 cm layer. MsP acted as the dominant factor for N, P and K accumulation in the corresponding layers. P content in all soil layers was higher under NT than under CT. N and K contents were significantly higher than under CT only in the 0-10 cm layer. In soil with sandy loam texture, a significant increase in SOC, PAW, MsP, N, P and K under NT occurred only in the 0-10 cm layer. TP under NT was significantly lower in all layers. PAW acted as a strong dominant factor for N, P and K accumulation: the higher the PAW, the higher the NPK contents determined. NT did not secure chemical quality within deeper layers compared with CT. Long-term application of mineral fertilisers significantly increased SOC and soil NPK contents, primarily in the top-soil. Enlarged fertilization led to significantly higher leaching of nutrients to deeper soil layers (CT) and increased the hazard of top-soil pollution. Straw returning significantly increased SOC and NPK accumulation in the top-soil. The SR on sandy loam was significantly higher than on loam. Under dry weather conditions, on loam SR was higher under NT than under CT, while on sandy loam SR was higher under CT than under NT. NPK fertilizers promoted significantly higher SR in both dry and wet years, but suppressed SR on sandy loam during a usual year. Non-anthropogenised soil had a similar SOC and NPK distribution within the 0-35 cm layer, which depended on the genesis of the soil profile horizons.

Keywords: fertilizers, long-term experiments, soil texture, soil tillage, straw

Procedia PDF Downloads 297
5364 Evaluation of the Effect of Magnetic Field on Fibroblast Attachment in Contact with PHB/Iron Oxide Nanocomposite

Authors: Shokooh Moghadam, Mohammad Taghi Khorasani, Sajjad Seifi Mofarah, M. Daliri

Abstract:

Over the recent two decades, the use of magnetic materials with the aim of separating target cells and, eventually, treating cancer has increased remarkably. Numerous factors can alter the efficacy of this treatment method. In this project, the effect of a magnetic field on the adhesion of PDL and L929 cells to iron oxide/PHB nanocomposites with different densities of iron oxide (1%, 2.5%, 5%) has been studied. The nanocomposite consists of a polymeric film of poly(hydroxybutyrate) with γ-Fe2O3 particles of average size 25 nanometers dispersed in it; during this process, polyvinyl alcohol (98% hydrolyzed, molecular weight 78,000) was used as an emulsifier to achieve uniform distribution. In order to obtain a homogeneous film, the solution of PHB and iron oxide nanoparticles was placed in a freeze dryer and in liquid nitrogen, which resulted in a uniform porous scaffold, and a 100°C press was used to remove the porosities. After the synthesis of the desired nanocomposite film, several tests were performed. First, the particle size and distribution in the film were evaluated by transmission electron microscopy (TEM), and FTIR analysis and DMTA tests were run in order to verify the chemical bonds and the mechanical properties of the nanocomposites, respectively. By comparing the graphs of the test and control samples, it was established that adding nanoparticles increased the crystallization temperature, and a higher density of γ-Fe2O3 led to a higher Tg (glass transition temperature). Furthermore, the dispersion range and damping properties of the samples increased. Moreover, the toxicity, morphological changes and adhesion of fibroblasts and cancer cells were evaluated by a variety of tests. All samples were cultured at different densities in contact with cells for 24 and 48 hours within a magnetic field of 2×10^-3 Tesla. After 48 hours, the samples were imaged with optical microscopy and SEM, and no sign of toxicity was traced. The number of cancer cells in the test group was noticeably higher than in the control group. However, there are still many gaps and unclear aspects in the use of magnetic fields and their effects in the treatment of cancer and other diseases; nevertheless, prominent steps have been taken in this direction in recent years, and we hope this project can be at least a small contribution to this issue.

Keywords: nanocomposite, cell attachment, magnetic field, cytotoxicity

Procedia PDF Downloads 253
5363 Survey Paper on Graph Coloring Problem and Its Application

Authors: Prateek Chharia, Biswa Bhusan Ghosh

Abstract:

Graph coloring is one of the prominent concepts in graph theory. It can be defined as an assignment of colors to the vertices or regions of a graph such that all the constraints are fulfilled. In this paper, various graph coloring approaches, such as greedy coloring, heuristic search for a maximum independent set, and graph coloring using an edge table, are described. Graph coloring can be used in various real-time applications such as student timetable generation, Sudoku as a graph coloring problem, and GSM phone networks.
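
A minimal sketch of the greedy approach mentioned above: each vertex receives the smallest colour not already used by its coloured neighbours. The example graph is illustrative.

```python
# Minimal greedy graph colouring sketch: each vertex gets the smallest colour
# not used by its already-coloured neighbours. The graph below is illustrative.
def greedy_coloring(adj):
    """adj: dict mapping vertex -> iterable of neighbouring vertices."""
    colour = {}
    for v in adj:                                   # order can be any heuristic
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# Example: a 4-cycle with a chord, colourable with 3 colours
graph = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
print(greedy_coloring(graph))   # e.g. {0: 0, 1: 1, 2: 0, 3: 2}
```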

Keywords: graph coloring, greedy coloring, heuristic search, edge table, sudoku as a graph coloring problem

Procedia PDF Downloads 536
5362 The Impact of an Improved Strategic Partnership Programme on Organisational Performance and Growth of Firms in the Internet Protocol Television and Hybrid Fibre-Coaxial Broadband Industry

Authors: Collen T. Masilo, Brane Semolic, Pieter Steyn

Abstract:

The Internet Protocol Television (IPTV) and Hybrid Fibre-Coaxial (HFC) Broadband industrial sector landscape is rapidly changing, and organisations within the industry need to stay competitive by exploring new business models so that they can offer new services and products to customers. The business challenge in this industrial sector is meeting or exceeding high customer expectations across multiple content delivery modes. The increasing challenges in the IPTV and HFC broadband industrial sector encourage service providers to form strategic partnerships with key suppliers, marketing partners, advertisers, and technology partners. The need to form enterprise collaborative networks poses a challenge for any organisation in this sector: selecting the right strategic partners who will ensure that the organisation’s services and products are marketed in new markets, who will ensure that customers are efficiently supported by meeting and exceeding their expectations, and who will represent the organisation in a positive manner and contribute to improving its performance. Companies in the IPTV and HFC broadband industrial sector tend to form informal partnerships with suppliers, vendors, system integrators and technology partners. Generally, partnerships are formed without thorough analysis of the real reason a company is forming collaborations, without proper evaluation of prospective partners using specific selection criteria, and with ineffective performance monitoring of partners to ensure that a firm gains real long-term benefits from its partners and gains competitive advantage. Similar tendencies are illustrated in the research case study and are based on Skyline Communications, a global leader in end-to-end, multi-vendor network management and operational support systems (OSS) solutions. The organisation’s flagship product is the DataMiner network management platform, used by many operators across multiple industries, which can be described as a smart system that intelligently manages complex technology ecosystems for its customers in the IPTV and HFC broadband industry. The approach of the research is to develop the most efficient business model that can be deployed to improve a strategic partnership programme in order to significantly improve the performance and growth of organisations participating in a collaborative network in the IPTV and HFC broadband industrial sector. This involves proposing and implementing a new strategic partnership model and its main features within the industry, which should bring about significant benefits for all involved companies to achieve value add and an optimal growth strategy. The proposed business model has been developed based on research into existing relationships, value chains and business requirements in this industrial sector and validated at 'Skyline Communications'. The outputs of the business model have been demonstrated and evaluated in the research business case study of the IPTV and HFC broadband service provider 'Skyline Communications'.

Keywords: growth, partnership, selection criteria, value chain

Procedia PDF Downloads 129
5361 Optimization of Black-Litterman Model for Portfolio Assets Allocation

Authors: A. Hidalgo, A. Desportes, E. Bonin, A. Kadaoui, T. Bouaricha

Abstract:

The present paper is concerned with portfolio management using the Black-Litterman (B-L) model. The considered stocks are exclusively limited to large-company stocks on the US market. Results obtained by applying the model are presented. From an analysis of collected Dow Jones stock data, a remarkably explicit analytical expression for the optimal B-L parameter τ, which scales the dispersion of the normal distribution of asset mean returns, is proposed in terms of the standard deviation of the covariance matrix. An implementation has been developed in the Matlab environment to separate optimization in the Markowitz sense from the specific elements related to the B-L representation.
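
For context, the sketch below shows the standard Black-Litterman posterior combination and where the parameter τ enters; the numbers are illustrative and the paper's analytical expression for the optimal τ is not reproduced.

```python
# Standard Black-Litterman posterior mean sketch, showing where the parameter
# tau enters; the paper's analytical expression for the optimal tau is not
# reproduced here. All numbers are illustrative.
import numpy as np

sigma = np.array([[0.040, 0.006], [0.006, 0.020]])   # asset covariance
w_mkt = np.array([0.6, 0.4])                         # market-cap weights
delta, tau = 2.5, 0.05                               # risk aversion, B-L scaling

pi = delta * sigma @ w_mkt                           # implied equilibrium returns

P = np.array([[1.0, -1.0]])                          # one relative view: asset 1 beats asset 2
Q = np.array([0.02])                                 # ...by 2%
omega = P @ (tau * sigma) @ P.T                      # view uncertainty (a common choice)

inv = np.linalg.inv
post_mean = inv(inv(tau * sigma) + P.T @ inv(omega) @ P) @ \
            (inv(tau * sigma) @ pi + P.T @ inv(omega) @ Q)
print(post_mean)                                     # blended expected returns
```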

Keywords: Black-Litterman, Markowitz, market data, portfolio manager opinion

Procedia PDF Downloads 256
5360 Structural Breaks, Asymmetric Effects and Long Memory in the Volatility of Turkey Stock Market

Authors: Serpil Türkyılmaz, Mesut Balıbey

Abstract:

In this study, long memory properties in the volatility of the Turkey Stock Market are examined through FIGARCH, FIEGARCH and FIAPARCH models under different distribution assumptions, namely normal and skewed Student's t distributions. Furthermore, structural changes in the volatility of the Turkey Stock Market are investigated. The results display the long memory property and the presence of asymmetric effects of shocks in the volatility of the Turkey Stock Market.
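
A sketch of one such fit is shown below, assuming the Python arch package, which implements a FIGARCH volatility process and skewed Student's t errors (FIEGARCH and FIAPARCH, also used in the study, are not available there); the returns file and the parameter name 'd' are assumptions.

```python
# Sketch of a FIGARCH(1,d,1) fit with skewed Student-t errors, assuming the
# Python `arch` package; FIEGARCH and FIAPARCH are not implemented there.
# The returns series and the parameter label "d" are assumptions.
import pandas as pd
from arch import arch_model

returns = pd.read_csv("stock_index_returns.csv", index_col=0,
                      parse_dates=True)["ret"] * 100   # percent returns

model = arch_model(returns, mean="Constant", vol="FIGARCH", p=1, q=1, dist="skewt")
res = model.fit(disp="off")

print(res.summary())
print("fractional integration d:", res.params["d"])    # 0 < d < 1 suggests long memory
```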

Keywords: FIAPARCH model, FIEGARCH model, FIGARCH model, structural break

Procedia PDF Downloads 290
5359 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²)> 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. 
Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
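
As an illustration of one of the compared regressors, the sketch below fits support vector machine regression to NIR spectra with scikit-learn; the spectra file and column layout are hypothetical, and the ten-repeat protocol and the other five models are omitted.

```python
# Minimal sketch of one compared regressor (support vector machine regression)
# on NIR spectra using scikit-learn; the spectra file and column layout are
# hypothetical, and the paper's full ten-repeat protocol is omitted.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score

data = np.loadtxt("nir_glucose.csv", delimiter=",")   # rows: samples; last column: mg/dl
X, y = data[:, :-1], data[:, -1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R:", np.corrcoef(y_te, pred)[0, 1], " R^2:", r2_score(y_te, pred))
```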

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 90
5358 Preparation of Nano-Sized Samarium-Doped Yttrium Aluminum Garnet

Authors: M. Tabatabaee, N. Binavayan, M. R. Nateghi

Abstract:

In this research, nano-sized yttrium aluminum garnet (YAG) containing lanthanide metals was synthesized by the sol-gel method in the presence of citric acid as a complexing agent. Samarium(III) was used for the synthesis of YAG:M3+. The prepared powders were characterized by powder X-ray diffraction (PXRD). The size distribution and morphology of the samples were analyzed by scanning electron microscopy (SEM). The XRD results show that Sm-, La-, and Ce-doped YAG crystallizes in the cubic system, and additional peaks compared to pure YAG can be assigned to the presence of Sm in the synthesized YAG. The SEM images show spherical nano-sized particles with an average diameter of 50 nm.

Keywords: citric acid, nano particle, samarium, yttrium aluminum garnet

Procedia PDF Downloads 302
5357 A Modified Shannon Entropy Measure for Improved Image Segmentation

Authors: Mohammad A. U. Khan, Omar A. Kittaneh, M. Akbar, Tariq M. Khan, Husam A. Bayoud

Abstract:

The Shannon entropy measure has been widely used for measuring uncertainty. In practical settings, however, a histogram is used to estimate the underlying distribution, and the histogram is dependent on the number of bins used. In this paper, a modification is proposed that makes the histogram-based Shannon entropy consistent. To demonstrate the benefits, two applications are picked from medical image processing. Simulations are carried out to show the superiority of this modified measure for the image segmentation problem. The improvement may be attributed to the robustness shown against uneven backgrounds in images.
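
For reference, the baseline histogram-based Shannon entropy, whose bin-count dependence motivates the proposed modification, can be computed as below; the modification itself is not reproduced, and the demonstration data are synthetic.

```python
# Baseline histogram-based Shannon entropy of an image, the quantity whose
# bin-count dependence the paper's modification addresses; the modification
# itself is not reproduced here.
import numpy as np

def histogram_entropy(image: np.ndarray, bins: int = 256) -> float:
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins
    return float(-np.sum(p * np.log2(p)))

img = np.random.default_rng(3).integers(0, 256, (128, 128))
for b in (16, 64, 256):                # entropy varies with the bin count
    print(b, round(histogram_entropy(img, b), 3))
```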

Keywords: Shannon entropy, medical image processing, image segmentation, modification

Procedia PDF Downloads 492
5356 Political Communication in Twitter Interactions between Government, News Media and Citizens in Mexico

Authors: Jorge Cortés, Alejandra Martínez, Carlos Pérez, Anaid Simón

Abstract:

The presence of government, news media, and the general citizenry on social media allows interactions between them to be considered a form of political communication (i.e., the public exchange of contradictory discourses about politics). Twitter’s asymmetrical following model (users can follow, mention or reply to other users that do not follow them) could foster alternative democratic practices and have an impact on Mexican political culture, which has been marked by a lack of direct communication channels between these actors. The research aim is to assess Twitter’s role in political communication practices through the analysis of interaction dynamics between government, news media, and citizens by extracting and visualizing data from Twitter’s API to observe general behavior patterns. The hypothesis is that, regardless of the fact that Twitter’s features enable direct and horizontal interactions between actors, users repeat traditional dynamics of interaction without taking full advantage of the possibilities of this medium. Through an interdisciplinary team including Communication Strategies, Information Design, and Interaction Systems, the activity on Twitter generated by the controversy over the presence of Uber in Mexico City was analysed; an issue of public interest involving aspects such as public opinion, economic interests and a legal dimension. This research includes techniques from social network analysis (SNA), a methodological approach focused on the comprehension of the relationships between actors through the visual representation and measurement of network characteristics. The analysis of the Uber event comprised data extraction, data categorization, corpus construction, corpus visualization and analysis. In the extraction stage, TAGS, a Google Sheets template, was used to extract tweets that included the hashtags #UberSeQueda and #UberSeVa, posts containing the string Uber, and tweets directed to @uber_mx. Using scripts written in Python, the data was filtered, discarding tweets with no interaction (replies, retweets or mentions) and locations outside of México. Considerations regarding bots and the omission of anecdotal posts were also taken into account. The utility of graphs for observing interactions of political communication in general was confirmed by the analysis of visualizations generated with programs such as Gephi and NodeXL. However, some aspects require improvements to obtain more useful visual representations for this type of research. For example, link crossings complicate following the direction of an interaction, forcing users to manipulate the graph to see it clearly. It was concluded that some practices prevalent in political communication in Mexico are replicated on Twitter. Media actors tend to group together instead of interacting with others. The political system tends to tweet as an advertising strategy rather than to generate dialogue. However, some actors were identified as bridges establishing communication between the three spheres, generating a more democratic exercise and taking advantage of Twitter’s possibilities. Although interactions on Twitter could become an alternative channel of political communication, this potential depends on the intentions of the participants and on the extent to which they aim for collaborative and direct communication. Further research is needed to gain a deeper understanding of the political behavior of Twitter users and the possibilities of SNA for its analysis.

Keywords: interaction, political communication, social network analysis, Twitter

Procedia PDF Downloads 219
5355 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to color images from satellites. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images, which provide the water-leaving radiances centered at 443 nm, 490 nm and 660 nm, as well as observed weather data (i.e., humidity, temperature and atmospheric pressure), as the database used to exploit the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training the deep learning model, a backpropagation learning strategy was developed. The established method was tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing algorithms and optical algorithms. The model performed better at estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
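
A compact stand-in for the described pipeline is sketched below in PyTorch: a small CNN extracts features from three-band GOCI patches, these are concatenated with three weather variables, and an ANN head regresses the algae concentration. The layer sizes, patch size and training details are assumptions, not the authors' architecture.

```python
# Compact PyTorch stand-in for the described pipeline: a small CNN extracts
# features from 3-band GOCI patches (443/490/660 nm radiances), these are
# concatenated with 3 weather variables, and an ANN head regresses algae
# concentration. Layer and patch sizes are illustrative, not the authors'.
import torch
import torch.nn as nn

class AlgaeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                       # feature extraction
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten())
        self.ann = nn.Sequential(                       # regression head
            nn.Linear(32 + 3, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, patch, weather):
        return self.ann(torch.cat([self.cnn(patch), weather], dim=1)).squeeze(1)

model = AlgaeRegressor()
patch = torch.randn(8, 3, 32, 32)        # batch of radiance patches
weather = torch.randn(8, 3)              # humidity, temperature, pressure
loss = nn.functional.mse_loss(model(patch, weather), torch.rand(8))
loss.backward()                          # backpropagation step (optimiser omitted)
```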

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 182
5354 Orthogonal Basis Extreme Learning Algorithm and Function Approximation

Authors: Ying Li, Yan Li

Abstract:

A new algorithm for single hidden layer feedforward neural networks (SLFN), the Orthogonal Basis Extreme Learning (OBEL) algorithm, is proposed, and the derivation of the algorithm is given in the paper. The algorithm can decide both the NN parameters and the number of hidden-layer neurons during training while providing extremely fast learning speed. It provides a practical way to develop NNs. The simulation results for function approximation show that the algorithm is effective and feasible, with good accuracy and adaptability.
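
For context, a standard extreme learning machine for function approximation is sketched below (random hidden-layer weights, output weights from a single least-squares solve); the paper's orthogonal-basis construction and automatic choice of the number of hidden neurons are not detailed in the abstract and are not reproduced.

```python
# Standard extreme learning machine sketch for function approximation: random
# hidden-layer weights, output weights solved in one least-squares step. The
# paper's orthogonal-basis construction and automatic neuron-number selection
# are not reproduced here.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel() + 0.05 * rng.normal(size=200)

hidden = 30
W = rng.normal(size=(1, hidden))                  # random input weights
b = rng.normal(size=hidden)                       # random biases
H = np.tanh(x @ W + b)                            # hidden-layer outputs

beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights, single LS solve
y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)).round(4))
```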

Keywords: neural network, orthogonal basis extreme learning, function approximation

Procedia PDF Downloads 530
5353 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure high crop yield, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases when they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease in a plant from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addressed early detection of the leaf scald, red rot, and eyespot types of diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from real-time images collected in sugarcane fields in India. The image dataset is pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
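
A transfer-learning sketch of such a VGG-based classifier is shown below using torchvision; the authors' exact VGG variant, preprocessing and training settings are not given in the abstract, so the weights, head and data here are illustrative assumptions.

```python
# Transfer-learning sketch of a VGG-based classifier: torchvision's VGG16 with
# its final layer replaced for the diseased/healthy task. The authors' exact
# VGG variant, preprocessing and training settings are not given in the
# abstract, so everything below is illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in model.features.parameters():      # freeze convolutional feature extractor
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)   # diseased vs. healthy sugarcane

images = torch.randn(4, 3, 224, 224)       # stand-in for preprocessed leaf images
labels = torch.tensor([0, 1, 1, 0])
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()                            # one training step (optimiser omitted)
```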

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 112
5352 Effects of Level Densities and Those of a-Parameter in the Framework of Preequilibrium Model for 63,65Cu(n,xp) Reactions in Neutrons at 9 to 15 MeV

Authors: L. Yettou

Abstract:

In this study, proton emission spectra produced by the 63Cu(n,xp) and 65Cu(n,xp) reactions are calculated in the framework of preequilibrium models using the EMPIRE and TALYS codes. Exciton Model predictions combined with the Kalbach angular distribution systematics and the Hybrid Monte Carlo Simulation (HMS) were used. The effects of level densities and of the a-parameter on the calculations have been investigated. The comparison with experimental data shows clear improvement over the Exciton Model and HMS calculations.
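To illustrate how the a-parameter enters such calculations, the short Python sketch below evaluates the textbook Fermi-gas level density, rho(U) = (sqrt(pi)/12) * exp(2*sqrt(a*U)) / (a^(1/4) * U^(5/4)), for a few values of a. This is shown only to make the sensitivity to a concrete; it is not the specific level-density parameterization used in EMPIRE or TALYS for these reactions.

```python
# Fermi-gas level density (textbook form) as a function of excitation energy U
# for several level-density parameters a, to show how strongly rho depends on a.
import numpy as np

def fermi_gas_level_density(U, a):
    """rho(U) = sqrt(pi)/12 * exp(2*sqrt(a*U)) / (a**0.25 * U**1.25); U in MeV, a in 1/MeV."""
    return np.sqrt(np.pi) / 12.0 * np.exp(2.0 * np.sqrt(a * U)) / (a**0.25 * U**1.25)

U = np.array([5.0, 10.0, 15.0])          # excitation energies in MeV
for a in (7.0, 8.0, 9.0):                # illustrative a-parameters (1/MeV), roughly A/8 for Cu
    rho = fermi_gas_level_density(U, a)
    print(f"a = {a:4.1f} 1/MeV -> rho(U) =", np.round(rho, 1), "levels/MeV")
```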

Keywords: preequilibrium models, level density, level density a-parameter, EMPIRE code, TALYS code

Procedia PDF Downloads 130
5351 Knowledge Development: How New Information System Technologies Affect Knowledge Development

Authors: Yener Ekiz

Abstract:

Knowledge development is a proactive process that covers the collection, analysis, storage, and distribution of information and contributes to the understanding of the environment. To transfer knowledge correctly and quickly, new, emerging information system technologies have to be used. Actionable knowledge is only of value if it is understandable and usable by target users. The purpose of the paper is to show how technology eases and affects the process of knowledge development. In preparing the paper, literature review, survey, and interview methodologies will be used. The hypothesis is that technology and knowledge development are inseparable and that technology will formalize the DIKW hierarchy again. As a result, there is today a huge amount of data, and this data must be classified accurately and quickly.

Keywords: DIKW hierarchy, knowledge development, technology

Procedia PDF Downloads 434
5350 Determination of the Walkability Comfort for Urban Green Space Using Geographical Information System

Authors: Muge Unal, Cengiz Uslu, Mehmet Faruk Altunkasa

Abstract:

Walkability relates to the ability of places to connect people with varied destinations within a reasonable amount of time and effort, and to offer visual interest in journeys throughout the network. The quality of the physical environment and the arrangement of walkways and sidewalks therefore appear to be crucial in influencing pedestrian route choice. Proximity, connectivity, and accessibility are also significant factors for walkability in terms of equal opportunity for using public spaces. As a result, there are two important points for walkability: firstly, the place should have a well-planned, accessible street network, and secondly, it should satisfy pedestrians' need for comfort. In this respect, this study aims to examine both the physical and bioclimatic comfort levels of the current pedestrian routes to urban green spaces, with reference to street design criteria. The following aspects have been identified as the main indicators of walkable streets: continuity, materials, slope, bioclimatic condition, walkway width, greenery, and surface. Additionally, the aim was to identify the factors that need to be considered in future guidelines and policies for planning and design of urban spaces, especially streets. The city of Adana, a province of Turkey located in south-central Anatolia, was chosen as the study area. The study workflow can be summarized in four stages: (1) environmental and physical data were collected from the literature and used in a weighted criteria method to determine the importance level of each criterion; (2) environmental characteristics of pedestrian routes obtained from survey studies were evaluated to rank these criteria; (3) each pedestrian route was then given a score reflecting how comfortably it provides access to the park; and (4) finally, the comfortable routes to the park were mapped using GIS. It is hoped that this study will provide insight for future planning and design to create a friendlier and more comfortable street environment for users.
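A weighted criteria score of the kind described in stages (1)-(3) can be computed as a weighted sum of normalized indicator values per route. The Python sketch below shows this in minimal form; the weights and per-route scores are hypothetical and do not come from the study.

```python
# Minimal weighted-criteria sketch: each route gets a comfort score as the
# weighted sum of normalized criterion values. Weights and scores are made up.
criteria_weights = {          # hypothetical importance weights (sum to 1.0)
    "continuity": 0.20,
    "materials": 0.10,
    "slope": 0.15,
    "bioclimatic": 0.20,
    "walkway_width": 0.15,
    "greenery": 0.10,
    "surface": 0.10,
}

routes = {                    # hypothetical criterion scores on a 0-1 scale
    "route_A": {"continuity": 0.9, "materials": 0.7, "slope": 0.8,
                "bioclimatic": 0.6, "walkway_width": 0.8, "greenery": 0.5, "surface": 0.7},
    "route_B": {"continuity": 0.6, "materials": 0.8, "slope": 0.5,
                "bioclimatic": 0.7, "walkway_width": 0.6, "greenery": 0.9, "surface": 0.8},
}

def comfort_score(scores, weights):
    """Weighted sum of normalized criterion scores for one pedestrian route."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in routes.items():
    print(name, round(comfort_score(scores, criteria_weights), 3))
```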

Keywords: comfort level, geographical information system (GIS), walkability, weighted criteria method

Procedia PDF Downloads 307
5349 Antioxidant, Hypoglycemic and Hypotensive Effects Affected by Various Molecular Weights of Cold Water Extract from Pleurotus Citrinopileatus

Authors: Pao-Huei Chen, Shu-Mei Lin, Yih-Ming Weng, Zer-Ran Yu, Be-Jen Wang

Abstract:

Pancreatic α-amylase and intestinal α-glucosidase are the critical enzymes for the breakdown of complex carbohydrates into di- or mono-saccharides, and they play an important role in modulating postprandial blood sugar. Angiotensin converting enzyme (ACE) converts inactive angiotensin-I into active angiotensin-II, which subsequently increases blood pressure by triggering vasoconstriction and aldosterone secretion. Thus, inhibition of carbohydrate-digesting enzymes and ACE helps in the management of blood glucose and blood pressure, respectively. Studies have shown that Pleurotus citrinopileatus (PC), an edible mushroom commonly cultivated in Asian countries, exerts anticancer, immune-enhancing, antioxidative, hypoglycemic, and hypolipidemic effects. Previous studies have also shown that the different molecular weight (MW) fractions of an extract may differ in biological activity owing to varying contents of bioactive components. Thus, the objective of this study was to investigate the in vitro antioxidant, hypoglycemic, and hypotensive effects, and the distribution of active compounds, of various MW fractions of the cold water extract from P. citrinopileatus (CWEPC). CWEPC was fractionated into four MW fractions, PC-I (<1 kDa), PC-II (1-3.5 kDa), PC-III (3.5-10 kDa), and PC-IV (>10 kDa), using an ultrafiltration system. The physiological activities, including antioxidant activities and the inhibition of pancreatic α-amylase, intestinal α-glucosidase, and hypertension-linked ACE, and the active components, including polysaccharide, protein, and phenolic contents, of CWEPC and the four fractions were determined. The results showed that fractions with lower MW exerted higher antioxidant activity (p<0.05), which was positively correlated with the levels of total phenols. In contrast, the inhibitory effects of the PC-IV fraction on the activities of α-amylase, α-glucosidase, and ACE were significantly higher than those of CWEPC and the other three low MW fractions (<10 kDa), and were more closely related to protein content. The inhibition capability of CWEPC and PC-IV on α-amylase activity was 1/13.4 to 1/2.7 of that of acarbose (positive control), respectively. However, the inhibitory ability of PC-IV on α-glucosidase (IC50 = 0.5 mg/mL) was significantly higher than that of acarbose (IC50 = 1.7 mg/mL). Kinetic data revealed that the PC-IV fraction followed non-competitive inhibition of α-glucosidase activity. In conclusion, the distribution of the various bioactive components contributes to the functions of the different MW fractions in oxidative stress prevention and in blood pressure and glucose modulation.
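For reference, non-competitive inhibition of the kind reported for PC-IV on α-glucosidase is usually described by the standard rate equation below, in which the inhibitor lowers the apparent V_max while leaving K_m unchanged. This is the textbook form, shown for clarity; the abstract does not report fitted kinetic parameters.

```latex
% Standard non-competitive inhibition rate law (textbook form, not fitted values)
% v: reaction rate, V_max: maximal rate, [S]: substrate concentration,
% [I]: inhibitor concentration, K_m: Michaelis constant, K_i: inhibition constant.
\[
  v \;=\; \frac{V_{\max}\,[S]}{\left(K_m + [S]\right)\left(1 + \dfrac{[I]}{K_i}\right)}
\]
```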

Keywords: α-Amylase, angiotensin converting enzyme, α-Glucosidase, Pleurotus citrinopileatus

Procedia PDF Downloads 457
5348 Static Modeling of the Delamination of a Composite Material Laminate in Mode II

Authors: Y. Madani, H. Achache, B. Boutabout

Abstract:

The purpose of this paper is to analyze numerically, by the three-dimensional finite element method using the ABAQUS code, the mechanical behavior of unidirectional and multidirectional delaminated laminated composites under mechanical loading in Mode II. The study consists of determining the energy release rate G in Mode II as well as the distribution of equivalent von Mises stresses along the damaged zone while varying several parameters, such as the applied load and the delamination length. The results allowed us to deduce that a high energy release rate favors delamination at the free edges of a laminated plate subjected to bending.
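In finite element models of this kind, the Mode II energy release rate along a delamination front is commonly extracted with the Virtual Crack Closure Technique (VCCT). The expression below is the standard two-dimensional VCCT form, included for clarity; the paper does not state that VCCT was the extraction method actually used in ABAQUS.

```latex
% Standard VCCT estimate of the Mode II energy release rate (illustrative,
% not necessarily the extraction method used in the paper).
% F_x       : shear force at the crack-tip node
% \Delta u_x: relative sliding displacement of the node pair just behind the tip
% \Delta a  : length of the crack-tip element, b : specimen width
\[
  G_{II} \;=\; \frac{F_x \,\Delta u_x}{2\,\Delta a\, b}
\]
```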

Keywords: delamination, energy release rate, finite element method, stratified composite

Procedia PDF Downloads 172