Search results for: open endogenous growth models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14898

288 TNF Modulation of Cancer Stem Cells in Renal Clear Cell Carcinoma

Authors: Rafia S. Al-lamki, Jun Wang, Simon Pacey, Jordan Pober, John R. Bradley

Abstract:

Tumor necrosis factor alpha (TNF), signaling through TNFR2, may act as an autocrine growth factor for renal tubular epithelial cells. Clear cell renal carcinomas (ccRCC) contain cancer stem cells (CSCs) that give rise to progeny which form the bulk of the tumor. CSCs are rarely in cell cycle and, as non-proliferating cells, resist most chemotherapeutic agents. Thus, recurrence after chemotherapy may result from the survival of CSCs. Therapeutic targeting of both CSCs and the more differentiated bulk tumor populations may provide a more effective strategy for treatment of RCC. In this study, we hypothesized that TNFR2 signaling will induce CSCs in ccRCC to enter cell cycle, so that treatment with ligands that engage TNFR2 will render CSCs susceptible to chemotherapy. To test this hypothesis, we have utilized wild-type TNF (wtTNF) or specific muteins selective for TNFR1 (R1TNF) or TNFR2 (R2TNF) to treat either short-term organ cultures of ccRCC and adjacent normal kidney (NK) tissue or cultures of CD133+ cells isolated from ccRCC and adjacent NK, hereafter referred to as stem cell-like cells (SCLCs). The effect of cyclophosphamide (CP), currently an effective anticancer agent, was tested on CD133+ SCLCs from ccRCC and NK before and after R2TNF treatment. Responses to TNF were assessed by flow cytometry (FACS), immunofluorescence, quantitative real-time PCR, TUNEL, and cell viability assays. The cytotoxic effect of CP was analyzed by Annexin V and propidium iodide staining with FACS. In addition, we assessed the effect of TNF on the differentiation of isolated SCLCs using a three-dimensional (3D) culture system. Clinical samples of ccRCC contain a greater number of SCLCs compared to NK, and the number of SCLCs increases with higher tumor grade. Isolated SCLCs show expression of stemness markers (Oct4, Nanog, Sox2, Lin28) but not differentiation markers (cytokeratin, CD31, CD45, and EpCAM). In ccRCC organ cultures, wtTNF and R2TNF increase CD133 and TNFR2 expression and promote cell cycle entry, whereas wtTNF and R1TNF increase TNFR1 expression and promote cell death of SCLCs. Similar findings are observed in SCLCs isolated from NK, but the effect was greater in SCLCs isolated from ccRCC. Application of CP distinctly triggered apoptotic and necrotic cell death in SCLCs pre-treated with R2TNF as compared to CP treatment alone, with SCLCs from ccRCC more sensitive to CP than SCLCs from NK. Furthermore, TNF promotes differentiation of SCLCs to an epithelial phenotype in 3D cultures, confirmed by cytokeratin expression and loss of the stemness markers Nanog and Sox2. The differentiated cells show positive expression of TNF and TNFR2. These findings provide evidence that selective engagement of TNFR2 drives CSCs to cell proliferation/differentiation, and targeting of cycling cells with a TNFR2 agonist in combination with anti-cancer agents may be a potential therapy for RCC.

Keywords: cancer stem cells, ccRCC, cell cycle, cell death, TNF, TNFR1, TNFR2, CD133

Procedia PDF Downloads 242
287 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, then general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes its state. In order to fit with the capabilities of practical existing smart meters, we work on low-frequency data sampled at 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software that uses behaviour simulation of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector for the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and real data from the Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
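
As a rough illustration of the DTW-based matching step described above, the following Python sketch compares a detected power segment against stored appliance signatures using a plain dynamic-programming DTW distance; the signature data and threshold are hypothetical placeholders, not values from the study.

import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical appliance signatures (active power in W, sampled at 1/60 Hz).
signatures = {
    "fridge": np.array([0, 80, 85, 85, 80, 0], dtype=float),
    "kettle": np.array([0, 1800, 1850, 0], dtype=float),
}

def identify(segment, max_distance=500.0):
    """Return the best-matching appliance for a detected power segment."""
    scores = {name: dtw_distance(segment, sig) for name, sig in signatures.items()}
    best = min(scores, key=scores.get)
    return best if scores[best] <= max_distance else "unknown"

print(identify(np.array([0, 1750, 1900, 0], dtype=float)))  # -> "kettle"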

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 50
286 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters

Authors: Jyoti Sahu, Vinay A. Juvekar

Abstract:

Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g. polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the solution in the electrolyte/organic molecule at different temperatures. Since the number of parameters is large, inaccuracies can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through its temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for free energy. Hence, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameter estimation is sought. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (isopropanol/ethanol) as the model system. The Robinson-Stokes-Glueckauf model is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. water-polymer, water-salt and polymer-salt (w-water, p-polymer, s-salt). These parameters depend both on temperature and concentrations. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence is expressed in a prescribed parametric form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of electrolyte-water-organic molecule mixtures is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperature of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments. Thus, the importance of the CST data in the estimation of the parameters of the thermodynamic model is confirmed through this work.

Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature

Procedia PDF Downloads 360
285 Promoting Compassionate Communication in a Multidisciplinary Fellowship: Results from a Pilot Evaluation

Authors: Evonne Kaplan-Liss, Val Lantz-Gefroh

Abstract:

Arts and humanities are often incorporated into medical education to help deepen understanding of the human condition and the ability to communicate from a place of compassion. However, a gap remains in our knowledge of compassionate communication training for postgraduate medical professionals (as opposed to students and residents); how training opportunities include and impact the artists themselves, and how train-the-trainer models can support learners to become teachers. In this report, the authors present results from a pilot evaluation of the UC San Diego Health: Sanford Compassionate Communication Fellowship, a 60-hour experiential program that uses theater, narrative reflection, poetry, literature, and journalism techniques to train a multidisciplinary cohort of medical professionals and artists in compassionate communication. In the culminating project, fellows design and implement their own projects as teachers of compassionate communication in their respective workplaces. Qualitative methods, including field notes and 30-minute Zoom interviews with each fellow, were used to evaluate the impact of the fellowship. The cohort included both artists (n=2) and physicians representing a range of specialties (n=7), such as occupational medicine, palliative care, and pediatrics. The authors coded the data using thematic analysis for evidence of how the multidisciplinary nature of the fellowship impacted the fellows’ experiences. The findings show that the multidisciplinary cohort contributed to a greater appreciation of compassionate communication in general. Fellows expressed that the ability to witness how those in different fields approached compassionate communication enhanced their learning and helped them see how compassion can be expressed in various contexts, which was both “exhilarating” and “humbling.” One physician expressed that the fellowship has been “really helpful to broaden my perspective on the value of good communication.” Fellows shared how what they learned in the fellowship translated to increased compassionate communication, not only in their professional roles but in their personal lives as well. A second finding was the development of a supportive community. Because each fellow brought their own experiences and expertise, there was a sense of genuine ability to contribute as well as a desire to learn from others. A “brave space” was created by the fellowship facilitators and the inclusion of arts-based activities: a space that invited vulnerability and welcomed fellows to make their own meaning without prescribing any one answer or right way to approach compassionate communication. This brave space contributed to a strong connection among the fellows and reports of increased well-being, as well as multiple collaborations post-fellowship to carry forward compassionate communication training at their places of work. Results show initial evidence of the value of a multidisciplinary fellowship for promoting compassionate communication for both artists and physicians. The next steps include maintaining the supportive fellowship community and collaborations with a post-fellowship affiliate faculty program; scaling up the fellowship with non-physicians (e.g., nurses and physician assistants); and collecting data from family members, colleagues, and patients to understand how the fellowship may be creating a ripple effect outside of the fellowship through fellows’ compassionate communication.

Keywords: compassionate communication, communication in healthcare, multidisciplinary learning, arts in medicine

Procedia PDF Downloads 42
284 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer

Authors: Binder Hans

Abstract:

Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is inevitable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to decipher cancer genesis, progression and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the big size of the data, their complexity, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and the need for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the discovery of complex diseases such as gliomas, melanomas and colon cancer on the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on the joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view on the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
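
As a minimal sketch of the SOM-based portrayal idea (not the authors' pipeline), the following Python snippet trains a small self-organizing map on an expression matrix and maps each sample onto the grid; it assumes the widely used minisom package and a hypothetical samples-by-genes array.

import numpy as np
from minisom import MiniSom  # assumed dependency: pip install minisom

# Hypothetical omics matrix: 100 samples x 500 features (e.g., gene expression),
# standardized per feature before training.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 500))
data = (data - data.mean(axis=0)) / data.std(axis=0)

# 20x20 SOM grid: each node learns a prototype expression profile.
som = MiniSom(20, 20, data.shape[1], sigma=2.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(data)
som.train_random(data, num_iteration=5000)

# "Portrait" of one sample: the grid coordinates of its best-matching unit,
# plus a per-node distance map that can be rendered as an image.
bmu = som.winner(data[0])
portrait = np.linalg.norm(som.get_weights() - data[0], axis=2)
print("best-matching unit:", bmu, "portrait shape:", portrait.shape)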

Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas

Procedia PDF Downloads 122
283 Mapping the State of the Art of European Companies Doing Social Business at the Base of the Economic Pyramid as an Advanced Form of Strategic Corporate Social Responsibility

Authors: Claudio Di Benedetto, Irene Bengo

Abstract:

The objective of the paper is to study how large European companies develop social business (SB) at the base of the economic pyramid (BoP). BoP markets are defined as the four billion people living with an annual income below $3,260 in local purchasing power. Although they are heterogeneous in terms of geographic range, they present some common characteristics: the presence of significant unmet (social) needs, a high level of informal economy and the so-called 'poverty penalty'. As a result, most people living at the BoP are excluded from the value created by the global market economy. But it is worth noting that the BoP population, with an aggregate purchasing power of around $5 trillion a year, represents a huge opportunity for companies that want to enhance their long-term profitability perspective. We suggest that in this context, the development of SB is, for companies, an innovative and promising way to satisfy unmet social needs and to experience new forms of value creation. Indeed, SB can be considered a strategic model to develop CSR programs that fully integrate the social dimension into the business to create economic and social value simultaneously. Although many studies in the literature have been conducted on social business, only a few have explicitly analyzed the phenomenon from a company perspective, and the role of companies in the development of such initiatives remains understudied, with fragmented results. To fill this gap, the paper analyzes the key characteristics of the social business initiatives developed by European companies at the BoP. The study was performed by analyzing 1,475 European companies participating in the United Nations Global Compact, the world's leading corporate social responsibility program. Through the analysis of the corporate websites, the study identifies companies that actually do SB at the BoP. For the SB initiatives identified, information was collected according to a framework adapted from an existing SB model. Preliminary results show that more than one hundred European companies have already implemented social businesses at the BoP, accounting for 6.5% of the total. This percentage increases to 15% if the focus is on companies with more than 10,440 employees. In terms of geographic distribution, 80% of companies doing SB at the BoP are located in western and southern Europe. The companies most active in promoting SB belong to the financial sector (20%), the energy sector (17%) and the food and beverage sector (12%). In terms of social needs addressed, almost 30% of the companies develop SB to provide access to energy and WASH, 25% of companies develop SB to reduce local unemployment or to promote local entrepreneurship, and 21% of companies develop SB to promote financial inclusion of the poor. In developing SB, companies implement different social business configurations ranging from forms of outsourcing to internal development models. The study identifies seven main configurations through which companies develop social business, and each configuration presents distinguishing characteristics with respect to the involvement of the company in the management, the resources provided and the benefits achieved. By performing different analyses on the collected data, the paper provides detailed insights on how European companies develop SB at the BoP.

Keywords: base of the economic pyramid, corporate social responsibility, social business, social enterprise

Procedia PDF Downloads 202
282 Exploring 3-D Virtual Art Spaces: Engaging Student Communities Through Feedback and Exhibitions

Authors: Zena Tredinnick-Kirby, Anna Divinsky, Brendan Berthold, Nicole Cingolani

Abstract:

Faculty members from The Pennsylvania State University, Zena Tredinnick-Kirby, Ph.D., and Anna Divinsky are at the forefront of an innovative educational approach to improve access in asynchronous online art courses. Their pioneering work weaves virtual reality (VR) technologies to construct a more equitable educational experience for students by transforming their learning and engagement. The significance of their study lies in the need to bridge the digital divide in online art courses, making them more inclusive and interactive for all distance learners. In an era where conventional classroom settings are no longer the sole means of instruction, Tredinnick-Kirby and Divinsky harness the power of instructional technologies to break down geographical barriers by incorporating an interactive VR experience that facilitates community building within an online environment transcending physical constraints. The methodology adopted by Tredinnick-Kirby, and Divinsky is centered around integrating 3D virtual spaces into their art courses. Spatial.io, a virtual world platform, enables students to develop digital avatars and engage in virtual art museums through a free browser-based program or an Oculus headset, where they can interact with other visitors and critique each other’s artwork. The goal is not only to provide students with an engaging and immersive learning experience but also to nourish them with a more profound understanding of the language of art criticism and technology. Furthermore, the study aims to cultivate critical thinking skills among students and foster a collaborative spirit. By leveraging cutting-edge VR technology, students are encouraged to explore the possibilities of their field, experimenting with innovative tools and techniques. This approach not only enriches their learning experience but also prepares them for a dynamic and ever-evolving art landscape in technology and education. One of the fundamental objectives of Tredinnick-Kirby and Divinsky is to remodel how feedback is derived through peer-to-peer art critique. Through the inclusion of 3D virtual spaces into the curriculum, students now have the opportunity to install their final artwork in a virtual gallery space and incorporate peer feedback, enabling students to exhibit their work opening the doors to a collaborative and interactive process. Students can provide constructive suggestions, engage in discussions, and integrate peer commentary into developing their ideas and praxis. This approach not only accelerates the learning process but also promotes a sense of community and growth. In summary, the study conducted by the Penn State faculty members Zena Tredinnick-Kirby, and Anna Divinsky represents innovative use of technology in their courses. By incorporating 3D virtual spaces, they are enriching the learners' experience. Through this inventive pedagogical technique, they nurture critical thinking, collaboration, and the practical application of cutting-edge technology in art. This research holds great promise for the future of online art education, transforming it into a dynamic, inclusive, and interactive experience that transcends the confines of distance learning.

Keywords: Art, community building, distance learning, virtual reality

Procedia PDF Downloads 45
281 Thai Cane Farmers' Responses to Sugar Policy Reforms: An Intentions Survey

Authors: Savita Tangwongkit, Chittur S Srinivasan, Philip J. Jones

Abstract:

Thailand has become the world's fourth largest sugarcane producer and second largest sugar exporter. While there have been a number of drivers of this growth, the primary driver has been wide-ranging government support measures. Recently, the Thai government has emphasized the need for policy reform as part of a broader industry restructuring to bring the sector up to date with current and future developments in the international sugar market. Because of the sector's historical dependence on government support, any such reform is likely to have a very significant impact on the fortunes of Thai cane farmers. This study explores the impact of three policy scenarios, representing a spectrum of policy approaches, on Thai cane producers. These reform scenarios were designed in consultation with policy makers and academics working in the cane sector. Scenario 1 captures the current 'government proposal' for policy reform. This scenario removes certain domestic production subsidies but seeks to maintain as much support as is permissible under current WTO rules. The second scenario, 'protectionism', maintains the current internal market producer supports, but otherwise complies with international (WTO) commitments. Third, the 'libertarian' scenario removes all production support and market interventions, trade and domestic consumption distortions. The most important driver of producer behaviour in all of the scenarios is the producer price of cane. The cane price is highest under the protectionism scenario, followed by the government proposal and libertarian scenarios, respectively. Likely producer responses to these three policy scenarios were determined by means of a large-scale survey of cane farmers. The sample was stratified by size group, and quotas were filled by size group and region. One scenario was presented to each of three sub-samples, consisting of approx. 150 farmers each. The total sample size was 462 farms. Data was collected by face-to-face interview between June and August 2019. There was a marked difference in farmer response to the three scenarios. Farmers in the 'protectionism' scenario, which maintains the highest cane price, and those who farm larger cane areas, are more likely to continue cane farming. The libertarian scenario is likely to result in the greatest losses in terms of cane production volume, broadly double those of the 'protectionism' scenario, primarily due to farmers quitting cane production altogether. Over half of the lost cane production volume comes from medium-sized farms, i.e. the largest and smallest producers are the most resilient. This result is likely due to the fact that the medium size group is large enough to require hired labour but lacks the economies of scale of the largest farms. Across all size groups, the farms most heavily specialized in cane production, i.e. those devoting 26-50% of arable land to cane, are also the most vulnerable, with 70% of all farmers quitting cane production coming from this group. This investigation suggests that cane price is the most significant determinant of farmer behaviour, and that where scenarios drive significantly lower cane prices, policy makers should target support towards mid-sized producers, with policies that encourage efficiency gains and diversification into alternative agricultural crops.

Keywords: farmer intentions, farm survey, policy reform, Thai cane production

Procedia PDF Downloads 91
280 The Applications of Zero Water Discharge (ZWD) Systems for Environmental Management

Authors: Walter W. Loo

Abstract:

China declared the “zero discharge rules which leave no toxics into our living environment and deliver blue sky, green land and clean water to many generations to come”. The achievement of ZWD will provide conservation of water, soil and energy and provide drastic increase in Gross Domestic Products (GDP). Our society’s engine needs a major tune up; it is sputtering. ZWD is achieved in world’s space stations – no toxic air emission and the water is totally recycled and solid wastes all come back to earth. This is all done with solar power. These are all achieved under extreme temperature, pressure and zero gravity in space. ZWD can be achieved on earth under much less fluctuations in temperature, pressure and normal gravity environment. ZWD systems are not expensive and will have multiple beneficial returns on investment which are both financially and environmentally acceptable. The paper will include successful case histories since the mid-1970s. ZWD discharge can be applied to the following types of projects: nuclear and coal fire power plants with a closed loop system that will eliminate thermal water discharge; residential communities with wastewater treatment sump and recycle the water use as a secondary water supply; waste water treatment Plants with complete water recycling including water distillation to produce distilled water by very economical 24-hours solar power plant. Landfill remediation is based on neutralization of landfilled gas odor and preventing anaerobic leachate formation. It is an aerobic condition which will render landfill gas emission explosion proof. Desert development is the development of recovering soil moisture from soil and completing a closed loop water cycle by solar energy within and underneath an enclosed greenhouse. Salt-alkali land development can be achieved by solar distillation of salty shallow water into distilled water. The distilled water can be used for soil washing and irrigation and complete a closed loop water cycle with energy and water conservation. Heavy metals remediation can be achieved by precipitation of dissolved toxic metals below the plant or vegetation root zone by solar electricity without pumping and treating. Soil and groundwater remediation - abandoned refineries, chemical and pesticide factories can be remediated by in-situ electrobiochemical and bioventing treatment method without pumping or excavation. Toxic organic chemicals are oxidized into carbon dioxide and heavy metals precipitated below plant and vegetation root zone. New water sources: low temperature distilled water can be recycled for repeated use within a greenhouse environment by solar distillation; nano bubble water can be made from the distilled water with nano bubbles of oxygen, nitrogen and carbon dioxide from air (fertilizer water) and also eliminate the use of pesticides because the nano oxygen will break the insect growth chain in the larvae state. Three dimensional high yield greenhouses can be constructed by complete water recycling using the vadose zone soil as a filter with no farming wastewater discharge.

Keywords: greenhouses, no discharge, remediation of soil and water, wastewater

Procedia PDF Downloads 318
279 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes

Authors: Madushani Rodrigo, Banuka Athuraliya

Abstract:

In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming old ways of caring for health. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpreting X-ray images relies on the expertise and experience of medical professionals in the identification process. Sometimes, radiographic images are of low quality, leading to potential issues. Therefore, it is necessary to have a proper approach to accurately localize and classify fractures in real time. The research has revealed that the optimal approach needs to address the stated problem and employ appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the 12 available fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, built on the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
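
A minimal sketch of how a two-network classification ensemble of the kind described above could be wired up in PyTorch is shown below; the class count, input size and probability-averaging rule are illustrative assumptions, not details taken from the FracXpert implementation.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12  # the 12 fracture patterns listed in the abstract

# Two backbone classifiers with their heads replaced for the fracture classes.
resnet = models.resnet18(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)

vgg = models.vgg16(weights=None)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

def ensemble_predict(x):
    """Average the softmax outputs of both networks (soft voting)."""
    resnet.eval(); vgg.eval()
    with torch.no_grad():
        p1 = torch.softmax(resnet(x), dim=1)
        p2 = torch.softmax(vgg(x), dim=1)
        probs = (p1 + p2) / 2.0
    conf, cls = probs.max(dim=1)  # confidence score and predicted class
    return cls, conf

# Example: one batch of 224x224 RGB radiograph crops (random tensors here).
x = torch.randn(4, 3, 224, 224)
classes, confidences = ensemble_predict(x)
print(classes, confidences)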

Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16

Procedia PDF Downloads 47
278 Effect of Chitosan Oligosaccharide from Tenebrio Molitor on Prebiotics

Authors: Hyemi Kim, Jay Kim, Kyunghoon Han, Ra-Yeong Choi, In-Woo Kim, Hyung Joo Suh, Ki-Bae Hong, Sung Hee Han

Abstract:

Chitosan is used in various industries such as food and medical care because it is known to have various functions such as anti-obesity, anti-inflammatory and anti-cancer benefits. Most of the commercial chitosan is extracted from crustaceans. As the harvest rate of snow crabs and red snow crabs decreases and safety issues arise due to environmental pollution, research is underway to extract chitosan from insects. In this study, we used Response Surface Methodology (RSM) to predict the optimal conditions to produce chitosan oligosaccharides from mealworms (MCOS), which can be absorbed through the intestine as low-molecular-weight chitosan. The experimentally confirmed optimal conditions for MCOS production using chitosanase were found to be a substrate concentration of 2.5%, enzyme addition of 30 mg/g and a reaction time of 6 hours. The chemical structure and physicochemical properties of the produced MCOS were measured using MALDI-TOF mass spectra and FTIR spectra. The MALDI-TOF mass spectra revealed peaks corresponding to the dimer (375.045), trimer (525.214), tetramer (693.243), pentamer (826.296), and hexamer (987.360). In the FTIR spectra, commercial chitosan oligosaccharides exhibited a weak peak pattern at 3500-2500 cm-1, unlike chitosan or chitosan oligosaccharides. There was a difference in the peak at 3200~3500 cm-1, where different vibrations corresponding to OH and amine groups overlapped. Chitosan, chitosan oligosaccharide, and commercial chitosan oligosaccharide showed peaks at 2849, 2884, and 2885 cm-1, respectively, attributed to the absorption of the C-H stretching vibration of methyl or methine. The amide I, amide II, and amide III bands of chitosan, chitosan oligosaccharide, and commercial chitosan oligosaccharide exhibited peaks at 1620/1620/1602, 1553/1555/1505, and 1310/1309/1317 cm-1, respectively. Furthermore, the solubility of MCOS was 45.15±3.43, water binding capacity (WBC) was 299.25±4.57, and fat binding capacity (FBC) was 325.61±2.28 and the solubility of commercial chitosan oligosaccharides was 49.04±9.52, WBC was 280.55±0.50, and FBC was 157.22±18.15. Thus, the characteristics of MCOS and commercial chitosan oligosaccharides are similar. The results of investigating the impact of chitosan oligosaccharide on the proliferation of probiotics revealed increased growth in L. casei, L. acidophilus, and Bif. Bifidum. Therefore, the major short-chain fatty acids produced by gut microorganisms, such as acetic acid, propionic acid, and butyric acid, increased within 24 hours of adding 1% (p<0.01) and 2% (p<0.001) MCOS. The impact of MCOS on the overall gut microbiota was assessed, revealing that the Chao1 index did not show significant differences, but the Simpson index decreased in a concentration-dependent manner, indicating a higher species diversity. The addition of MCOS resulted in changes in the overall microbial composition, with an increase in Firmicutes and Verrucomicrobia (p<0.05) compared to the control group, while Proteobacteria and Actinobacteria (p<0.05) decreased. At the genus level, changes in microbiota due to MCOS supplementation showed an increase in beneficial bacteria like lactobacillus, Romboutsia, Turicibacter, and Akkermansia (p<0.0001) while harmful bacteria like Enterococcus, Morganella, Proterus, and Bacteroides (p<0.0001) decreased. 
In this study, chitosan oligosaccharides were successfully produced under established conditions from mealworms, and these chitosan oligosaccharides are expected to have prebiotic effects, similar to those obtained from crabs.

Keywords: mealworms, chitosan, chitosan oligosaccharide, prebiotics

Procedia PDF Downloads 38
277 Effect of Salinity and Heavy Metal Toxicity on Gene Expression, and Morphological Characteristics in Stevia rebaudiana Plants

Authors: Umara Nissar Rafiqi, Irum Gul, Nazima Nasrullah, Monica Saifi, Malik Z. Abdin

Abstract:

Background: Stevia rebaudiana, a member of the Asteraceae family, is an important medicinal plant and produces a commercially used non-caloric natural sweetener, which is also an alternative herbal cure for diabetes. Steviol glycosides are the main sweetening compounds present in these plants. Secondary metabolites are crucial to the adaptation of plants to the environment and to overcoming stress conditions. In agricultural settings, the abiotic stresses of salinity, high metal toxicity and drought in particular are responsible for the majority of the reduction that differentiates yield potential from harvestable yield. Salt stress and heavy metal toxicity lead to increased production of reactive oxygen species (ROS). To avoid oxidative damage due to ROS and osmotic stress, plants have a system of antioxidant enzymes along with several stress-induced enzymes. This helps in scavenging the ROS and relieving the osmotic stress in different cell compartments. However, whether stress-induced toxicity modulates the activity of these enzymes in Stevia rebaudiana is poorly understood. Aim: The present study focused on the effect of salinity and heavy metal toxicity (lead and mercury) on physiological traits and transcriptional profiling of Stevia rebaudiana. Method: Stevia rebaudiana plants were collected from the Central Institute of Medicinal and Aromatic Plants (CIMAP), Pantnagar, India, and maintained under controlled conditions in a greenhouse at Hamdard University, Delhi, India. The plants were subjected to different concentrations of salt (0, 25, 50 and 75 mM, respectively) and of the heavy metals lead and mercury (0, 100, 200 and 300 µM, respectively). Physiological traits such as shoot length, root number and leaf growth were evaluated. The samples were collected at different developmental stages and analysed for transcriptional profiling by RT-PCR. The transcriptional studies in Stevia rebaudiana involve important antioxidant enzymes, namely catalase (CAT), superoxide dismutase (SOD) and cytochrome P450 monooxygenase (CYP), as well as the stress-induced genes aquaporin (AQU), auxin repressed protein (ARP-1) and Ndhc. The data were analysed using GraphPad Prism and expressed as mean ± SD. Result: Low salinity and lower metal toxicity did not affect the fresh weight of the plant. However, this was substantially decreased, by 55%, at high salinity and heavy metal treatment. With increasing salinity and heavy metal toxicity, the values of all studied physiological traits were significantly decreased. Chlorosis in treated plants was also observed, which could be due to changes in the Fe:Zn ratio. At low concentrations (up to 25 mM) of NaCl and heavy metals, we did not observe any significant difference in the gene expression of treated plants compared to control plants. Interestingly, at high salt concentration and high metal toxicity, a significant increase in the expression profile of stress-induced genes was observed in treated plants compared to controls (p < 0.005). Conclusion: Stevia rebaudiana is tolerant to lower salt and heavy metal concentrations. This study also suggests that with the increase in concentrations of salt and heavy metals, the harvest yield of S. rebaudiana was hampered.

Keywords: Stevia rebaudiana, natural sweetener, salinity, heavy metal toxicity

Procedia PDF Downloads 172
276 Transport of Inertial Finite-Size Floating Plastic Pollution by Ocean Surface Waves

Authors: Ross Calvert, Colin Whittaker, Alison Raby, Alistair G. L. Borthwick, Ton S. van den Bremer

Abstract:

Large concentrations of plastic have polluted the seas in the last half century, with harmful effects on marine wildlife and potentially on human health. Plastic pollution will have lasting effects because it is expected to take hundreds or thousands of years for plastic to decay in the ocean. The question arises of how waves transport plastic in the ocean. The predominant motion induced by waves creates ellipsoidal orbits. However, these orbits do not close, resulting in a drift. This is defined as Stokes drift. If a particle is infinitesimally small and the same density as water, it will behave exactly as the water does, i.e., as a purely Lagrangian tracer. However, as the particle grows in size or changes density, it will behave differently. The particle will then have its own inertia, the fluid will exert drag on the particle because there is a relative velocity, and it will rise or sink depending on its density and whether it is on the free surface. Previously, plastic pollution has been considered to be purely Lagrangian. However, the steepness of waves in the ocean is small, normally about α = k₀a = 0.1 (where k₀ is the wavenumber and a is the wave amplitude); this means that the mean drift flows are of the order of ten times smaller than the oscillatory velocities (Stokes drift is proportional to steepness squared, whilst the oscillatory velocities are proportional to the steepness). Thus, the particle motion must include the forces of the full motion, oscillatory and mean flow, as well as a dynamic buoyancy term to account for the free surface, to determine whether inertia is important. Tracking the motion of a floating inertial particle under wave action requires the fluid velocities, which form the forcing, and the full equations of motion of a particle to be solved. We start with the equation of motion of a sphere in unsteady flow with viscous drag. Terms can then be added to the equation of motion to better model floating plastic: a dynamic buoyancy to model a particle floating on the free surface, quadratic drag for larger particles, and a slope-sliding term. Using perturbation methods to order the equation of motion into sequentially solvable parts allows a parametric equation for the transport of inertial finite-sized floating particles to be derived. This parametric equation can then be validated using numerical simulations of the equation of motion and flume experiments. This paper presents a parametric equation for the transport of inertial floating finite-size particles by ocean waves. The equation shows an increase in Stokes drift for larger, less dense particles. The equation has been validated using numerical solutions of the equation of motion and laboratory flume experiments. The difference between the particle transport equation and a purely Lagrangian tracer is illustrated using world maps of the induced transport. This parametric transport equation would allow ocean-scale numerical models to include inertial effects of floating plastic when predicting or tracing the transport of pollutants.
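
For reference, the scaling argument above can be made explicit with the standard deep-water expressions for the orbital velocity and the Stokes drift of a purely Lagrangian tracer; these are textbook results quoted here for context, not the authors' extended inertial transport equation.

\[
\begin{aligned}
  u_{\mathrm{orb}} &\sim \omega a, \qquad \omega = \sqrt{g k_0},\\
  u_{\mathrm{S}}(z) &= \omega k_0 a^{2}\, e^{2 k_0 z}, \qquad z \le 0,\\
  \frac{u_{\mathrm{S}}(0)}{u_{\mathrm{orb}}} &\sim k_0 a = \alpha \approx 0.1,
\end{aligned}
\]

so the mean drift is roughly an order of magnitude smaller than the oscillatory velocity, as stated above.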

Keywords: perturbation methods, plastic pollution transport, Stokes drift, wave flume experiments, wave-induced mean flow

Procedia PDF Downloads 96
275 An Exploratory Factor and Cluster Analysis of the Willingness to Pay for Last Mile Delivery

Authors: Maximilian Engelhardt, Stephan Seeck

Abstract:

The COVID-19 pandemic is accelerating the already growing field of e-commerce. The resulting urban freight transport volume leads to increased traffic and negative environmental impacts. Furthermore, the service level of parcel logistics service providers lags far behind the expectations of consumers. These challenges can be solved by radically reorganizing the urban last mile distribution structure: parcels could be consolidated in a micro hub within the inner city and delivered within time windows by cargo bike. This approach leads to a significant improvement of consumer satisfaction with their overall delivery experience. However, it also leads to significantly increased costs per parcel. While there is a relevant share of online shoppers that are willing to pay for such a delivery service, there are no deeper insights about this target group available in the literature. Being aware of the importance of knowing target groups for businesses, the aim of this paper is to elaborate the most important factors that determine the willingness to pay for sustainable and service-oriented parcel delivery (factor analysis) and to derive customer segments (cluster analysis). In order to answer these questions, a data set is analyzed using quantitative methods of multivariate statistics. The data set was generated via an online survey in September and October 2020 within the five largest cities in Germany (n = 1,071). The data set contains socio-demographic, living-related and value-related variables, e.g. age, income, city, living situation and willingness to pay. In prior work by the authors, the data was analyzed applying descriptive and inferential statistical methods, which provided only limited insights regarding the above-mentioned research questions. Analyzing the data in an exploratory way using factor and cluster analysis promises deeper insights into the relevant influencing factors and segments for user behavior regarding the described parcel delivery concept. The analysis model is built and implemented with the help of the statistical software language R. The data analysis is currently being performed and will be completed in December 2021. It is expected that the results will show the most relevant factors that determine user behavior regarding sustainable and service-oriented parcel deliveries (e.g. age, current service experience, willingness to pay) and give deeper insights into the characteristics that describe the segments that are more or less willing to pay for a better parcel delivery service. Based on the expected results, relevant implications and conclusions can be derived for startups that are about to change the way parcels are delivered: more customer-oriented through time-window delivery and parcel consolidation, more environmentally friendly through cargo bikes. The results will give detailed insights regarding their target groups of parcel recipients. Further research can be conducted by exploring alternative revenue models (beyond the parcel recipient) that could compensate for the additional costs, e.g. online shops that increase their service level or municipalities that reduce traffic on their streets.
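
Although the study implements its analysis in R, the exploratory two-step logic (dimensionality reduction via factor analysis, then segmentation via clustering) can be sketched in a few lines of Python with scikit-learn; the variable names, file name and cluster count below are illustrative assumptions, not results of the survey.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Hypothetical survey data: one row per respondent.
df = pd.read_csv("survey.csv")  # placeholder file name
features = ["age", "income", "service_experience", "env_concern", "willingness_to_pay"]

# 1) Standardize and extract latent factors from the observed variables.
X = StandardScaler().fit_transform(df[features])
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(X)   # factor scores per respondent
loadings = fa.components_      # which variables load on which factor

# 2) Cluster respondents in factor space to obtain customer segments.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
df["segment"] = kmeans.fit_predict(scores)

print(loadings)
print(df.groupby("segment")["willingness_to_pay"].mean())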

Keywords: customer segmentation, e-commerce, last mile delivery, parcel service, urban logistics, willingness-to-pay

Procedia PDF Downloads 85
274 Cyber-Victimization among Higher Education Students as Related to Academic and Personal Factors

Authors: T. Heiman, D. Olenik-Shemesh

Abstract:

Over the past decade, with the rapid growth of electronic communication, the internet and, in particular, social networking have become an inseparable part of people's daily lives. Along with their benefits, a new type of online aggression has emerged, defined as cyber-bullying, a form of interpersonal aggressive behavior that takes place through electronic means. Cyber-bullying is characterized by repeated maladaptive use of authority and power over time, using computers and cell phones to send insulting messages and hurtful pictures. Preliminary findings suggest that the prevalence of involvement in cyber-bullying among higher education students varies between 10 and 35%. To date, universities are facing an uphill effort in trying to restrain online misbehavior. As no studies have examined the relationships between cyber-bullying involvement and personal aspects, or its impacts on academic achievement and work functioning, the present study examined the nature of cyber-bullying involvement among 1,052 undergraduate students (mean age = 27.25, S.D = 4.81; 66.2% female), their coping with it, as well as the effects of social support, perceived self-efficacy, well-being, and body perception in relation to cyber-victimization. We assume that students in higher education are a vulnerable population at high risk of being cyber-victims. We hypothesize that social support might serve as a protective factor and will moderate the relationships between the socio-emotional variables and the occurrence of cyber-victimization. The findings of this study present the relationships between cyber-victimization and the social-emotional aspects, which constitute risk and protective factors. After receiving approval from the Ethics Committee of the University, a Google Drive questionnaire was sent to a random sample of students studying in the various University study centers. Students' participation was voluntary, and they completed the five questionnaires anonymously: cyber-bullying, perceived self-efficacy, subjective well-being, social support and body perception. Results revealed that 11.6% of the students reported being cyber-victims during the last year. Examining the emotional and behavioral reactions to cyber-victimization revealed that female emotional and behavioral reactions were significantly greater than the male reactions (p < .001). Moreover, females reported significantly higher social support compared to men; males reported significantly lower social capability than females; and men's body perception was significantly more positive than women's. No gender differences were observed for the subjective well-being scale. Significant positive correlations were found between cyber-victimization and fewer friends, lower grades, and work ineffectiveness (r = 0.37-0.40, p < 0.001). The results of the hierarchical regression indicated that cyber-victimization can be significantly predicted by lower social support, lower body perception, and gender (female), which explained 5.6% of the variance (R2 = 0.056, F(5,1047) = 12.47, p < 0.001). The findings deepen our understanding of students' involvement in cyber-bullying and present the relationships of the social-emotional and academic aspects with cyber-victimization. In view of our findings, higher education policy could help facilitate coping with cyber-bullying incidents, and student support units could develop intervention programs aimed at reducing cyber-bullying and its impacts.
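
A hierarchical (nested-model) regression of the kind reported above can be sketched in Python with statsmodels: a baseline model is fitted first, predictors are added in a second block, and the R-squared increment is tested with a nested F-test. The column names and file name are hypothetical placeholders, not the study's data.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # placeholder: one row per student

# Block 1: demographic predictor only.
m1 = smf.ols("cyber_victimization ~ C(gender)", data=df).fit()

# Block 2: add the socio-emotional predictors.
m2 = smf.ols(
    "cyber_victimization ~ C(gender) + social_support + body_perception"
    " + self_efficacy + well_being",
    data=df,
).fit()

# R-squared change and nested F-test for the added block.
print("R2 block 1:", m1.rsquared)
print("R2 block 2:", m2.rsquared, "delta:", m2.rsquared - m1.rsquared)
f_stat, p_value, df_diff = m2.compare_f_test(m1)
print(f_stat, p_value, df_diff)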

Keywords: academic and personal factors, cyber-victimization, social support, higher education

Procedia PDF Downloads 267
273 Delivering User Context-Sensitive Service in M-Commerce: An Empirical Assessment of the Impact of Urgency on Mobile Service Design for Transactional Apps

Authors: Daniela Stephanie Kuenstle

Abstract:

Complex industries such as banking or insurance experience slow growth in mobile sales. While today’s mobile applications are sophisticated and enable location based and personalized services, consumers prefer online or even face-to-face services to complete complex transactions. A possible reason for this reluctance is that the provided service within transactional mobile applications (apps) does not adequately correspond to users’ needs. Therefore, this paper examines the impact of the user context on mobile service (m-service) in m-commerce. Motivated by the potential which context-sensitive m-services hold for the future, the impact of temporal variations as a dimension of user context, on m-service design is examined. In particular, the research question asks: Does consumer urgency function as a determinant of m-service composition in transactional apps by moderating the relation between m-service type and m-service success? Thus, the aim is to explore the moderating influence of urgency on m-service types, which includes Technology Mediated Service and Technology Generated Service. While mobile applications generally comprise features of both service types, this thesis discusses whether unexpected urgency changes customer preferences for m-service types and how this consequently impacts the overall m-service success, represented by purchase intention, loyalty intention and service quality. An online experiment with a random sample of N=1311 participants was conducted. Participants were divided into four treatment groups varying in m-service types and urgency level. They were exposed to two different urgency scenarios (high/ low) and two different app versions conveying either technology mediated or technology generated service. Subsequently, participants completed a questionnaire to measure the effectiveness of the manipulation as well as the dependent variables. The research model was tested for direct and moderating effects of m-service type and urgency on m-service success. Three two-way analyses of variance confirmed the significance of main effects, but demonstrated no significant moderation of urgency on m-service types. The analysis of the gathered data did not confirm a moderating effect of urgency between m-service type and service success. Yet, the findings propose an additive effects model with the highest purchase and loyalty intention for Technology Generated Service and high urgency, while Technology Mediated Service and low urgency demonstrate the strongest effect for service quality. The results also indicate an antagonistic relation between service quality and purchase intention depending on the level of urgency. Although a confirmation of the significance of this finding is required, it suggests that only service convenience, as one dimension of mobile service quality, delivers conditional value under high urgency. This suggests a curvilinear pattern of service quality in e-commerce. Overall, the paper illustrates the complex interplay of technology, user variables, and service design. With this, it contributes to a finer-grained understanding of the relation between m-service design and situation dependency. Moreover, the importance of delivering situational value with apps depending on user context is emphasized. Finally, the present study raises the demand to continue researching the impact of situational variables on m-service design in order to develop more sophisticated m-services.
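
The two-way analyses of variance described above can be reproduced in outline with statsmodels; the factor levels, column names and file name below are illustrative placeholders matching the reported design (m-service type x urgency), not the experiment's actual data.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("experiment.csv")  # placeholder: one row per participant
# assumed columns: service_type in {"mediated", "generated"},
#                  urgency in {"low", "high"}, purchase_intention (numeric)

model = smf.ols("purchase_intention ~ C(service_type) * C(urgency)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # main effects and interaction
print(anova_table)

# Repeat with loyalty_intention and service_quality as outcomes
# to mirror the three two-way ANOVAs reported in the abstract.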

Keywords: mobile consumer behavior, mobile service design, mobile service success, self-service technology, situation dependency, user-context sensitivity

Procedia PDF Downloads 249
272 Leveraging the HDAC Inhibitory Pharmacophore to Construct Deoxyvasicinone Based Tractable Anti-Lung Cancer Agent and pH-Responsive Nanocarrier

Authors: Ram Sharma, Esha Chatterjee, Santosh Kumar Guru, Kunal Nepali

Abstract:

A tractable anti-lung cancer agent was identified via the installation of a Ring C expanded synthetic analogue of the alkaloid vasicinone [7,8,9,10-tetrahydroazepino[2,1-b] quinazolin-12(6H)-one (TAZQ)] as a surface recognition part in the HDAC inhibitory three-component model. It is noteworthy that the candidature of TAZQ was deemed suitable for accommodation in the HDAC inhibitory pharmacophore as per the results of the fragment recruitment process conducted by our laboratory. TAZQ was pinpointed through the fragment screening program as a synthetically flexible fragment endowed with moderate cell growth inhibitory activity against lung cancer cell lines, and it was anticipated that the use of the aforementioned fragment to generate hydroxamic acid functionality (zinc-binding motif) bearing HDAC inhibitors would boost the antitumor efficacy of TAZQ. Consistent with our aim of applying epigenetic targets to the treatment of lung cancer, a strikingly potent anti-lung cancer scaffold (compound 6) was pinpointed through a series of in-vitro experiments. Notably, the compound manifested a remarkable activity profile against KRAS- and EGFR-mutant lung cancer cell lines (IC50 = 0.80 - 0.96 µM), and the effects were found to be mediated through preferential HDAC6 inhibition (IC50 = 12.9 nM). In addition to HDAC6 inhibition, the compound also elicited HDAC1 and HDAC3 inhibitory activity, with IC50 values of 49.9 nM and 68.5 nM, respectively. The HDAC inhibitory ability of compound 6 was also confirmed by the results of a western blot experiment, which revealed its potential to decrease the expression levels of HDAC isoforms (HDAC1, HDAC3, and HDAC6). Notably, complete downregulation of the HDAC6 isoform was exerted by compound 6 at 0.5 and 1 µM. Moreover, in another western blot experiment, treatment with hydroxamic acid 6 led to upregulation of H3 acK9 and α-Tubulin acK40 levels, ascertaining its inhibitory activity toward both class I and class IIB HDACs. The results of other assays were also encouraging, as treatment with compound 6 led to suppression of the colony formation ability of A549 cells, induction of apoptosis, and an increase in autophagic flux. In silico studies allowed us to rationalize the results of the experimental assays, and some key interactions of compound 6 with the amino acid residues of HDAC isoforms were identified. In light of the impressive activity spectrum of compound 6, a pH-responsive nanocarrier (hyaluronic acid-compound 6 nanoparticles) was prepared. The dialysis bag approach was used for the assessment of the nanoparticles under both normal and acidic conditions, and the pH-sensitive nature of the hyaluronic acid-compound 6 nanoparticles was confirmed. Encouragingly, the nanoformulation was devoid of cytotoxicity against L929 mouse fibroblast cells (normal settings) and exhibited selective cytotoxicity towards the A549 lung cancer cell line. In a nutshell, compound 6 appears to be a promising adduct, and a detailed investigation of this compound might yield a therapeutic for the treatment of lung cancer.

Keywords: HDAC inhibitors, lung cancer, scaffold, hyaluronic acid, nanoparticles

Procedia PDF Downloads 68
271 Autophagy Promotes Vascular Smooth Muscle Cell Migration in vitro and in vivo

Authors: Changhan Ouyang, Zhonglin Xie

Abstract:

In response to proatherosclerotic factors such as oxidized lipids, or to therapeutic interventions such as angioplasty, stents, or bypass surgery, vascular smooth muscle cells (VSMCs) migrate from the media to the intima, resulting in intimal hyperplasia, restenosis, graft failure, or atherosclerosis. These proatherosclerotic factors also activate autophagy in VSMCs. However, the functional role of autophagy in vascular health and disease remains poorly understood. In the present study, we determined the role of autophagy in the regulation of VSMC migration. Autophagy activity in cultured human aortic smooth muscle cells (HASMCs) and mouse carotid arteries was measured by Western blot analysis of microtubule-associated protein 1 light chain 3B (LC3B) and P62. VSMC migration was determined by scratch wound assay and transwell migration assay. Ex vivo smooth muscle cell migration was determined using an aortic ring assay. In vivo SMC migration was examined by staining carotid artery sections for smooth muscle alpha actin (alpha SMA) after carotid artery ligation. To examine the relationship between autophagy and neointimal hyperplasia, C57BL/6J mice were subjected to carotid artery ligation. Seven days after injury, protein levels of Atg5, Atg7, Beclin1, and LC3B drastically increased and remained higher in the injured arteries three weeks after the injury. In parallel with the activation of autophagy, vascular injury induced neointimal hyperplasia, as estimated by an increased intima/media ratio. The en face staining of the carotid artery showed that vascular injury enhanced alpha SMA staining in the intimal cells as compared with the sham operation. Treatment of HASMCs with platelet-derived growth factor (PDGF), one of the major factors for vascular remodeling in response to vascular injury, increased Atg7 and LC3 II protein levels and enhanced autophagosome formation. In addition, the aortic ring assay demonstrated that PDGF-treated aortic rings displayed an increase in neovessel formation compared with control rings. Whole mount staining for CD31 and alpha SMA in PDGF-treated neovessels revealed that the neovessel structures were stained by alpha SMA but not CD31. In contrast, pharmacological and genetic suppression of autophagy inhibits VSMC migration. In particular, gene silencing of Atg7 inhibited VSMC migration induced by PDGF. Furthermore, three weeks after ligation, markedly decreased neointimal formation was found in mice treated with chloroquine, an inhibitor of autophagy. Quantitative morphometric analysis of the injured vessels revealed a marked reduction in the intima/media ratio in the mice treated with chloroquine. Conclusion: Autophagy activation increases VSMC migration, while autophagy suppression inhibits VSMC migration. These findings suggest that autophagy suppression may be an important therapeutic strategy for atherosclerosis and intimal hyperplasia.

Keywords: autophagy, vascular smooth muscle cell, migration, neointimal formation

Procedia PDF Downloads 286
270 Iran’s Sexual and Reproductive Rights Roll-Back: An Overview of Iran’s New Population Policies

Authors: Raha Bahreini

Abstract:

This paper discusses the roll-back of women’s sexual and reproductive rights in the Islamic Republic of Iran, which has come in the wake of a striking shift in the country’s official population policies. Since the late 1980s, Iran has won worldwide praise for its sexual and reproductive health and services, which have contributed to a steady decline in the country’s fertility rate, from 7.0 births per woman in 1980 to 5.5 in 1988, 2.8 in 1996 and 1.85 in 2014. This is owed to a significant increase in the voluntary use of modern contraception in both rural and urban areas. In 1976, only 37 per cent of women were using at least one method of contraception; by 2014 this figure had reportedly risen to a high of nearly 79 per cent for married girls and women living in urban areas and 73.78 per cent for those living in rural areas. Such progress may soon be halted. In July 2012, Iran’s Supreme Leader Ayatollah Sayed Ali Khamenei denounced Iran’s family planning policies as an imitation of Western lifestyle. He exhorted the authorities to increase Iran’s population to 150 to 200 million (from around 78.5 million), including by cutting subsidies for contraceptive methods and dismantling the state’s Family and Population Planning Programme. Shortly thereafter, Iran’s Minister of Health and Medical Education announced the scrapping of the budget for the state-funded Family and Population Planning Programme. Iran’s Parliament subsequently introduced two bills: the Comprehensive Population and Exaltation of Family Bill (Bill 315) and the Bill to Increase Fertility Rates and Prevent Population Decline (Bill 446). Bill 446 outlaws voluntary tubectomies, which are believed to be the second most common method of modern contraception in Iran, and blocks access to information about contraception, denying women the opportunity to make informed decisions about the number and spacing of their children. Coupled with the elimination of state funding for Iran’s Family and Population Programme, the move would undoubtedly result in greater numbers of unwanted pregnancies, forcing more women to seek illegal and unsafe abortions. Bill 315 proposes various discriminatory measures in the areas of employment, divorce, and protection from domestic violence in order to promote a culture wherein wifedom and child-bearing are seen as women’s primary duties. The Bill, for example, instructs private and public entities to prioritize, in sequence, men with children, married men without children and married women with children when hiring for certain jobs. It also bans the recruitment of single individuals as family law lawyers, public and private school teachers and members of the academic boards of universities and higher education institutes. The paper discusses the consequences of these initiatives, which would, if continued, set the human rights of women and girls in Iran back by decades, leaving them with a future shaped by increased inequality, discrimination, poor health, limited choices and restricted freedoms, in breach of Iran’s international human rights obligations.

Keywords: family planning and reproductive health, gender equality and empowerment of women, human rights, population growth

Procedia PDF Downloads 280
269 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient’s AVF. The 6 locations are artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal). The labels are validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF. For example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes. The numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, we see that using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values do not matter since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance at initialization either leads to faster convergence or helps the model escape a local minimum.
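
A minimal sketch of how a scaled ordinal location code can be concatenated to a ViT feature vector, as described above. The backbone choice (torchvision's vit_b_16), the 768-dimensional feature size, the dictionary of location codes, and the scale factor are assumptions for illustration and not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

# Hypothetical ordinal codes mirroring the scheme in the abstract.
LOCATION_CODE = {"artery": 0, "arch": 1, "proximal": 2, "middle": 3, "distal": 4, "anastomosis": 5}

class LocationAwareViT(nn.Module):
    """ViT on spectrogram images with a scaled ordinal location code
    concatenated to the flattened feature vector."""
    def __init__(self, scale: float = 100.0, num_classes: int = 2):
        super().__init__()
        self.backbone = vit_b_16(weights=None)   # expects 3x224x224 spectrogram images
        self.backbone.heads = nn.Identity()      # expose the 768-dim feature vector
        self.scale = scale                       # scalar multiple applied to the integer code
        self.classifier = nn.Linear(768 + 1, num_classes)

    def forward(self, spectrogram: torch.Tensor, location: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(spectrogram)                        # (B, 768)
        code = (location.float() * self.scale).unsqueeze(1)       # (B, 1)
        return self.classifier(torch.cat([feats, code], dim=1))   # patent vs. stenotic logits

# usage sketch
model = LocationAwareViT(scale=100.0)
images = torch.randn(8, 3, 224, 224)                              # batch of spectrogram images
locs = torch.tensor([LOCATION_CODE["anastomosis"]] * 8)
logits = model(images, locs)
```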

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 57
268 Functional Traits and Agroecosystem Multifunctionality in Summer Cover Crop Mixtures and Monocultures

Authors: Etienne Herrick

Abstract:

As an economically and ecologically feasible method for farmers to introduce greater diversity into their crop rotations, cover cropping presents a valuable opportunity for improving the sustainability of food production. Planted in-between cash crop growing seasons, cover crops serve to enhance agroecosystem functioning, rather than being destined for sale or consumption. In fact, cover crops may hold the capacity to deliver multiple ecosystem functions or services simultaneously (multifunctionality). Building upon this line of research will not only benefit society at present, but also support its continued survival through its potential for restoring depleted soils and reducing the need for energy-intensive and harmful external inputs like fertilizers and pesticides. This study utilizes a trait-based approach to explore the influence of inter- and intra-specific interactions in summer cover crop mixtures and monocultures on functional trait expression and ecosystem services. Functional traits that enhance ecosystem services related to agricultural production include height, specific leaf area (SLA), root:shoot ratio, leaf C and N concentrations, and flowering phenology. Ecosystem services include biomass production, weed suppression, reduced N leaching, N recycling, and support of pollinators. Employing a trait-based approach may allow for the elucidation of mechanistic links between plant structure and resulting ecosystem service delivery. While relationships between some functional traits and the delivery of particular ecosystem services may be readily apparent through existing ecological knowledge (e.g. height positively correlating with weed suppression), this study will begin to quantify those relationships so as to gain further understanding of whether and how measurable variation in functional trait expression across cover crop mixtures and monocultures can serve as a reliable predictor of variation in the types and abundances of ecosystem services delivered. Six cover crop species, including legume, grass, and broadleaf functional types, were selected for growth in six mixtures and their component monocultures based upon the principle of trait complementarity. The tricultures (three-way mixtures) comprise a legume, grass, and broadleaf species, and include cowpea/sudex/buckwheat, sunnhemp/sudex/buckwheat, and chickling vetch/oat/buckwheat combinations; the dicultures contain the same legume and grass combinations as above, without the buckwheat broadleaf. By combining species with expectedly complementary traits (for example, legumes are N suppliers and grasses are N acquirers, creating a nutrient cycling loop) the cover crop mixtures may elicit a broader range of ecosystem services than that provided by a monoculture, though trade-offs could exist. Collecting functional trait data will enable the investigation of the types of interactions driving these ecosystem service outcomes. It also allows for generalizability across a broader range of species than just those selected for this study, which may aid in informing further research efforts exploring species and ecosystem functioning, as well as on-farm management decisions.

Keywords: agroecology, cover crops, functional traits, multifunctionality, trait complementarity

Procedia PDF Downloads 229
267 Active Filtration of Phosphorus in Ca-Rich Hydrated Oil Shale Ash Filters: The Effect of Organic Loading and Form of Precipitated Phosphatic Material

Authors: Päärn Paiste, Margit Kõiv, Riho Mõtlep, Kalle Kirsimäe

Abstract:

For small-scale wastewater management, treatment wetlands (TWs) can be used as a low-cost alternative to conventional treatment facilities. However, the P removal capacity of TW systems is usually problematic. P removal in TWs is mainly dependent on the physico-chemical and hydrological properties of the filter material. The highest P removal efficiency has been shown through Ca-phosphate precipitation (i.e., active filtration) in Ca-rich alkaline filter materials, e.g., industrial by-products like hydrated oil shale ash (HOSA) and metallurgical slags. In this contribution, we report preliminary results of a full-scale TW system using HOSA material for P removal from municipal wastewater at the Nõo site, Estonia. The main goals of this ongoing project are to evaluate: a) the long-term P removal efficiency of HOSA using real wastewater; b) the effect of a high organic loading rate; c) the effects of variable P-loading on the P removal mechanism (adsorption/direct precipitation); and d) the form and composition of phosphate precipitates. An onsite full-scale experiment with two concurrent filter systems for the treatment of municipal wastewater was established in September 2013. The system’s pretreatment steps include a septic tank (2 m²) and vertical down-flow LECA filters (3 m² each), followed by horizontal subsurface HOSA filters (effective volume 8 m³ each). The overall organic and hydraulic loading rates of both systems are the same. However, the first system is operated in a stable hydraulic loading regime and the second in a variable loading regime that imitates the wastewater production of an average household. Piezometers for water and perforated sample containers for filter material sampling were incorporated inside the filter beds to allow for continuous in-situ monitoring. During the 18 months of operation, the median removal efficiency (inflow to outflow) of both systems was over 99% for TP, 93% for COD and 57% for TN. However, we observed significant differences in the samples collected at different points inside the filter systems. In both systems, we observed the development of preferred flow paths and zones with high and low loadings. The filters show the formation and gradual advance of a “dead” zone along the flow path (a zone with saturated filter material characterized by ineffective removal rates), which develops more rapidly in the system working under the variable loading regime. The formation of the “dead” zone is accompanied by the growth of organic substances on the filter material particles that evidently inhibit P removal. Phase analysis of the used filter materials using the X-ray diffraction method reveals the formation of minor amounts of amorphous Ca-phosphate precipitates. This finding is supported by ATR-FTIR and SEM-EDS measurements, which also reveal Ca-phosphate and authigenic carbonate precipitation. Our first experimental results demonstrate that organic pollution and loading regime significantly affect the performance of hydrated ash filters. The material analyses also show that P is incorporated into a carbonate-substituted hydroxyapatite phase.

Keywords: active filtration, apatite, hydrated oil shale ash, organic pollution, phosphorus

Procedia PDF Downloads 254
266 Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations

Authors: Madan Chandra Maurya, A. R. Dar

Abstract:

Among all natural calamities, earthquakes are the most devastating; even the combined losses from all other calamities fall far short of the losses caused by earthquakes. We must therefore be prepared to face such events, which is only possible if we make our structures earthquake resistant. A review of structural damage to braced frame systems after several major earthquakes, including recent ones, has identified both anticipated and unanticipated damage. This damage has prompted many engineers and researchers around the world to consider new approaches to improve the behavior of braced frame systems. Extensive experimental studies over the last forty years on conventional buckling brace components and several braced frame specimens are briefly reviewed, highlighting that the number of studies on full-scale concentric braced frames is still limited. For this reason, the study centres on the plastic behavior of steel braced frame systems. In this study, two different analytical approaches have been used to predict the behavior and strength of an un-braced frame. The first is referred to as incremental elasto-plastic analysis, a plastic approach. This method gives the complete load-deflection history of the structure up to collapse. It is based on the plastic hinge concept for fully plastic cross-sections in a structure under increasing proportional loading. The incremental elasto-plastic, hinge-by-hinge method is used in this study because of its simplicity in tracing the complete load-deformation history of the two-storey un-braced scaled model. Experiments were then conducted on a two-storey scaled building model with and without a bracing system to obtain the true, experimental load-deformation curve of the scaled model. The only way forward is to understand and analyse these techniques and adopt them in our structures; the study titled Plastic Behavior of Steel Frames using Different Concentric Bracing Configurations deals with all of this. The study aimed at improving already practiced traditional systems and at checking the behavior and usefulness of the new configuration with respect to the X-braced system as the reference model, i.e., how its plastic behavior differs from that of the X-braced frame. Laboratory tests involved determining the plastic behavior of these models (with and without bracing) in terms of load-deformation curves. Thus, the aim of this study is to improve the lateral displacement resistance capacity by using a new configuration of the brace member in a concentric manner, different from the conventional concentric brace. Once the experimental and manual results (using the plastic approach) were compared, the results from both approaches were also compared with a nonlinear static (pushover) analysis using ETABS, i.e., how closely both of the previous results depict the behavior of the pushover curve and up to what limit. Test results show that all three approaches behave in a similar manner up to the yield point and confirm the applicability of elasto-plastic analysis (hinge-by-hinge method) for determining plastic behavior. Finally, the outcome from the three approaches shows that the new configuration chosen for study behaves in between the plane frame (without bracing) and the conventional X-braced frame.
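
To make the hinge-by-hinge idea concrete, the following minimal sketch traces the bilinear load-deflection curve of a textbook propped cantilever under a central point load, not the two-storey frame tested in the study; the plastic moment, span and flexural rigidity are illustrative values.

```python
# Propped cantilever (fixed at one end, simply supported at the other), central point load P.
# Illustrative section properties in consistent kN and m units.
Mp = 100.0     # plastic moment capacity, kN*m
L = 6.0        # span, m
EI = 8000.0    # flexural rigidity, kN*m^2

# Stage 1: elastic response until the first plastic hinge forms at the fixed end
# (elastic fixed-end moment = 3PL/16, moment under the load = 5PL/32).
P1 = 16.0 * Mp / (3.0 * L)                 # load at first hinge
d1 = 7.0 * P1 * L**3 / (768.0 * EI)        # deflection under the load at P1

# Stage 2: with a hinge at the fixed end, the member responds as simply supported;
# the extra load dP raises the moment under the load to Mp (second hinge = collapse mechanism).
M_mid_at_P1 = 5.0 * P1 * L / 32.0
dP = 4.0 * (Mp - M_mid_at_P1) / L
d2 = d1 + dP * L**3 / (48.0 * EI)

print(f"first hinge: P = {P1:.1f} kN, deflection = {1000 * d1:.1f} mm")
print(f"collapse:    P = {P1 + dP:.1f} kN (= 6*Mp/L), deflection = {1000 * d2:.1f} mm")
```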

Keywords: elasto-plastic analysis, concentric steel braced frame, pushover analysis, ETABS

Procedia PDF Downloads 206
265 E-Governance: A Key for Improved Public Service Delivery

Authors: Ayesha Akbar

Abstract:

Public service delivery has witnessed significant improvement with the integration of information and communication technology (ICT). It not only improves the management structure with advanced technology for the oversight of service delivery but also provides evidence for informed decisions and policy. Pakistan’s public sector organizations have largely been unable to produce good results in service delivery. Nevertheless, some public sector organizations in Pakistan have adopted modern technology and proved their worth by providing better service delivery standards. These good indicators provide a sound basis for integrating technology in public sector organizations and for shifting policy towards evidence-based policy making. Rescue-1122 is a public sector organization which provides emergency services and has proved to be a successful model for the provision of service delivery to save human lives and to ensure human development in Pakistan. Information about the organization was collected using a qualitative research methodology. The information is broadly based on primary and secondary sources, which include the Rescue-1122 website and official reports of organizations such as the UNDP (United Nations Development Programme) and WHO (World Health Organization), and on 10 in-depth interviews conducted with senior administrative staff of the organization working in the Lahore offices. The information received has been incorporated into the study for a better understanding of the organization and its management procedures. Rescue-1122 represents a successful model of delivering services efficiently to deal with disaster management. The management of Rescue-1122 has strategized its policies and procedures in such a way as to develop a comprehensive model with the integration of technology. This model provides efficient service delivery as well as maintaining the standards of the organization. The service delivery model of Rescue-1122 works on two fronts: the front-office interface and the back-office interface. The back office defines the procedures of operations and ensures the compliance of the staff, whereas the front office, equipped with the latest technology and good infrastructure, handles the emergency calls. Both ends are integrated with satellite-based vehicle tracking, a wireless system, a fleet monitoring system and IP cameras, which monitor every move of the staff to provide better services and to pinpoint shortcomings in the services. The standard time for reaching the emergency spot is 7 minutes, and while a case is being handled, the driver’s behavior, the traffic volume and the technical assistance being provided to the emergency case are monitored by the front office. All of this information is then uploaded from the provincial offices to the main dashboard at the Lahore headquarters. Rescue-1122 is putting the latest technology to use to deliver efficient services, to investigate flaws if found, and to develop data for informed decision making. Other public sector organizations in Pakistan can also develop such models to integrate technology for improving service delivery and to develop evidence for informed decisions and policy making.

Keywords: data, e-governance, evidence, policy

Procedia PDF Downloads 222
264 Keeping under the Hat or Taking off the Lid: Determinants of Social Enterprise Transparency

Authors: Echo Wang, Andrew Li

Abstract:

Transparency could be defined as the voluntary release of information by institutions that is relevant to their own evaluation. Transparency based on information disclosure is recognised to be vital for the Third Sector, as civil society organisations are under pressure to become more transparent to answer the call for accountability. The growing importance of social enterprises as hybrid organisations emerging from the nexus of the public, the private and the Third Sector makes their transparency a topic worth exploring. However, transparency for social enterprises has not yet been studied: as a new form of organisation that combines non-profit missions with commercial means, it is unclear to both the practical and the academic world whether the shift in operational logics from non-profit motives to for-profit pursuits has significantly altered their transparency. This is especially so in China, where informational governance and practices of information disclosure by local governments, industries and civil society are notably different from other countries. This study investigates the transparency-seeking behaviour of social enterprises in Greater China to understand what factors at the organisational level may affect their transparency, measured by their willingness to disclose financial information. We make use of the Survey on the Models and Development Status of Social Enterprises in the Greater China Region (MDSSGCR) conducted in 2015-2016. The sample consists of more than 300 social enterprises from the Mainland, Hong Kong and Taiwan. While most respondents have provided complete answers to most of the questions, there is tremendous variation in the respondents’ demonstrated level of transparency in answering those questions related to the financial aspects of their organisations, such as total revenue, net profit, source of revenue and expense. This has led to a lot of missing data on such variables. In this study, we take missing data as data. Specifically, we use missing values as a proxy for an organisation’s level of transparency. Our dependent variables are constructed from missing data on total revenue, net profit, source of revenue and cost breakdown. In addition, we also take into consideration the quality of answers in coding the dependent variables. For example, to be coded as transparent, an organisation must report the sources of at least 50% of its revenue. We have four groups of predictors of transparency, namely nature of organisation, decision-making body, funding channel and field of concentration. Furthermore, we control for an organisation’s stage of development, self-identity and region. The results show that social enterprises that are at later stages of organisational development and are funded by financial means are significantly more transparent than others. There is also some evidence that social enterprises located in the Northeast region of China are less transparent than those located in other regions, probably because of local political economy features. On the other hand, the nature of the organisation, the decision-making body and the field of concentration do not systematically affect the level of transparency. This study provides in-depth empirical insights into the information disclosure behaviour of social enterprises under a specific social context. It not only reveals important characteristics of Third Sector development in China but also contributes to the general understanding of hybrid institutions.
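
A minimal sketch of the "missing data as data" coding rule described above, assuming hypothetical survey field names; only the 50% revenue-source threshold is taken from the abstract, and the example rows are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical survey fields; missing values stand in for non-disclosure.
df = pd.DataFrame({
    "total_revenue":        [120000, np.nan, 56000],
    "net_profit":           [15000,  np.nan, np.nan],
    "revenue_source_share": [0.8,    0.3,    np.nan],   # share of revenue whose source is reported
    "cost_breakdown":       ["yes",  np.nan, "yes"],
})

def financially_transparent(row: pd.Series) -> int:
    """Code an organisation as transparent (1) only if the key financial items are
    disclosed and the sources of at least 50% of its revenue are reported."""
    discloses_items = row[["total_revenue", "net_profit", "cost_breakdown"]].notna().all()
    share = row["revenue_source_share"]
    reports_sources = pd.notna(share) and share >= 0.5
    return int(discloses_items and reports_sources)

df["transparent"] = df.apply(financially_transparent, axis=1)
print(df["transparent"].tolist())   # -> [1, 0, 0]
```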

Keywords: China, information transparency, organisational behaviour, social enterprise

Procedia PDF Downloads 154
263 Municipalities as Enablers of Citizen-Led Urban Initiatives: Possibilities and Constraints

Authors: Rosa Nadine Danenberg

Abstract:

In recent years, bottom-up urban development has started growing as an alternative to conventional top-down planning. Citizens and communities initiate small-scale interventions in large numbers, seemingly forming a trend. As a result, more and more cities are witnessing not only the growth of but also an interest in these initiatives, as they bear the potential to reshape urban spaces. Such alternative city-making efforts cause new dynamics in urban governance, with inevitable consequences for controlled city planning and its administration. The emergence of enabling relationships between top-down and bottom-up actors signals an increasingly common urban practice. Various case studies show that an enabling relationship is possible, yet how it can be optimally realized remains underexamined. Therefore, the seemingly growing worldwide phenomenon of ‘municipal bottom-up urban development’ necessitates an adequate governance structure. As such, the aim of this research is to contribute knowledge to how municipalities can enable citizen-led urban initiatives from a governance innovation perspective. Empirical case-study research in Stockholm and Istanbul, derived from interviews with founders of four citizen-led urban initiatives and one municipal representative in each city, provided valuable insights into possibilities and constraints for enabling practices. On the one hand, diverging outcomes emphasize the extreme oppositional features of the two cases (Stockholm and Istanbul). Firstly, the two cities’ characteristics are drastically different. Secondly, the ideologies and motives behind the emergence of the initiatives vary widely. Thirdly, the major constraints on citizen-led urban initiatives relating to the municipality are considerably different. Two types of municipal organizational structure produce different underlying mechanisms, which account for the constraints. The first municipal organizational structure is steered by bureaucracy (Stockholm). It produces an administrative division that brings up constraints such as a lack of responsibility, transparency and continuity on the part of municipal representatives. The second structure is dominated by municipal politics and governmental hierarchy (Istanbul). It produces informality, a lack of transparency and a fragmented civil society. In order to cope with the constraints produced by both types of organizational structure, the initiatives have adjusted their organization to the municipality’s underlying structures. On the other hand, this paper has in fact also come to a rather unifying conclusion. Interestingly, the suggested possibilities for an enabling relationship point to converging new urban governance arrangements. This could imply that there is a suitable governance structure for both types of municipal organizational structure. Namely, the combination of a neighborhood council with a municipal guide, together with allowance for the initiatives to adopt a politicizing attitude, is found to fit both cases. This combination in particular appears key to overcoming the varying constraints. A municipal guide steers the initiatives through bureaucratic struggles, supported by coproduction methods, while balancing out municipal politics. Next, a neighborhood council that is politically neutral and run by local citizens can function as an umbrella for citizen-led urban initiatives. What is crucial is that it should cater for a closer, more entangled relationship between municipalities and initiatives, with enhanced involvement of the initiatives in decision-making processes and limited influence of the prevailing constraints pointed out in this research.

Keywords: bottom-up urban development, governance innovation, Istanbul, Stockholm

Procedia PDF Downloads 193
262 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. Heterophonic syntax has its own history of growth, meaning a succession of different concepts and writing techniques. The trajectory of settling this phenomenon does not necessarily follow the chronology: there are highly complex primary stages and advanced stages of returning to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate from (unintentional) differences or ‘deviations’ from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible rhythmic and non-periodic/immeasurable framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the sound vocabulary elements, in this case modalism, and the typologies of vertical organization appropriate to it. Therefore, completing the ‘classic’ pathway of writing typologies (monody, polyphony, homophony), heterophony, applied equally to structures of modal, serial or synthesis vocabulary, necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of building a new musical ontology, the composer Ştefan Niculescu explored, along with the mathematical organization of heterophony according to his own original methods, the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of ‘synchrony’. Founded on the coincidentia oppositorum principle (involving the ‘one-multiple’ binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm of articulation of two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism with macrotemporal amplitude, a strategy that the composer would develop practically throughout his creative output (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, and Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of Ştefan Niculescu’s most edifying works, Symphony II, Opus dacicum, where the form of (heterophonic) synchrony acquires monumental-symphonic features, representing an emblematic case of the complexity level achieved by this type of vertical syntax in twentieth-century music.

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 317
261 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time

Authors: Deepak Loura

Abstract:

The most contemporary method of producing horticultural crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is extremely productive, land- and water-efficient, as well as environmentally friendly. The technology entails growing horticultural crops in a controlled environment where variables such as temperature, humidity, light, soil, water, fertilizer, etc. are adjusted to achieve optimal output and to enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products by global markets. By covering the crop, it is possible to control the macro- and microenvironments, enhancing plant performance and allowing for longer production times, earlier harvests, and higher yields of better quality. These shielding features alter the environment of the plant while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops. If any of these high-value crops are cultivated in protected environments like greenhouses, net houses, tunnels, etc., this profit can be multiplied. Vegetable and cut flower post-harvest losses are extremely high (20–0%); however, sheltered growing techniques and year-round cropping can greatly minimize post-harvest losses and enhance yield by 5–10 times. Seasonality and weather have a big impact on the production of vegetables and flowers, and this variability results in significant price and quality fluctuations for vegetables. A significant challenge for the application of current technology in crop production is to balance year-round availability of vegetables and flowers and minimal environmental impact while remaining competitive. Protected cultivation also represents the future of agriculture, since population growth is reducing the size of landholdings. Protected agriculture is a particularly profitable endeavor for small landholdings. Small greenhouses, net houses, nurseries, and low tunnel greenhouses can all be built by farmers to increase their income. Protected agriculture is also encouraged by the rise in biotic and abiotic stress factors. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can be thought of as a kind of precise, forward-thinking, parallel agriculture that covers almost all aspects of farming, subject to further assessment of its technical applicability to local circumstances, farmer economics, and market economics.

Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture

Procedia PDF Downloads 54
260 Sustainable Crop Production: Greenhouse Gas Management in Farm Value Chain

Authors: Aswathaman Vijayan, Manish Jha, Ullas Theertha

Abstract:

Climate change and global warming have become an issue for both developed and developing countries and are perhaps the biggest threat to the environment. We at ITC Limited believe that a company’s performance must be measured by its Triple Bottom Line contribution to building economic, social and environmental capital. This Triple Bottom Line strategy focuses on embedding sustainability in business practices, investing in social development and adopting a low-carbon growth path with a cleaner environment approach. The Agri Business Division - ILTD operates in the tobacco crop growing regions of the Andhra Pradesh and Karnataka states of India. The company’s agri value chain comprises two distinct phases: the first phase is agricultural operations undertaken by ITC-trained farmers, and the second phase is industrial operations, which include marketing and processing of the agricultural produce. This research work covers the Greenhouse Gas (GHG) management strategy of ITC in the agricultural operations undertaken by the farmers. The agriculture sector adds considerably to global GHG emissions through the use of carbon-based energies, use of fertilizers and other farming operations such as ploughing. In order to minimize the impact of farming operations on the environment, ITC has taken a big leap in implementing systems and processes to reduce the GHG impact in the farm value chain by partnering with the farming community. The company has undertaken a unique three-pronged approach for GHG management in the farm value chain: 1) GHG inventory of the farm value chain: different sources of GHG emission in the farm value chain were identified and quantified for the baseline year, as per the IPCC guidelines for greenhouse gas inventories. The major emission sources identified are: nitrogenous fertilizer application during seedling production and in the main field; diesel usage for farm machinery; fuel consumption; and burning of crop residues. 2) Identification and implementation of technologies to reduce GHG emission: various methodologies and technologies were identified for each GHG emission source and implemented at farm level. The identified methodologies are: reducing chemical fertilizer consumption at the farm through site-specific nutrient recommendations; use of a sharp shovel for land preparation to reduce diesel consumption; implementation of energy conservation technologies to reduce fuel requirements; and avoiding the burning of crop residue by incorporating it in the main field. These methodologies were implemented at farm level, and the GHG emission was quantified to understand the reduction achieved. 3) Social and farm forestry for CO2 sequestration: in addition, the company encouraged social and farm forestry on wastelands to convert them into green cover. The plantations are carried out with fast-growing trees, viz. Eucalyptus, Casuarina, and Subabul, at the rate of 10,000 ha of land per year. The above approach minimized a considerable amount of GHG emission in the farm value chain, benefiting farmers, the community, and the environment as a whole. In addition, the CO₂ stock created by the social and farm forestry programme has made the farm value chain environment-friendly.
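
The inventory step described above reduces to multiplying activity data by emission factors and summing the result as CO₂-equivalents. The sketch below illustrates this calculation; the factor values are rounded, generic placeholders and are not the factors used in ITC's inventory, and the per-hectare activity data are invented for illustration.

```python
# Emission factors in kg CO2e per unit of activity; rounded, generic illustrations only.
EMISSION_FACTORS = {
    "diesel_litre": 2.7,         # combustion of one litre of diesel
    "fertiliser_kg_n": 4.7,      # direct N2O from applied nitrogen, expressed as CO2e
    "residue_burnt_kg_dm": 0.1,  # CH4 and N2O (non-CO2 gases) from burning residue dry matter
}

def farm_co2e_kg(activity: dict) -> float:
    """Sum CO2-equivalent emissions over the inventoried farm sources."""
    return sum(EMISSION_FACTORS[source] * quantity for source, quantity in activity.items())

# Illustrative per-hectare activity data for the baseline year and after the interventions.
baseline = {"diesel_litre": 60, "fertiliser_kg_n": 80, "residue_burnt_kg_dm": 1500}
improved = {"diesel_litre": 45, "fertiliser_kg_n": 60, "residue_burnt_kg_dm": 0}

print(f"baseline: {farm_co2e_kg(baseline):.0f} kg CO2e/ha")
print(f"improved: {farm_co2e_kg(improved):.0f} kg CO2e/ha")
```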

Keywords: CO₂ sequestration, farm value chain, greenhouse gas, ITC limited

Procedia PDF Downloads 273
259 Implementation of Hybrid Curriculum in Canadian Dental Schools to Manage Child Abuse and Neglect

Authors: Priyajeet Kaur Kaleka

Abstract:

Introduction: A dentist is often the first responder in the battle for a patient’s healthy body and may be the first health professional to observe signs of child abuse, be it physical, emotional, and/or sexual mistreatment. Therefore, it is an ethical responsibility for the dental clinician to detect and report suspected cases of child abuse and neglect (CAN). The main reasons for not reporting suspected cases of CAN are, with special emphasis on the third: 1) uncertainty of the diagnosis, 2) lack of knowledge of the reporting procedure, and 3) the fact that child abuse and neglect has remained a subject of ignorance among dental professionals because of a lack of advanced clinical training. Given these epidemic proportions, there is scope for further research on dental school curriculum design. Purpose: This study aimed to assess the knowledge and attitude of dentists in Canada regarding signs and symptoms of child abuse and neglect (CAN), reporting procedures, and whether educational strategies followed by dental schools address this sensitive issue. In pursuit of that aim, this abstract summarizes the evidence related to this question. Materials and Methods: Data were collected through a specially designed questionnaire adapted and modified from the author’s previous cross-sectional study on CAN, which was conducted in Pune, India, in 2016 and is available in the PubMed database. Design: A random sample was drawn from the target population of registered dentists and dental students in Canada regarding their knowledge, professional responsibilities, and behavior concerning child abuse. The questionnaire was distributed to 200 members, of whom 157 subjects formed the final sample for statistical analysis, yielding a response rate of 78.5%. Results: Despite having theoretical information on signs and symptoms, 55% of the participants indicated that they were not confident in detecting child physical abuse cases. 90% of respondents believed that recognition and handling of CAN cases should be a part of undergraduate training. Only 4.5% of the participants correctly identified all signs of abuse, owing to inadequate formal training in dental schools and workplaces. Although nearly 96.3% agreed that it is a dentist’s legal responsibility to report CAN, only a small percentage of the participants had reported an abuse case in the past, and 72% stated that the most common factor that might prevent a dentist from reporting a case was doubt over the diagnosis. Conclusion: The goal is to motivate dental schools to deal with this critical issue and provide their students with thorough training to strengthen their capability to care for and protect children. Educational institutions should make efforts to spread awareness among dental students regarding the management and handling of CAN. Clinical Significance: There should be modifications in the dental school curriculum, focusing on problem-based learning models, to assist graduates in fulfilling their legal and professional responsibilities. CAN literacy should be incorporated into the dental curriculum, which will eventually help future dentists break this intergenerational cycle of violence.

Keywords: abuse, child abuse and neglect, dentist knowledge, dental school curriculum, problem-based learning

Procedia PDF Downloads 178