Search results for: Diversity Index
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5011

241 Comparative Study on Efficacy and Clinical Outcomes in Minimally Invasive Surgery Transforaminal Interbody Fusion vs Minimally Invasive Surgery Lateral Interbody Fusion

Authors: Sundaresan Soundararajan, George Ezekiel Silvananthan, Chor Ngee Tan

Abstract:

Introduction: Transforaminal Lumbar Interbody Fusion (TLIF) has been in use for decades, whereas Extreme Lateral Interbody Fusion (XLIF), still in relative infancy, has grown to be accepted as a newer Minimally Invasive Surgery (MIS) option. There is a paucity of reports directly comparing lateral approach surgery to other MIS options such as TLIF in the treatment of lumbar degenerative disc disease. Aims/Objectives: The objective of this study was to compare efficacy and clinical outcomes between MIS TLIF and MIS XLIF in the treatment of patients with degenerative disc disease of the lumbar spine. Methods: A single-centre, retrospective cohort study of 38 patients undergoing surgical intervention between 2010 and 2013 for degenerative disc disease of the lumbar spine at the single L4/L5 level. 18 patients were treated with MIS TLIF and 20 with XLIF. Results: The XLIF group had a shorter duration of surgery than the TLIF group (176 mins vs. 208.3 mins, P = 0.03). Length of hospital stay was also significantly shorter in the XLIF group (5.9 days vs. 9 days, P = 0.03). Intraoperative blood loss favoured XLIF: 85% of patients had blood loss of less than 100 cc, compared to 58% in the TLIF group (P = 0.03). Radiologically, disc height improved significantly more postoperatively in the XLIF group than in the TLIF group (0.56 mm vs. 0.39 mm, P = 0.01). The foraminal height increment was also greater in the XLIF group (0.58 mm vs. 0.45 mm, P = 0.06). Clinically, back pain and leg pain improved in 85% of patients in the XLIF group and 78% in the TLIF group. Postoperative hip flexion weakness was more common in the XLIF group (40%) than in the TLIF group (0%); however, this weakness resolved within 6 months postoperatively. There was one case each of dural tear and surgical site infection in the TLIF group and none in the XLIF group.
Visual Analog Scale (VAS) scores 6 months postoperatively showed comparable reductions in both groups. The TLIF group showed Oswestry Disability Index (ODI) improvement in 67% of patients, while the XLIF group showed improvement in 70%. Conclusions: Lateral approach surgery shows clinical outcomes in the resolution of back pain and radiculopathy comparable to conventional MIS techniques such as TLIF. With a significantly shorter surgical time, minimal blood loss, and a shorter hospital stay, XLIF appears to be a reasonable MIS option for treating degenerative lumbar disc disease.

Keywords: extreme lateral interbody fusion, lateral approach, minimally invasive, XLIF

Procedia PDF Downloads 187
240 The Environmental Impact and Sustainable Dispersion of Chlorine Releases in the Coastal Zone of Alexandria: Spatial-Ecological Modeling

Authors: Mohammed El Raey, Moustafa Osman Mohammed

Abstract:

Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within a spatial-ecological model gives attention to urban environments in design review management so that they comply with the Earth system. The natural exchange patterns of ecosystems follow consistent, periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex, and Failure Mode and Effects Analysis (FMEA) is applied to critical components. Plant safety parameters are identified through engineering topology as employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed, and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. Predicted accident consequences are traced as risk contour concentration lines, and the local greenhouse effect is predicted with relevant conclusions. The spatial-ecological model also predicts pollutant distribution schemes, considering multiple factors in a multi-criteria analysis. The data extend input–output analysis to evaluate spillover effects, and Monte Carlo simulations and sensitivity analysis were conducted. The unique structure of these systems is balanced within "equilibrium patterns", such as the biosphere, and collectively forms a composite index of many distributed feedback flows. These dynamic structures retain their physical and chemical properties and enable a gradual, prolonged incremental pattern.
While this spatial model structure argues from ecology, resource savings, static load design, financial, and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied to an example of an integrated industrial structure in which the process is based on engineering topology as an optimization approach to systems ecology.
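The downwind concentration field described above follows the standard Gaussian plume equation. As a minimal illustrative sketch (not the authors' implementation; the release rate, wind speed, and dispersion coefficients used in any call would be placeholder values, not figures from the study):

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (mass/m^3).

    q: emission rate (mass/s), u: wind speed (m/s), h: effective release
    height (m), sigma_y/sigma_z: lateral/vertical dispersion coefficients (m)
    evaluated at the downwind distance of interest.
    """
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))  # ground reflection
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical
```

Evaluating such a field over a grid and contouring it (e.g. with SURFER®, as in the study) yields the risk contour concentration lines mentioned above.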

Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology

Procedia PDF Downloads 54
239 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates

Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon

Abstract:

Global Level-3 surface soil moisture (SSM) maps from the passive microwave Soil Moisture and Ocean Salinity (SMOS) satellite have been released. To further improve the Level-3 retrieval algorithm, an evaluation of the accuracy and spatio-temporal variability of the SMOS Level-3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with an SSM product derived from observations of the Advanced Microwave Scanning Radiometer (AMSR-E), computed with the Land Parameter Retrieval Model (LPRM) algorithm and referred to here as AMSRM, is presented. Both products (SMOSL3 and AMSRM) were compared against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at the global scale. The latter product was considered here a 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation: the correlation coefficient (R), the root-mean-squared difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured well the spatio-temporal variability of the SM-DAS-2 SSM products in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In terms of correlation values, the SMOSL3 products better captured the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate humid', etc.), while the best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.).
When the seasonal cycles in the SSM time variations were removed to compute anomaly values, better correlations with the SM-DAS-2 SSM anomalies were obtained with SMOSL3 than with AMSRM in most of the biomes, with the exception of desert regions. Finally, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS-2 products over regions with sparse vegetation, i.e. LAI < 1 (such regions represent almost 50% of the pixels considered in this global study). In regions where LAI > 1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 maintained almost consistent performance up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing LAI.
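The three evaluation criteria named above can be sketched in a few lines; this is a generic illustration, not the authors' code, and the example series in any call would be placeholders:

```python
import numpy as np

def evaluate_ssm(satellite, reference):
    """Compare a satellite SSM time series against a reference series using
    the study's three criteria: correlation (R), RMSD, and bias."""
    sat = np.asarray(satellite, dtype=float)
    ref = np.asarray(reference, dtype=float)
    r = np.corrcoef(sat, ref)[0, 1]                # correlation coefficient R
    rmsd = np.sqrt(np.mean((sat - ref) ** 2))      # root-mean-squared difference
    bias = np.mean(sat - ref)                      # > 0: wet bias, < 0: dry bias
    return r, rmsd, bias
```

Computing these per pixel and stratifying the results by biome type and LAI reproduces the kind of global criterion maps described above.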

Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS

Procedia PDF Downloads 334
238 Innovation Eco-Systems and Cities: Sustainable Innovation and Urban Form

Authors: Claudia Trillo

Abstract:

Regional innovation eco-systems are composed of a variety of interconnected urban innovation eco-systems, mutually reinforcing each other and making the whole territorial system successful. Combining principles drawn from new economic growth theory and from the socio-constructivist approach to economic growth with the new geography of innovation emerging from the networked nature of innovation districts, this paper explores the spatial configuration of urban innovation districts, with the aim of unveiling replicable spatial patterns and transferable portfolios of urban policies. While some authors suggest that cities should be considered ideal natural clusters, supporting cross-fertilization and innovation thanks to the physical setting they provide for the construction of collective knowledge, a considerable distance still persists between regional development strategies and urban policies. Moreover, while public and private policies supporting entrepreneurship normally consider innovation the cornerstone of any action aimed at lifting the competitiveness and economic success of a certain area, a growing body of literature suggests that innovation is non-neutral and hence should be constantly assessed against equity and social inclusion. This paper draws on a robust qualitative empirical dataset gathered through four years of research conducted in Boston to provide readers with an evidence-based set of recommendations drawn from the lessons learned through the investigation of the chosen innovation districts in the Boston area. The evaluative framework used for assessing the overall performance of the chosen case studies stems from the rationale of the Habitat III Sustainable Development Goals. The concept of inclusive growth was considered essential for assessing the social innovation domain in each of the chosen cases.
The key success factors for the development of the Boston innovation ecosystem can be generalized as follows: 1) a quadruple helix model embedded in the physical structure of the two cities (Boston and Cambridge), in which anchor Higher Education (HE) institutions continuously nurture the entrepreneurial environment; 2) an entrepreneurial approach on the part of the local governments, eliciting risk-taking and bottom-up civic participation in tackling key issues in the city; 3) a networking structure of intermediary actors supporting entrepreneurial collaboration, cross-fertilization, and co-creation, which collaborate at multiple scales, thus enabling positive spillovers from the stronger to the weaker contexts; 4) awareness of the socio-economic value of the built environment as an enabler of cognitive networks allowing activation of the collective intelligence; 5) creation of civic-led spaces enabling grassroots collaboration and cooperation. Evidence shows that there is no single magic recipe for the successful implementation of place-based, social innovation-driven strategies. On the contrary, the variety of place-grounded combinations of micro and macro initiatives, embedded in the social and spatial fine grain of places and encompassing a diversity of actors, can create the conditions enabling places to thrive and local economic activities to grow in a sustainable way.

Keywords: innovation-driven sustainable eco-systems, place-based sustainable urban development, sustainable innovation districts, social innovation, urban policies

Procedia PDF Downloads 82
237 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites

Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek

Abstract:

Magnetorheological elastomers (MREs) are a unique type of material consisting of two components: a magnetic filler and an elastomeric matrix. Their properties can be tailored by applying an external magnetic field. The change in viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former depends strongly on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles possess poor stability against temperature and acidic environments, several methods to mitigate these drawbacks have been developed. In most cases, the preparation of core-shell structures has been employed as a suitable method for protecting the magnetic particles against thermal and chemical oxidation. However, if the shell material is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed, because with the in situ polymerization technique it is very difficult to control the polymerization rate, and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effectivity is calculated as the relative value of the elastic modulus upon magnetic field application divided by the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desired. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this case, short polymer chains of different chain lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved.
Since relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected, due to the presence of the short polymer chains. From the application point of view, such MREs can be utilized for magneto-resistors, piezoresistors, or pressure sensors, especially when a conducting shell is created on the magnetic particles. The selection of the pyrrole-based monomer is therefore crucial, as it allows a controllably thin layer of conducting polymer to be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell dispersed in an elastomeric matrix, can also find utilization in electromagnetic wave shielding applications.
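The MR effectivity mentioned above is commonly expressed as the field-induced change in elastic modulus relative to the off-state modulus. A minimal sketch of that ratio (the common definition, not necessarily the authors' exact formula; the moduli in any call are illustrative values):

```python
def mr_effect(g_field, g_zero):
    """Relative MR effect: field-induced increase in the elastic modulus
    divided by the off-state (zero-field) modulus."""
    return (g_field - g_zero) / g_zero
```

Lowering the off-state stiffness (e.g. via short grafted chains reducing cross-linking density) raises this ratio for the same field-on modulus, which is why tuneability of the cross-linking reaction matters.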

Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding

Procedia PDF Downloads 185
236 Environmental Planning for Sustainable Utilization of Lake Chamo Biodiversity Resources: Geospatially Supported Approach, Ethiopia

Authors: Alemayehu Hailemicael Mezgebe, A. J. Solomon Raju

Abstract:

Context: Lake Chamo is a significant lake in the Ethiopian Rift Valley, known for its diversity of wildlife and vegetation. However, the lake is facing various threats due to human activities and global effects. Poor management of its resources could lead to food insecurity, ecological degradation, and loss of biodiversity. Research Aim: The aim of this study is to analyze the environmental implications of lake level changes using GIS and remote sensing. The research also aims to examine the floristic composition of the lakeside vegetation and to propose spatially oriented environmental planning for the sustainable utilization of the biodiversity resources. Methodology: The study utilizes multi-temporal satellite images and aerial photographs to analyze the changes in the lake area over the past 45 years. Geospatial analysis techniques are employed to assess land use and land cover changes and to construct a change detection matrix. The composition and role of the lakeside vegetation in ecological and hydrological functions are also examined. Findings: The analysis reveals that the lake has shrunk by 14.42% over the years, with significant modifications to its upstream segment. The study identifies various threats to the lake-wetland ecosystem, including changes in water chemistry, overfishing, and poor waste management. It also highlights the impact of human activities on the lake's limnology, with increases in conductivity, salinity, and alkalinity. Floristic composition analysis of the lake-wetland ecosystem showed a definite pattern of vegetation distribution. The vegetation can be broadly categorized into three belts: the herbaceous belt, the legume belt, and the bush-shrub-small tree belt. Collectively, the vegetation belts act as a sieve-screen system of different mesh sizes that slows the influx of incoming foreign matter.
This stratified vegetation provides vital information for deciding the management interventions needed for the sustainability of the lake-wetland ecosystem. Theoretical Importance: The study contributes to the understanding of the environmental changes and threats faced by Lake Chamo. It provides insights into the impact of human activities on the lake-wetland ecosystem and emphasizes the need for sustainable resource management. Data Collection and Analysis Procedures: The study utilizes aerial photographs, satellite imagery, and field observations to collect data. Geospatial analysis techniques are employed to process and analyze the data, including land use/land cover changes and change detection matrices. Floristic composition analysis is conducted to assess the vegetation patterns. Question Addressed: The study addresses the question of how lake level changes and human activities impact the environmental health and biodiversity of Lake Chamo. It also explores the potential opportunities and threats related to water utilization and waste management. Conclusion: The study recommends the implementation of spatially oriented environmental planning to ensure the sustainable utilization and maintenance of Lake Chamo's biodiversity resources. It emphasizes the need for proper waste management, improved irrigation facilities, and a buffer zone with specific vegetation patterns to restore and protect the lake outskirts.
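The change detection matrix in the methodology can be sketched as a simple cross-tabulation of two co-registered, classified rasters; this is a generic illustration with made-up class codes, not the study's data:

```python
import numpy as np

def change_detection_matrix(before, after, n_classes):
    """Cross-tabulate two classified land-cover rasters of equal shape:
    entry (i, j) counts pixels that changed from class i to class j."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (before.ravel(), after.ravel()), 1)  # unbuffered counting
    return m
```

The diagonal holds unchanged pixels, while off-diagonal entries quantify each 'from-to' transition, which is how area changes such as the reported 14.42% shrinkage can be attributed to specific cover conversions.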

Keywords: buffer zone, geo-spatial, lake chamo, lake level changes, sustainable utilization

Procedia PDF Downloads 40
235 A Study on Economic Impacts of Entrepreneurial Firms and Self-Employment: Minority Ethnics in Putatan, Penampang, Inanam, Menggatal, Uitm, Tongod, Sabah, Malaysia

Authors: Lizinis Cassendra Frederick Dony, Jirom Jeremy Frederick Dony, Andrew Nicholas, Dewi Binti Tajuddin

Abstract:

Starting and surviving in business is influenced by various socio-economic entrepreneurship activities. The study revealed that some of the entrepreneurs are not registered as SMEs but run their own businesses as intermediaries with private organizations, i.e. as "self-employed". SME stands for "Small and Medium Enterprise", a sector that contributes to growth in Malaysia. Entrepreneurial business interest and entrepreneurial intention enhance the economy by spurring new production, expanding employment opportunities, increasing productivity, promoting exports, stimulating innovation, and providing new avenues in the business marketplace. This study makes a unique contribution to the full understanding of the complex mechanisms behind entrepreneurship obstacles and the impact of education on the happiness and well-being of society. Moreover, the term "ethnic" refers to a classification of a large group of people by customs tied to ancestral, racial, national, tribal, religious, linguistic, and cultural origins; it is a social phenomenon [1]. Sabah's population of 2,389,494 shows the predominant ethnic group to be the Kadazan Dusun (18.4%), followed by the Bajau (17.3%) and Malays (15.3%). For the year 2010, immigrant population statistics reported 239,765 people, covering 4% of Sabah's population [2]. Sabah has numerous groups of talented entrepreneurs. The business environment among the minority ethnic groups is shaped by competitive business sentiment. The literature on ethnic entrepreneurship recognizes two main types of entrepreneurship: middleman and enclave entrepreneurs. According to Adam Smith [3], there are evidently some principles that dispose people to admire and maintain distinctions of business rank and status, and these give rise to the most universal business sentiments.
Due to credit barriers and competition, the minority ethnic groups are losing the business market, and since 2014 many illegal immigrants have been found to be using permits of locals to operate businesses in Malaysia [4]. The development of small business entrepreneurship among the minority ethnic groups in Sabah is evidenced by a variety of complex perceptions and differing concepts. Previous studies also confirmed the effects of heterogeneity on group decision-making and thinking, caused partly by excessive preoccupation with maintaining cohesiveness, and that the presence of cultural diversity in groups should reduce its probability [5]. The researchers propose seven success determinants, particularly to compare the involvement of minority ethnic groups with that of immigrants in Sabah. Although SMEs have always been considered the backbone of economic development, the minority ethnic groups are often categorized as the "second choice". The study showed that illegal immigrant entrepreneurs impose a burden on Sabahan social programs as well as on the prison, court, and health care systems. The tension between the need for cheap labor and the impulse to protect Malaysian workers, entrepreneurs, and taxpayers in Sabah is among the subjects discussed in this study. These dynamics carry both advantages and disadvantages for Sabah's economic development.

Keywords: entrepreneurial firms, self-employed, immigrants, minority ethnic, economic impacts

Procedia PDF Downloads 387
234 Structural and Morphological Characterization of the Biomass of the Aquatic Macrophyte (Egeria densa) Submitted to Thermal Pretreatment

Authors: Joyce Cruz Ferraz Dutra, Marcele Fonseca Passos, Rubens Maciel Filho, Douglas Fernandes Barbin, Gustavo Mockaitis

Abstract:

The search for alternatives to control hunger in the world has generated a major environmental problem. Intensive fish production systems can cause an imbalance in the aquatic environment, triggering the phenomenon of eutrophication. Currently, there are many ways to control the growth of aquatic plants, such as mechanical withdrawal; however, difficulties arise over their final destination. Egeria densa is a species of submerged aquatic macrophyte rich in cellulose and low in lignin. By applying the concept of second-generation energy, which uses lignocellulose for energy production, the reuse of these aquatic macrophytes (Egeria densa) in biofuel production becomes an interesting alternative. In order to make lignocellulosic sugars available for effective fermentation, it is important to use pretreatments to separate the components and modify the structure of the cellulose, thus facilitating the attack of the microorganisms responsible for fermentation. Therefore, the objective of this work was to evaluate the structural and morphological transformations occurring in the biomass of aquatic macrophytes (E. densa) submitted to a thermal pretreatment. The samples were collected in an intensive fish farm in a dam on the lower São Francisco River, in the northeastern region of Brazil. After collection, the samples were dried in a ventilated oven at 65 °C and milled in a knife mill to 5 mm. A duplicate assay was carried out, comparing the in natura biomass with the heat-pretreated biomass (MT). The MT sample was autoclaved at a temperature of 121 °C and a pressure of 1.1 atm for 30 minutes. After this procedure, the biomass was characterized in terms of degree of crystallinity and morphology, using X-ray diffraction (XRD) and scanning electron microscopy (SEM), respectively.
The results showed a decrease of 11% in the crystallinity index (%CI) of the pretreated biomass, indicating structural modification of the cellulose and a greater presence of amorphous structures. Increases in the porosity and surface roughness of the samples were also observed. These results suggest that the biomass may become more accessible to the hydrolytic enzymes of fermenting microorganisms. Therefore, the morphological transformations caused by the thermal pretreatment may be favorable for subsequent fermentation and, consequently, a higher yield of biofuels. Thus, the use of thermally pretreated aquatic macrophytes (E. densa) can be an environmentally, financially, and socially sustainable alternative. In addition, it represents a control measure for the aquatic environment, which can generate income (biogas production) and sustain fish farming activities in local communities.
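The abstract does not state which crystallinity index formula was applied to the XRD patterns; the Segal empirical method is the most common for cellulose and is sketched here as an assumption, with illustrative intensities:

```python
def segal_crystallinity_index(i_200, i_am):
    """Segal crystallinity index (%) for cellulose XRD patterns:
    CI = (I_200 - I_am) / I_200 * 100, where I_200 is the intensity of the
    (200) crystalline peak (~22.5 deg 2-theta) and I_am the amorphous
    minimum (~18 deg 2-theta)."""
    return (i_200 - i_am) / i_200 * 100.0
```

A drop in CI after pretreatment, as reported above, corresponds to the amorphous contribution growing relative to the crystalline peak.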

Keywords: aquatic macrophytes, biofuels, crystallinity, morphology, thermal pretreatment

Procedia PDF Downloads 307
233 Integrated Approach Towards Safe Wastewater Reuse in Moroccan Agriculture

Authors: Zakia Hbellaq

Abstract:

The Mediterranean region is considered a hotspot for climate change. Morocco is a semi-arid Mediterranean country facing water shortages and poor water quality. Its limited water resources constrain the activities of various economic sectors. Most of Morocco's territory lies in arid and desert areas. The potential water resources are estimated at 22 billion m3, equivalent to about 700 m3/inhabitant/year, which places Morocco in a state of structural water stress. Strictly speaking, the Kingdom of Morocco is one of the highest-risk countries according to the World Resources Institute (WRI), which oversees the calculation of water stress risk in 167 countries. The Institute's results rank Morocco among the countries at greatest risk of water scarcity, with a score of 3.89 out of 5, placing it 23rd out of a total of 167 countries: the demand for water exceeds the available resources. Agriculture, with a score of 3.89, is most affected by water stress from irrigation and places a heavy burden on the water table. Irrigation is an unavoidable technical need with undeniable economic and social benefits, given the available resources and climatic conditions. Irrigation, and therefore the agricultural sector, currently uses 86% of water resources, while industry uses 5.5%. Although its development has undeniable economic and social benefits, it also contributes to the overexploitation of most groundwater resources and to a striking decline in levels and deterioration of water quality in some aquifers. In this context, REUSE is one of the proposed solutions to reduce the water footprint of the agricultural sector and alleviate the shortage of water resources. Indeed, wastewater reuse, also known as REUSE (reuse of treated wastewater), is a step forward not only for the circular economy but also for the future, especially in the context of climate change.
In particular, water reuse provides an alternative to existing water supplies and can be used to improve water security, sustainability, and resilience. However, the introduction of organic trace pollutants (organic micro-pollutants), the uptake of emerging contaminants, and salinity must be addressed; innovative capabilities can be harnessed to overcome these problems and ensure food and health safety. To this end, attention will be paid to the adoption of an integrated and attractive approach based on the reinforcement and optimization of the treatments proposed for the elimination of the organic load, with particular attention to the elimination of emerging pollutants. Membrane bioreactors (MBR) as stand-alone technologies are not able to meet the requirements of the WHO guidelines; they will therefore be combined with heterogeneous Fenton processes using persulfate or hydrogen peroxide oxidants. Similarly, adsorption and filtration are applied as tertiary treatment. In addition, crop performance will be evaluated in terms of yield, productivity, quality, and safety, through the optimization of Trichoderma sp. strains used to increase crop resistance to abiotic stresses, as well as through modern omics tools such as transcriptomic analysis using RNA sequencing and methylation profiling to identify adaptive traits and the associated genetic diversity that is tolerant/resistant/resilient to biotic and abiotic stresses. Ensuring this approach will undoubtedly alleviate water scarcity and, likewise, reduce the negative and harmful impact of wastewater irrigation on the condition of crops and the health of their consumers.

Keywords: water scarcity, food security, irrigation, agricultural water footprint, reuse, emerging contaminants

Procedia PDF Downloads 121
232 Understanding Different Facets of Chromosome Abnormalities: A 17-year Cytogenetic Study and Indian Perspectives

Authors: Lakshmi Rao Kandukuri, Mamata Deenadayal, Suma Prasad, Bipin Sethi, Srinadh Buragadda, Lalji Singh

Abstract:

Worldwide; at least 7.6 million children are born annually with severe genetic or congenital malformations and among them 90% of these are born in mid and low-income countries. Precise prevalence data are difficult to collect, especially in developing countries, owing to the great diversity of conditions and also because many cases remain undiagnosed. The genetic and congenital disorder is the second most common cause of infant and childhood mortality and occurs with a prevalence of 25-60 per 1000 births. The higher prevalence of genetic diseases in a particular community may, however, be due to some social or cultural factors. Such factors include the tradition of consanguineous marriage, which results in a higher rate of autosomal recessive conditions including congenital malformations, stillbirths, or mental retardation. Genetic diseases can vary in severity, from being fatal before birth to requiring continuous management; their onset covers all life stages from infancy to old age. Those presenting at birth are particularly burdensome and may cause early death or life-long chronic morbidity. Genetic testing for several genetic diseases identifies changes in chromosomes, genes, or proteins. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. Several hundred genetic tests are currently in use and more are being developed. Chromosomal abnormalities are the major cause of human suffering, which are implicated in mental retardation, congenital malformations, dysmorphic features, primary and secondary amenorrhea, reproductive wastage, infertility neoplastic diseases. Cytogenetic evaluation of patients is helpful in the counselling and management of affected individuals and families. We present here especially chromosomal abnormalities which form a major part of genetic disease burden in India. 
Different programmes on chromosome research and human reproductive genetics relate primarily to infertility, since this is a major public health problem in our country, affecting 10-15 percent of couples. Prenatal diagnosis of chromosomal abnormalities in high-risk pregnancies helps detect chromosomally abnormal foetuses, and such couples are counselled regarding continuation of the pregnancy. In addition to basic research, the team provides chromosome diagnostic services that include conventional and advanced techniques for identifying various genetic defects. Referrals beyond routine chromosome diagnosis for infertility also include patients with short stature, hypogonadism, undescended testis, microcephaly, delayed developmental milestones, familial and isolated mental retardation, and cerebral palsy. Thus, chromosome diagnostics has found applicability not only in disease prevention and management but also in guiding clinicians in certain aspects of treatment. It would be appropriate to affirm that chromosomes are images of life and unequivocally mirror the state of human health. The importance of genetic counselling is increasing with advances in the field of genetics; counselling can help families cope with the emotional, psychological, and medical consequences of genetic diseases.

Keywords: India, chromosome abnormalities, genetic disorders, cytogenetic study

Procedia PDF Downloads 283
231 Nature of Body Image Distortion in Eating Disorders

Authors: Katri K. Cornelissen, Lise Gulli Brokjob, Kristofor McCarty, Jiri Gumancik, Martin J. Tovee, Piers L. Cornelissen

Abstract:

Recent research has shown that body size estimation in healthy women is driven by independent attitudinal and perceptual components. The attitudinal component represents psychological concerns about the body, coupled with low self-esteem and a tendency towards depressive symptomatology, leading to over-estimation of body size independent of a person's actual Body Mass Index (BMI). The perceptual component is a normal bias known as contraction bias, which, for bodies, depends on actual BMI: women with a BMI below the population norm tend to overestimate their size, women with a BMI above the population norm tend to underestimate their size, and women whose BMI is close to the population mean are most accurate. This is indexed by a regression of estimated BMI on actual BMI with a slope of less than one. It is well established that body dissatisfaction, i.e. an attitudinal distortion, leads to body size overestimation in eating-disordered individuals. However, debate persists as to whether women with eating disorders may also suffer a perceptual body distortion. The current study therefore asked whether women with eating disorders exhibit the normal contraction bias when they estimate their own body size. If they do not, this would suggest differences in the way women with eating disorders process the perceptual aspects of body shape and size in comparison to healthy controls. 100 healthy controls and 33 women with a history of eating disorders were recruited. Critically, it was ensured that both groups represented comparable and adequate ranges of actual BMI (~18 to ~40). Of those with eating disorders, 19 had a history of anorexia nervosa, 6 of bulimia nervosa, and 8 of OSFED. 87.5% of the women with a history of eating disorders self-reported that they were either recovered or recovering, and 89.7% self-reported one or more instances of relapse. 
The mean time elapsed since first diagnosis was 5 years, and on average participants had experienced two relapses. Participants completed a number of psychometric measures (EDE-Q, BSQ, RSE, BDI) to establish the attitudinal component of their body image as well as their tendency to internalize socio-cultural body ideals. Additionally, participants completed a method-of-adjustment psychophysical task, using photorealistic avatars calibrated for BMI, to provide an estimate of their own body size and shape. The data from the healthy controls replicate previous findings, revealing independent contributions to body size estimation from both attitudinal and perceptual (i.e. contraction bias) body image components, as described above. For the eating disorder group, once the adequacy of their actual BMI range was established, a regression of estimated BMI on actual BMI had a slope greater than 1, significantly different from that of the controls. This suggests that (some) eating-disordered individuals process the perceptual aspects of body image differently from healthy controls. It is therefore necessary to develop interventions specific to the perceptual processing of body shape and size for the management of (some) individuals with eating disorders.
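The regression-slope criterion used above can be sketched with synthetic, purely illustrative numbers (the slope of 0.7, population mean, and attitudinal offset are assumptions, not the study's data):

```python
# Illustrative sketch: contraction bias appears as a regression slope < 1
# when estimated BMI is regressed on actual BMI.
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical control group: estimates are pulled toward the population
# mean (slope 0.7) plus a constant attitudinal over-estimation offset.
actual = [18.0, 22.0, 25.0, 28.0, 32.0, 36.0, 40.0]
pop_mean = 25.0
estimated = [pop_mean + 0.7 * (bmi - pop_mean) + 1.5 for bmi in actual]

slope = ols_slope(actual, estimated)
print(round(slope, 2))  # 0.7: below 1, the signature of contraction bias
```

A slope significantly greater than 1, as found in the eating disorder group, would indicate the opposite of this normal contraction pattern.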

Keywords: body image distortion, perception, recovery, relapse, BMI, eating disorders

Procedia PDF Downloads 45
230 Exploring the Motivations That Drive Paper Use in Clinical Practice Post-Electronic Health Record Adoption: A Nursing Perspective

Authors: Sinead Impey, Gaye Stephens, Lucy Hederman, Declan O'Sullivan

Abstract:

Continued paper use in the clinical area post-Electronic Health Record (EHR) adoption is regularly linked to hardware and software usability challenges. Although paper is used as a workaround to circumvent challenges, including the limited availability of computers, this perspective does not consider the important role that paper, such as the nurses’ handover sheet, plays in practice. The purpose of this study is to confirm the hypothesis that paper use post-EHR adoption continues because paper provides both a cognitive tool (assisting with workflow) and a compensation tool (circumventing usability challenges). Distinguishing these different motivations for continued paper use could assist future evaluations of electronic record systems. Methods: Qualitative data were collected from three clinical care environments (ICU, general ward and specialist day-care) that had used an electronic record for at least 12 months. Data were collected through semi-structured interviews with 22 nurses, transcribed, and themes extracted using an inductive bottom-up coding approach from which a thematic index was constructed. Findings: All nurses interviewed continued to use paper post-EHR adoption. While the data confirmed the two distinct motivations for paper use post-EHR adoption - paper as a cognitive tool and paper as a compensation tool - a further finding was that the two uses overlap. That is, paper used as a compensation tool could also be adapted to function as a cognitive aid, owing to its nature (easy to access and annotate), or vice versa. Rather than presenting paper persistence as having two distinct motivations, it is more useful to describe it as lying on a continuum, with the compensation tool and the cognitive tool at either pole. Paper as a cognitive tool refers to pages such as the nurses’ handover sheet; these did not form part of the patient’s record, although information could be transcribed from one to the other. 
Findings suggest that although the patient record was digitised, handover sheets did not fall within this remit. These personal pages continued to be useful post-EHR adoption for capturing personal notes or patient information and so continued to be incorporated into the nurses’ work. By comparison, paper used as a compensation tool, such as pre-printed care plans stored in the patient's record, appears to have been adopted in reaction to usability challenges; in these instances, it is expected that paper use could reduce or cease once the underlying problem is addressed. There is a danger that, because paper affords nurses a temporary information platform that is mobile and easy to access and annotate, its use could become embedded in clinical practice. Conclusion: Paper presents a utility to nursing, either as a cognitive tool, a compensation tool, or a combination of both. By fully understanding this utility and its nuances, organisations can avoid evaluating all incidences of paper use (post-EHR adoption) as arising from usability challenges; instead, suitable remedies for paper persistence can be targeted at the root cause.

Keywords: cognitive tool, compensation tool, electronic record, handover sheet, nurse, paper persistence

Procedia PDF Downloads 411
229 Association of Body Composition Parameters with Lower Limb Strength and Upper Limb Functional Capacity in Quilombola Remnants

Authors: Leonardo Costa Pereira, Frederico Santos Santana, Mauro Karnikowski, Luís Sinésio Silva Neto, Aline Oliveira Gomes, Marisete Peralta Safons, Margô Gomes De Oliveira Karnikowski

Abstract:

In Brazil, projections of population aging follow world projections: the birth rate is expected to be surpassed by the mortality rate around the year 2045. Historically, the Black population of Brazil suffered for several centuries from the oppression of dominant classes. Certain groups of Black Brazilians stand out in relation to territorial, historical and social aspects: for centuries they isolated themselves in small communities in order to maintain their freedom and culture. The isolation of the Quilombola communities affected both their socioeconomic conditions and their health. Thus, the objective of the present study is to verify the association of body composition parameters with lower-limb strength and upper-limb functional capacity in Quilombola remnants. The research was approved by the ethics committee (no. 1,771,159). Anthropometric evaluations of hip and waist circumference, body mass and height were performed. To assess body composition, the body mass index (BMI) was calculated from body mass (BM) and height, and dual-energy X-ray absorptiometry (DEXA) was performed. The Timed Up and Go (TUG) test was used to evaluate functional capacity, and a one-repetition maximum test (1MR) for knee extension and handgrip (HG) was applied for strength analysis. Statistical analysis was performed using SPSS 22.0. The Shapiro-Wilk normality test was applied, and Pearson or Spearman tests were adopted for the correlations as appropriate. The sample (n = 18) was composed of 66.7% female individuals with a mean age of 66.07 ± 8.95 years. The sample’s body fat percentage (%BF) (35.65 ± 10.73) exceeded the recommendations for the age group, as did the anthropometric parameters of hip (90.91 ± 8.44 cm) and waist circumference (80.37 ± 17.5 cm). 
The relationship between height (1.55 ± 0.1 m) and body mass (63.44 ± 11.25 kg) yielded a BMI of 24.16 ± 7.09 kg/m², which was considered normal. TUG performance was 10.71 ± 1.85 s. The 1MR test yielded 46.67 ± 13.06 kg and the HG test 23.93 ± 7.96 kgf. The correlation analyses were characterized by a high frequency of significant correlations for the height, dominant arm mass (DAM), %BF, 1MR and HG variables. Correlations were observed between HG and BM (r = 0.67, p = 0.005), height (r = 0.51, p = 0.004) and DAM (r = 0.55, p = 0.026). Lower-limb strength correlated with BM (r = 0.69, p = 0.003), height (r = 0.62, p = 0.01) and DAM (r = 0.772, p = 0.001). We conclude that the simple relationship between mass and height is not the only influence on predictive parameters of strength or functionality; assessment of body composition is also important. For this population, height seems to be a good predictor of strength and body composition.

Keywords: African Continental Ancestry Group, body composition, functional capacity, strength

Procedia PDF Downloads 254
228 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that maintains all sections at a sufficiently high level of functional and structural condition. However, due to constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all needy industrial road sections within a given planning period. A rational and systematic priority scheme is therefore needed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions must be made about whether it is more important to repair a section in poor functional condition (e.g., an uncomfortable ride) or in poor structural condition (i.e., a section in danger of becoming structurally unsound). Any rational priority-setting approach must therefore consider the relative importance of the functional and structural condition of each section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions; the optimum decision is one that meets a specified management objective subject to various constraints and restrictions. Here, the objective is the minimization of road maintenance cost in an industrial area. In order to determine the objective function for the distress model, realistic data must be fitted into the formulation. 
Each type of repair is quantified in a number of stretches, with 1000 m taken as one stretch; the section under study is 3750 m long. These quantities enter an objective function for maximizing the number of repairs per stretch. The distresses observed in this section are potholes, surface cracks, rutting and ravelling; distress data are measured manually by observing each distress level on each 1000 m stretch. The maintenance and rehabilitation measures currently followed are based on subjective judgments, hence the need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits of road networks with respect to vehicle operating cost; the road network infrastructure should yield the best results expected from available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
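The budget-constrained repair selection described above can be sketched as follows. All costs, quantities and the budget cap are hypothetical, and exhaustive search over small integer quantities stands in for a full linear programming solver:

```python
from itertools import product

# Toy stand-in for the formulation: maximize the number of repairs on the
# stretch subject to a budget cap. A real model would run an LP solver
# over measured distress quantities.
costs = {"pothole": 5, "crack": 2, "rutting": 8, "ravelling": 3}      # cost units (assumed)
quantities = {"pothole": 4, "crack": 6, "rutting": 2, "ravelling": 5}  # observed counts (assumed)
budget = 30

names = list(costs)
best = (-1, None)  # (number of repairs, chosen quantities)
for combo in product(*(range(quantities[n] + 1) for n in names)):
    cost = sum(c * costs[n] for c, n in zip(combo, names))
    if cost <= budget:
        best = max(best, (sum(combo), combo))

print(best[0])  # maximum number of repairs affordable within the budget
```

With these assumed numbers the search selects the cheapest distresses first, which is exactly the trade-off the objective function formalizes.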

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 173
227 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements

Authors: Denis A. Sokolov, Andrey V. Mazurkevich

Abstract:

In the modern world, there is increasing demand for highly precise measurements in fields such as aircraft manufacturing, shipbuilding, and rocket engineering. This has driven the development of measuring instruments capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference could be a reference base, serving as a set of reference points, or a reference rangefinder with the capability to measure angle increments (EDM). The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of the authors' own design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, following the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the femtosecond pulse repetition frequency. 
The achieved Type A measurement uncertainty of the distance to reflectors 64 m away (N·D/2, where N is an integer), spaced 1 m apart relative to each other, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of this study. Type B uncertainty components are not taken into account either, as the dominant components do not depend on the selected coordinate measurement method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where an advantage in accuracy can be achieved. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute rangefinder for the calibration of high-precision laser trackers, laser rangefinders and other equipment, using a 64 m laboratory comparator as a reference.
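The pulse-coincidence spacing D/2 = c/(2nF) quoted above can be checked numerically. The abstract does not state the repetition rate, so the 60 MHz value below is an assumption chosen to reproduce the ~2.5 m figure:

```python
# Sketch of the inter-coincidence distance D/2 = c / (2 n F).
c = 299_792_458.0   # speed of light in vacuum, m/s
n = 1.0             # refractive index (vacuum/air approximation)
F = 60e6            # assumed pulse repetition rate, Hz (not given in the text)

half_D = c / (2 * n * F)
print(round(half_D, 3))  # ≈ 2.498 m, matching the ~2.5 m in the text
```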

Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement

Procedia PDF Downloads 27
226 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder

Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi

Abstract:

With changing lifestyles and environments, the prevalence of critical and incurable diseases has grown. Among these are neurological disorders, which are widespread in the elderly population and increasing rapidly. Most neurological disorder patients suffer from some movement disorder affecting parts of the body. Tremor is the most common movement disorder in such patients, affecting the upper or lower limbs or both. Tremor symptoms are commonly visible in Parkinson’s disease patients, but tremor can also occur on its own as pure (essential) tremor. Patients suffering from tremor face enormous difficulty in performing daily activities and often need a caretaker for assistance. In clinics, tremor is assessed through manual clinical rating tasks such as the Unified Parkinson’s Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also reported difficulty in differentiating Parkinsonian tremor from pure tremor, which is essential for an accurate diagnosis. There is therefore a need for a monitoring and assistive tool that continuously checks the tremor patient’s condition, coordinating with clinicians and caretakers for early diagnosis and assistance in daily activities. Our research focuses on developing a system for automatic classification of tremor that can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device. 
Four tasks were designed in accordance with the Unified Parkinson’s Disease motor rating scale, which assesses rest, postural, intentional and action tremor in such patients. Features such as time-frequency-domain, wavelet-based and fast-Fourier-transform-based cross-correlation features were extracted from the tri-axial signal and used as the input feature space for different supervised and unsupervised learning tools to quantify tremor severity. A minimum covariance maximum correlation energy comparison index was also developed and used as an input feature for various classification tools to distinguish the Parkinsonian tremor (PT) and essential tremor (ET) types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, with the best performance achieved by K-nearest neighbors and Support Vector Machine classifiers.
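The PT-vs-ET classification step can be sketched with a minimal nearest-neighbour classifier. The feature vectors below are synthetic placeholders for the wavelet/FFT cross-correlation features described above, not real patient data, and the feature names are illustrative:

```python
import math

# Minimal 1-NN sketch of the tremor-type classification step.
train = [
    ((4.8, 0.9), "PT"),  # (dominant frequency Hz, correlation index) - hypothetical
    ((5.1, 0.8), "PT"),
    ((7.9, 0.3), "ET"),
    ((8.3, 0.2), "ET"),
]

def classify(sample):
    """Label a sample by its nearest training neighbour (Euclidean distance)."""
    return min(train, key=lambda t: math.dist(sample, t[0]))[1]

print(classify((5.0, 0.85)))  # PT
print(classify((8.0, 0.25)))  # ET
```

The study's actual system used K-nearest neighbors and SVM classifiers over far richer feature spaces; this sketch only illustrates the shape of the decision step.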

Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor

Procedia PDF Downloads 135
225 Place Attachment as Basic Condition for Wellbeing and Life Satisfaction in East African Wetland Users

Authors: Sophie-Bo Heinkel, Andrea Rechenburg, Thomas Kistemann

Abstract:

The current status of wellbeing and life satisfaction of subsistence farmers in a Ugandan wetland, and the contributing role of place attachment, were assessed. The aim of this study is to shed light on environmental factors supporting wellbeing in a wetland setting. It was also assessed how emotional bonding to the wetland as ‘place’ influences people’s wellbeing and life satisfaction; the results shed light on the human-environment relationship. A survey was carried out in three communities in urban and rural areas of a wetland basin in Uganda. A sample (n=235) provided information about attachment to the wetland, the participants’ relation to their place of residence, and their emotional wellbeing. The Wellbeing Index (WHO-5) was assessed, as well as the Perceived Stress Scale (PSS-10) and Rosenberg’s Self-Esteem Scale (RSE). Furthermore, the Satisfaction With Life Scale (SWLS) was applied, as well as the Place Attachment Inventory (PAI), which consists of the two intertwined dimensions of place identity and place dependence. Besides this, binary indicators such as ‘feeling safe’, ‘feeling comfortable’ and ‘enjoying living at the place of residence’ were assessed. A bivariate correlation analysis revealed high interconnectivity between all metric scales; the subscale ‘place identity’ in particular correlated significantly with all other scales. A cluster analysis revealed three groups, which differed in their perception of place-related indicators, their attachment to the wetland, and their status of wellbeing. The first cluster is mostly dissatisfied with their lives but mainly in a good state of emotional wellbeing; this group does not feel attached to the wetland, lives in a town, and comparatively few of its members feel safe and comfortable at their place of residence. In the second cluster, persons feel highly attached to the wetland and identify with it. 
This group was characterized by a high number of persons who prefer their current place of residence and do not consider moving; all feel well and are satisfied with their lives. The third group mainly lives in rural areas and feels highly attached to the wetland; they are satisfied with their lives, but only a small minority is in a good state of emotional wellbeing. Emotional attachment to a place influences life satisfaction and, indirectly, emotional wellbeing. The present study shows that subsistence farmers are attached to the wetland, as it is the source of their livelihood, while those living in areas with good infrastructure are less dependent on the wetland and therefore less attached to it. This is also mirrored in the perception of a place as safe and comfortable. Identification with a place is crucial for the feeling of being at “home”. Subsistence farmers feel attached to the ecosystem, but they may also be exposed to environmental and social stressors influencing their short-term emotional wellbeing. The provision of place identity is an ecosystem service of wetlands that supports the status of wellbeing in human beings.

Keywords: mental health, positive environments, quality of life, wellbeing

Procedia PDF Downloads 377
224 Production, Characterization and In vitro Evaluation of [223Ra]RaCl2 Nanomicelles for Targeted Alpha Therapy of Osteosarcoma

Authors: Yang Yang, Luciana Magalhães Rebelo Alencar, Martha Sahylí Ortega Pijeira, Beatriz da Silva Batista, Alefe Roger Silva França, Erick Rafael Dias Rates, Ruana Cardoso Lima, Sara Gemini-Piperni, Ralph Santos-Oliveira

Abstract:

Radium-223 dichloride ([²²³Ra]RaCl₂) is an alpha-particle-emitting radiopharmaceutical currently approved for the treatment of patients with castration-resistant prostate cancer, symptomatic bone metastases, and no known visceral metastatic disease. [²²³Ra]RaCl₂ is a bone-seeking calcium mimetic that binds to newly formed bone stroma, especially in osteoblastic or sclerotic metastases, killing tumor cells by inducing DNA breaks in a potent and localized manner. Nonetheless, successful therapy of osteosarcoma as a primary bone tumor is still a challenge. Nanomicelles are colloidal nanosystems widely used in drug development to improve blood circulation time, bioavailability, and specificity of therapeutic agents, among other applications. In addition, the enhanced permeability and retention effect of nanosystems, and the renal excretion of nanomicelles reported in most cases so far, are very attractive for achieving selective and increased accumulation at the tumor site as well as increasing the safety of [²²³Ra]RaCl₂ in the clinical routine. In the present work, [²²³Ra]RaCl₂ nanomicelles were produced, characterized, evaluated in vitro, and compared with pure [²²³Ra]RaCl₂ solution using SAOS2 osteosarcoma cells. The [²²³Ra]RaCl₂ nanomicelles were prepared using the amphiphilic copolymer Pluronic F127. Dynamic light scattering analysis of freshly produced [²²³Ra]RaCl₂ nanomicelles showed a mean size of 129.4 nm with a polydispersity index (PDI) of 0.303. After one week stored in the refrigerator, the mean size of the nanomicelles increased to 169.4 nm with a PDI of 0.381. Atomic force microscopy of the [²²³Ra]RaCl₂ nanomicelles exhibited spherical structures with heights reaching 1 µm, suggesting the filling of the Pluronic F127 nanomicelles with [²²³Ra]RaCl₂. The viability assay with [²²³Ra]RaCl₂ nanomicelles displayed a dose-dependent response, as was observed with pure [²²³Ra]RaCl₂. 
However, at the same dose, the [²²³Ra]RaCl₂ nanomicelles were 20% more efficient at killing SAOS2 cells than pure [²²³Ra]RaCl₂. These findings demonstrate the effectiveness of the nanosystem, validating the application of nanotechnology in targeted alpha therapy with [²²³Ra]RaCl₂. In addition, the [²²³Ra]RaCl₂ nanomicelles may be decorated with or incorporate a great variety of agents and compounds (e.g., monoclonal antibodies, aptamers, peptides) to overcome the limited use of [²²³Ra]RaCl₂.

Keywords: nanomicelles, osteosarcoma, radium dichloride, targeted alpha therapy

Procedia PDF Downloads 95
223 The Role of the Corporate Social Responsibility in Poverty Reduction

Authors: M. Verde, G. Falzarano

Abstract:

The paper examines the connection between corporate social responsibility (CSR), the capability approach, and poverty reduction; in particular, local employment development (LED) by way of CSR initiatives. The joint action of LED and CSR results in a win-win situation, not only for enterprises but for all the stakeholders involved; in this regard, subsidiarity and coordination between national and regional/local authorities are central to a socially oriented market economy. In the first section, CSR is analysed on the basis of its social function in the fight against poverty, understood as 'capabilities deprivation'. In the central part, attention is focused on the relationship between CSR and LED, i.e., on the role of enterprises in fostering capabilities development (employment). In the last part, potential solutions are presented, stressing the possible combinations. The benchmark is the enterprise as both an economic and a social institution: business should not be concerned merely with profit, but should pay attention to its sustainable impact and social contribution. How can this be achieved? The answer is CSR. The impact of CSR on poverty reduction is still little explored. Companies help to reduce poverty through economic contribution, human rights, and social inclusion; hence, business becomes an 'agent of development' in the fight against inequality. The starting point is the pyramid of social responsibility, in which ethical and philanthropic responsibilities involve programmes and actions aimed at the personal development of individuals, improving the human standard of living in all its forms. This includes poverty, a condition in which people cannot choose between different 'life options', ranging from level of education to employment. 
At this point, CSR comes into play and works on two dimensions, poverty reduction and poverty prevention, by means of a series of initiatives: first of all, job creation and the reduction of precarious work. Empowerment of local actors, financial support, and the combination of top-down and bottom-up initiatives are some of CSR's areas of activity. Positive effects have been observed on individual levels of education, access to capital, individual health status, empowerment of youth and women, and access to social networks; these effects depend on the type of CSR strategy. Indeed, CSR programmes should meet fundamental criteria, such as transparency, information about benefits, a coordination unit among institutions, and clearer guidelines. In this way, the advantages to corporate reputation and to the community translate, inter alia, into better job matching on the labour market. It is important to underline that success depends on measures specific to the areas in question, adapted to local needs in light of general principles and indices; the concrete commitment of all stakeholders involved is therefore decisive in achieving the goals. The enterprise would thus represent a concrete contribution to the pursuit of sustainable development and to the dissemination of social and wellbeing awareness.

Keywords: capability approach, local employment development, poverty, social inclusion

Procedia PDF Downloads 109
222 Assessment of Natural Flood Management Potential of Sheffield Lakeland to Flood Risks Using GIS: A Case Study of Selected Farms on the Upper Don Catchment

Authors: Samuel Olajide Babawale, Jonathan Bridge

Abstract:

Natural Flood Management (NFM) is promoted as part of sustainable flood management (SFM) in response to climate change adaptation. Stakeholder engagement is central to this approach, and current trends are progressively moving towards a collaborative learning approach in which stakeholder participation is perceived as an indicator of sustainable development. Within this methodology, participation embraces a diversity of knowledge and values underpinned by a philosophy of empowerment, equity, trust, and learning. To identify barriers to NFM uptake, a new understanding is needed of how stakeholder participation could be enhanced to benefit individual and community resilience within SFM. This is crucial in light of climate change threats and concerns about scientific reliability. In contributing to this new understanding, this research evaluated the interventions proposed for six UK NFM farms in the Sheffield Lakeland Partnership Area with reference to the Environment Agency's Working with Natural Processes (WWNP) potentials/opportunities. Three of the opportunities, namely Run-off Attenuation Potential of 1%, Run-off Attenuation Potential of 3.3%, and Riparian Woodland Potential, were modelled. In all the models, the interventions, though proposed or already in place, are not in agreement with the data presented by the EA WWNP. Findings show institutional weaknesses that inhibit the development of adequate local flood management solutions, with damaging implications for vulnerable communities. The gap in communication from practitioners poses a challenge to implementing real flood-mitigating measures that align with the lead agency's nationally accepted measures, which the farm management officers in this context identify as not feasible. Findings highlight a dominant top-down approach to management, with very minimal indication of local interactions. 
Current WWNP opportunities have been termed unrealistic by the people directly involved in the daily management of the farms, with too little emphasis on prevention and mitigation. The targeted approach suggested by the EA WWNP is set against adaptive flood management and community development. The study explores dimensions of participation using the self-reliance and self-help approach to develop a methodology that facilitates reflection on currently institutionalized practices and the need to reshape spaces of interaction to enable empowered and meaningful participation. Stakeholder engagement and resilience planning underpin this research. The findings suggest that different agencies have different perspectives on “community participation”. They also show that communities in the case study area appear to be the least influential, denied a real chance of discussing their situations and influencing decisions. This is against the background that these communities are in the most productive regions, contributing massively to national food supplies. The results are discussed with respect to their practical implications for addressing interagency partnerships and conducting grassroots collaborations that empower local communities and seek solutions to sustainable development challenges. This study takes a critical look at the challenges faced and the progress made locally in sustainable flood risk management and adaptation to climate change in the United Kingdom towards achieving the global 2030 agenda for sustainable development.

Keywords: natural flood management, sustainable flood management, sustainable development, working with natural processes, environment agency, run-off attenuation potential, climate change

Procedia PDF Downloads 52
221 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

A plethora of methods in the scientific literature tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence in financial institution databases, with the majority classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. It is the “BAD” clients, however, that we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier to financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. 
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of eight, including the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
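The abstract's system builds its formulas as C# LINQ expression trees; the following is a minimal Python analogue sketching the same ideas (tuple-encoded expression trees, evaluation against applicant properties, pre-order flattening, and a toy operator mutation). All names, the example formula, and the decision threshold are hypothetical illustrations, not the authors' actual evolved model.

```python
import random

# A formula is a nested tuple: ("add", ("mul", "x0", 2.0), "x1") means x0*2 + x1.
OPS = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def evaluate(node, variables):
    """Recursively evaluate a tuple-encoded expression tree."""
    if isinstance(node, tuple):
        op, left, right = node
        return OPS[op](evaluate(left, variables), evaluate(right, variables))
    if isinstance(node, str):          # leaf: an applicant property (variable)
        return variables[node]
    return node                        # leaf: a constant

def flatten(node):
    """Pre-order traversal, the flattened form the genetic operators act on."""
    if isinstance(node, tuple):
        op, left, right = node
        return [op] + flatten(left) + flatten(right)
    return [node]

def point_mutate(node, rng):
    """Toy mutation: occasionally swap an internal node's operator."""
    if not isinstance(node, tuple):
        return node
    op, left, right = node
    if rng.random() < 0.3:
        op = rng.choice(list(OPS))
    return (op, point_mutate(left, rng), point_mutate(right, rng))

tree = ("add", ("mul", "x0", 2.0), "x1")       # hypothetical evolved formula
score = evaluate(tree, {"x0": 3.0, "x1": 1.0})
print(score)                                   # 3*2 + 1 = 7.0
print(flatten(tree))                           # ['add', 'mul', 'x0', 2.0, 'x1']
label = "GOOD" if score >= 5.0 else "BAD"      # hypothetical decision threshold
```

Because each variable appears explicitly in the formula, dimensionality reduction falls out for free: any client property absent from the evolved tree demonstrably plays no role in the score.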

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 95
220 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India

Authors: Rituparna Pal, Faiz Ahmed

Abstract:

Neighbourhood energy efficiency is a newly emerged term addressing the quality of the urban strata of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers in developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realized only lately, as the cities, towns and other areas comprising this massive global urban strata have begun facing strong blows from climate change, the energy crisis, rising costs, and an alarming shortfall in the justice that urban areas require. This step towards urban sustainability can therefore be read as a ‘retrofit action’, covering up an already affected urban structure. Even if energy efficiency is pursued for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broad term with countless parameters and policies through which the loop can be closed. Neighbourhood energy efficiency can be an integral part of it, where indices of neighbourhood-scale indicators, block-level indicators and building-physics parameters can be understood, analyzed and synthesized into guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency itself but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort, and natural ventilation. Apart from designing less energy-hungry buildings, it is necessary to create a built environment that places less stress on buildings to consume energy. Substantial literary analysis exists for Western contexts, prominently Spain, Paris and Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban strata. 
The site for the study was selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper suggests a methodical approach to quantifying energy and sustainability indices in detail by involving several macro-, meso- and micro-level covariates and parameters. Several iterations were made at both macro and micro levels and were subjected to simulation, computation and mathematical models, and finally to comparative analysis. Parameters at all levels were analyzed to identify the best-case scenarios, which were in turn extrapolated to the macro level, finally producing a proposed model for an energy-efficient neighbourhood and worked-out guidelines, with significance and correlations derived.

Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters

Procedia PDF Downloads 153
219 A Comprehensive Approach to Create ‘Livable Streets’ in the Mixed Land Use of Urban Neighborhoods: A Case Study of Bangalore Street

Authors: K. C. Tanuja, Mamatha P. Raj

Abstract:

“People have always lived on streets. They have been the places where children first learned about the world, where neighbours met, the social centres of towns and cities, the rallying points for revolts, the scenes of repression. The street has always been the scene of this conflict, between living and access, between resident and traveller, between street life and the threat of death” (Donald Appleyard, Livable Streets). Urbanisation is happening rapidly all over the world. As populations increase in urban settlements, it is necessary to provide quality of life to all inhabitants. Urban design is a strategic place-making discipline; its principles promote envisioning any place as environmentally, socially and economically viable. Urban design strategies include building mass, transit development, economic viability and sustenance, and social aspects. Cities are wonderful inventions of diversity: people, things, activities, ideas and ideologies. Cities should be smarter and adaptable to present technology and intelligent systems. Streets represent the community in both social and physical terms. Streets are an urban form that responds to many issues and are central to urban life. Streets serve livability, safety, mobility, places of interest, economic opportunity, ecological balance, and mass transit. Urban streets are places where people walk, shop, meet and engage in the social and recreational activities that make the urban community enjoyable. Streets knit together the urban fabric of activities. Urban streets become livable through the introduction of social networks, enhancing the pedestrian character with good design features that minimize the impact of motor-vehicle use on pedestrians. Livable streets give spatial definition to the public right of way on urban streets. Streets in India have traditionally been the public spaces where social life has happened for ages. 
Streets constitute the urban public realm where people congregate, celebrate and interact. Streets are public places that can promote social interaction, active living and community identity, and potential contributors to a better living environment, knitting together the urban fabric of people and places that make up a community. Livable, or ‘complete’, streets make our streets social places, with roadways and sidewalks that are accessible, safe, efficient and usable for all people. The purpose of this paper is to understand the concept of the livable street and the parameters of livability on urban streets. Streets should be designed with pedestrians as the main users, creating spaces and furniture for social interaction that serve the needs of people of all ages and abilities. Street problems, such as congestion due to street width, traffic movement, adjacent land use and type of movement, need to be addressed through redesign, improving conditions by defining clear movement paths for vehicles and pedestrians. Well-designed spatial qualities of the street enhance the street environment and livability, and thereby deliver quality of life to pedestrians. A methodology was derived to arrive at typologies in street design after analysing the existing situation and comparing it with livability standards. It was Donald Appleyard’s Livable Streets that laid out the social effects of streets, creating the social network needed to achieve livable streets.

Keywords: livable streets, social interaction, pedestrian use, urban design

Procedia PDF Downloads 121
218 Bio-Nanotechnology Approach of Nano-Size Iron Particles as Promising Iron Supplements: An Exploratory Study to Combat the Problems of Iron Fortification in Children and Pregnant Women of Rural India

Authors: Roshni Raha, Kavya P., Gayathri M.

Abstract:

India, with a humongous population, remains among the world's poorest developing nations in terms of nutritional status, with iron deficiency anaemia (IDA) affecting much of the population. Despite efforts over the past decades, India's anaemia prevalence has not been reduced. Researchers are interested in developing therapies that minimize the typical side effects of oral iron and optimize iron-salt-based treatment through delivery methods based on the physiology of hepcidin regulation; however, such iron therapies must avoid worsening infection. This article explores bio-nanotechnology as an alternative, promising means of providing iron supplements for the treatment of diarrhoea and gut inflammation in children and pregnant women. The article is an exploratory study using a literature survey and secondary research from review papers. In the realm of biotechnology, nanoparticles have become prominent due to the unexpected variations in surface characteristics caused by particle size: particle size distribution and shape exhibit unusual, enhanced characteristics when reduced to the nanoscale. The article attempts to develop a model for a nanotechnology-based solution in iron fortification to combat the problems of diarrhoea and gut inflammation. Dimensions considered in the model include the size, shape, source, and biosynthesis of the iron nanoparticles. Another area of investigation addressed in the article is the cost-effective, biocompatible production of these iron nanoparticles. Studies have demonstrated that a substantial reduction of metal ions to form nanoparticles from the bulk metal occurs in plants because of the presence of a wide diversity of biomolecules. Using this concept, the paper investigates the effectiveness and impact of using similar sources for the biological synthesis of iron nanoparticles. 
Results showed that iron particles, when prepared at nanometre size, offer potential advantages. When the particle size of an iron compound decreases to the nano configuration, its surface area increases, which improves its solubility in gastric acid, leading to higher absorption, higher bioavailability, and minimal organoleptic changes in food. The literature reviewed reports a safe, effective profile for reducing IDA, with few negative effects. Considering all the parameters, it is concluded that iron particles in nano configuration can serve as alternative iron supplements for the treatment of IDA, with nanoparticles of ferric phosphate, ferric pyrophosphate, and iron oxide as the supplements of choice. From a sourcing perspective, the paper concludes that green sources are the primary sources for the biological synthesis of iron nanoparticles; this is also a cost-effective strategy, since the goal is to treat the target population in rural India. Bio-nanotechnology thus serves as an alternative and promising route for iron supplementation due to its low cost, excellent bioavailability, and minimal organoleptic impact. One area of future research is to explore which sizes and shapes of iron nanoparticles would be suitable for different age groups of pregnant women and children, and whether this is influenced by the topography of certain areas.
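The size-to-surface-area argument above follows from simple geometry: for a spherical particle, surface area per unit mass scales as 1/radius. A minimal sketch, assuming idealised spherical particles and an illustrative density near that of magnetite (Fe3O4); the particle sizes are hypothetical examples, not values from the study.

```python
def specific_surface_area(radius_m, density_kg_m3):
    """Surface area per unit mass for a spherical particle (m^2/kg).

    SA/V for a sphere is 3/r, so surface area per mass = 3 / (r * density).
    """
    return 3.0 / (radius_m * density_kg_m3)

DENSITY = 5240.0                                   # ~Fe3O4, illustrative
micro = specific_surface_area(5e-6, DENSITY)       # 5 um particle
nano = specific_surface_area(50e-9, DENSITY)       # 50 nm particle
print(nano / micro)                                # 100x more surface per unit mass
```

Shrinking the particle from 5 µm to 50 nm multiplies the available surface per unit mass a hundredfold, which is the mechanism behind the improved gastric solubility claimed above.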

Keywords: anemia, bio-nanotechnology, iron-fortification, nanoparticle

Procedia PDF Downloads 49
217 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. 
We use Spike-Time-Dependent-Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same pattern through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are first allowed to fire in a particular order. The membrane potentials are then reset, and the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as it is possible to learn synaptic strength with STDP, making a neuron more sensitive to its input, it is possible to learn dendritic relationships with STDP, making the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
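The forward-versus-reversed experiment above can be caricatured in a few lines. This is not the authors' Bindsnet implementation; it is a toy sketch under strong assumptions (no leak, unit weights, one spike per input synapse) whose only purpose is to show why a pairwise synaptic-relation term makes the summed depolarisation order-dependent while plain linear summation is order-blind.

```python
def response(sequence, relations=None):
    """Total depolarisation after firing each input synapse once, in order.

    With relations=None the neuron sums unit weights linearly (plain LIF
    caricature); otherwise each spike's impact is scaled by its relation
    to the previously active synapse (dendritic caricature).
    """
    v, prev = 0.0, None
    for syn in sequence:
        impact = 1.0                      # base synaptic weight
        if relations is not None and prev is not None:
            impact *= 1.0 + relations.get((prev, syn), 0.0)
        v += impact
        prev = syn
    return v

# Hypothetical asymmetric relations: potentiation in one direction only.
rel = {(0, 1): 0.5, (1, 2): 0.5, (2, 3): 0.5, (3, 4): 0.5}

forward, backward = [0, 1, 2, 3, 4], [4, 3, 2, 1, 0]
print(response(forward), response(backward))            # 5.0 5.0  (order-blind)
print(response(forward, rel), response(backward, rel))  # 7.0 5.0  (order-aware)
```

Because the relation dictionary is asymmetric, only the forward ordering triggers the potentiation terms, so the two orderings become distinguishable at the soma.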

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 101
216 Triple Case Phantom Tumor of Lungs

Authors: Angelis P. Barlampas

Abstract:

Introduction: The term phantom lung mass describes an ovoid collection of fluid within an interlobular fissure, which initially creates the impression of a mass. The problem of correct differential diagnosis is great, especially in plain radiography. A case is presented with three nodular pulmonary foci whose shape, location, and density, together with the presence of chronic loculated pleural effusions, suggest the presence of multiple phantom tumors of the lung. Purpose: The aim of this paper is to draw the attention of non-experienced and non-specialized physicians to the existence of benign findings that mimic pathological conditions and vice versa. The careful study of a radiological examination, and comparison with previous exams or further workup, protects against hasty wrong conclusions. Methods: A hospitalized patient underwent a non-contrast CT scan of the chest as part of her general workup. Results: Computed tomography revealed pleural effusions, some of them loculated, an increased cardiothoracic index, and the presence of three nodular foci, one in the left lung and two in the right, with a maximum density of up to 18 Hounsfield units and a mean diameter of approximately five centimeters. Two of them are located in the characteristic anatomical position of the major interlobular fissure. The third is located in the posterior basal part of the right lower lobe; it presents the same characteristics as the previous ones and is likely a loculated fluid collection within an accessory interlobular fissure or a cyst, in the context of the patient's more general pleural entrapments and loculations. 
The differential diagnosis of nodular foci based on their imaging characteristics includes the following: a) rare metastatic foci with low density (liposarcoma, mucinous tumors of the digestive or genital system, necrotic metastatic foci, metastatic renal cancer, etc.), b) necrotic foci at multiple primary lung tumor locations (squamous cell carcinoma, etc.), c) hamartomas of the lung, d) fibrous tumors of the interlobular fissures, e) lipoid pneumonia, f) fluid collections within the interlobular fissures, g) lipoma of the lung, h) myelolipomas of the lung. Conclusions: A collection of fluid within an interlobular fissure of the lung can give the false impression of a lung mass, particularly on plain chest radiography. With computed tomography, the ability to measure the density of a lesion, combined with the high anatomical detail it provides on the location and characteristics of the lesion, can lead relatively easily to the correct diagnosis. In cases of doubt or image artifacts, comparison with previous or subsequent examinations can resolve any disagreement, while in rare cases intravenous contrast may be necessary.

Keywords: phantom mass, chest CT, pleural effusion, cancer

Procedia PDF Downloads 33
215 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements

Authors: Mohammad R. Bhuyan, Mohammad J. Khattak

Abstract:

Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of flexible pavement diminishes significantly because of reflective cracks, and highway agencies have struggled for decades to prevent or mitigate them in order to increase pavement service lives. The root cause of reflective cracking is the shrinkage cracking which occurs in soil-cement bases during the cement hydration process. The primary factor driving shrinkage is the cement content of the soil-cement mixture. With increasing cement content, the soil-cement base gains the strength and durability necessary to withstand traffic loads; at the same time, higher cement content creates more shrinkage, resulting in more reflective cracks in pavements. Historically, various US states have used soil-cement bases for constructing flexible pavements. The state of Louisiana (USA) had been using 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield 2.0 MPa (300 psi) 7-day compressive strength and are termed cement stabilized design (CSD). As these CSD bases generate significant reflective cracking, another soil-cement base design has been utilized, adding 4 to 6 percent cement content, called cement treated design (CTD), which yields 1.0 MPa (150 psi) 7-day compressive strength. The reduced cement content in CTD bases is expected to minimize shrinkage cracking and thus increase pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases relative to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data. 
It should be noted that the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, in which more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, service life extension and the area under the distress-performance curve were considered as benefits. It was found that CTD bases extended pavement service lives by 1 to 5 years based on transverse cracking, as compared to CSD bases. On the other hand, service lives based on longitudinal and alligator cracking, rutting, and roughness index remained the same. Hence, CTD bases provide some service life extension (2.6 years, on average) for the controlling distress, transverse cracking, while being less expensive due to their lower cement content. Consequently, CTD bases are 20% more cost-effective than the traditional CSD bases when both are compared by the net benefit-cost ratio obtained across all distress types.

Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement

Procedia PDF Downloads 142
214 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study

Authors: Kaaryn Cater

Abstract:

People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Highly sensitive people have a highly reactive nervous system and are more impacted by both positive and negative environmental conditions (Differential Susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (Vantage Sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is Sensory Processing Sensitivity. The hallmarks of Sensory Processing Sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to observe in new and novel situations, and a propensity to become overwhelmed when over-stimulated. Educational advantages associated with high sensitivity include creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to some educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed-methods study was undertaken. In the first, quantitative study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). The inclusion criterion was current or previous postsecondary education experience. The survey was shared on social media, and snowball recruitment was employed (n=365). The Excel spreadsheets were uploaded to the Statistical Package for the Social Sciences (SPSS) v26, and descriptive statistics indicated normal distributions. 
T-tests and analysis of variance (ANOVA) calculations found no difference in the responses of demographic groups, and Principal Components Analysis together with post-hoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. This study included a response field to register interest in further research. Respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse the data, and a descriptive approach was employed to present data reflective of participant experience. The results of this study found that highly sensitive students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education-sector awareness of environmental sensitivity and to include sensitivity in sector and institutional diversity and inclusion initiatives.
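The study's quantitative pipeline (group comparisons via one-way ANOVA) was run in SPSS; as a minimal stdlib sketch of the statistic involved, the function below computes the one-way ANOVA F-statistic by hand. The group scores are made-up toy values, not data from the study.

```python
def one_way_anova_f(groups):
    """F-statistic for a one-way ANOVA over lists of scores.

    F = (between-group mean square) / (within-group mean square).
    """
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical PSISS factor scores for three sensitivity-percentile groups
low, medium, high = [1, 2, 3], [2, 3, 4], [3, 4, 5]
print(one_way_anova_f([low, medium, high]))   # 3.0
```

The resulting F would then be compared against the F-distribution with (df_between, df_within) degrees of freedom to obtain a p-value, which is the step SPSS performs automatically.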

Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity

Procedia PDF Downloads 43
213 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early stage technologies have been particularly challenging to manage due to their numerous, high-degree uncertainties. Most results coming directly out of a research lab are at an early, if not infant, stage, and a long and uncertain commercialization process awaits them. The majority of such lab technologies go nowhere and are never commercialized, for various reasons; any effort or financial resources put into managing them turn fruitless. High stakes naturally call for better results, which makes the patenting decision harder. A good and well-protected patent goes a long way toward commercialization of a technology. Our preliminary research showed that no simple yet productive procedure existed for such valuation; most studies to date have been theoretical and overly comprehensive, offering few practical suggestions. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to move through the later steps of managing early stage technologies. We provide a procedure to thriftily value a technology and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors for the patenting decision using survey ratings; these ratings then helped us generate relative weights for the subsequent scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts, whose inputs produced a general yet highly practical cut schedule. 
Such a schedule of realistic practices has not previously been reported in the research literature. Although a technology office may choose to deviate from our cuts, what we offer here provides at least a simple and meaningful starting point. The procedure was welcomed by practitioners on our expert panel and by university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instituting a heuristically simple yet theoretically solid method for the patenting decision. Our findings identify the top decision factors, decision processes, and decision thresholds of key parameters, offering a practical perspective that complements extant knowledge. Our results could be affected by our sample size and somewhat biased by our focus on the Silicon Valley area. Future research, with larger data sets and more insights, may further train and validate our parameter values in order to obtain more consistent results and to analyze the decision factors for different industries.

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 318
212 Adaptability in Older People: A Mixed Methods Approach

Authors: V. Moser-Siegmeth, M. C. Gambal, M. Jelovcak, B. Prytek, I. Swietalsky, D. Würzl, C. Fida, V. Mühlegger

Abstract:

Adaptability is the capacity to adjust without great difficulty to changing circumstances. Within our project, we aimed to detect whether older people living in a long-term care hospital lose the ability to adapt. Theoretical concepts are contradictory in their statements, and there is also a lack of evidence in the literature on how the adaptability of older people changes over time. The following research questions were generated: Are older residents of a long-term care facility able to adapt to changes in their daily routine? How long does it take older people to adapt? The study was designed as a convergent parallel mixed-methods intervention study, carried out over a four-month period across seven wards of a long-term care hospital. As the planned intervention, a change of meal times was introduced. The residents were surveyed with qualitative interviews, quantitative questionnaires, and diaries before, during, and after the intervention. In addition, a survey of the nursing staff was carried out to detect changes in the people they care for and how long it took them to adapt. Quantitative data were analysed with SPSS, qualitative data with a summarizing content analysis. The average age of the participating residents was 82 years, and the average length of stay was 45 months. Adapting to new situations did not cause problems for the older residents: 47% stated that their everyday life had not changed with the new meal times, 24% indicated 'neither nor', and only 18% responded that their daily life had changed considerably due to the changeover. The residents' diaries, kept over the entire investigation period, showed no changes with regard to increased or reduced activity. With regard to sleep quality, assessed with the Pittsburgh Sleep Quality Index, the cross-tabulation showed little change in sleep behaviour between the two survey periods (pre-phase to follow-up phase).
The subjective sleep quality of the residents was not affected. The nursing staff pointed out that, with good information in advance, changes are not a problem. The ability to adapt to changes does not deteriorate with age or with moving into a long-term care facility; it takes only a few days to get used to new situations, which the nursing staff confirmed. There are, however, determinants such as health status that might make adjustment to new situations more difficult. Among the limitations, the small sample size of the quantitative data collection must be emphasized. Furthermore, it is unclear to what extent the quantitative and qualitative samples represent the total population, since only residents without cognitive impairment from selected units participated, while the majority of residents have cognitive impairments. It is also worth discussing whether, and how well, the diary method is suitable for examining the daily structure of older people.

Keywords: adaptability, intervention study, mixed methods, nursing home residents

Procedia PDF Downloads 124