Search results for: common words

1105 Influence of 3D Printing Parameters on Surface Finish of Ceramic Hip Prostheses Fixed by Means of Osteointegration

Authors: Irene Buj-Corral, Ali Bagheri, Alejandro Dominguez-Fernandez

Abstract:

In recent years, the use of ceramic prostheses as implants in some parts of the body has become common. In the present study, research has focused on replacement of the acetabulum, which is part of the pelvis. Metallic prostheses have shown some problems, such as the release of metal ions into the patient's blood. In addition, fracture of liners and squeaking between the surface of the femoral head and the inner surface of the acetabulum have been reported. Ceramic prostheses have the advantages of low debris and high strength, although they are more difficult to manufacture than metallic ones. Specifically, new designs aim for an acetabulum in which the outer surface is porous, allowing proliferation of cells and fixation of the prosthesis by means of osteointegration, while the inner surface must be smooth enough to ensure that movement between the femoral head and the inner surface proceeds smoothly. In the present study, 3D printing technologies are used for manufacturing ceramic prostheses. In the Fused Deposition Modelling (FDM) process, 3D printed plastic prostheses are obtained by melting a plastic filament and subsequently depositing it on a glass surface. A similar process is applied to ceramics, in which ceramic powders need to be mixed with a liquid polymer before being deposited. After 3D printing, parts are subjected to a sintering process in an oven so that they can achieve their final strength. In the present paper, the influence of printing parameters on the surface roughness of 3D printed ceramic parts is presented. A three-parameter full factorial design of experiments was used. The selected variables were layer height, infill, and nozzle diameter. The responses were average roughness Ra and mean roughness depth Rz. Regression analysis was applied to the responses in order to obtain mathematical models for them. Results showed that surface roughness depends mainly on the layer height and nozzle diameter employed, while infill was found not to be significant. In order to obtain low surface roughness, low layer height and low infill should be selected. As a conclusion, layer height and infill are important parameters for obtaining good surface finish in ceramic 3D printed prostheses. However, the use of too low an infill could lead to prostheses with low mechanical strength. Such prostheses might not be able to bear the static and dynamic loads to which they are subjected once they are implanted in the body. This issue will be addressed in further research.
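As a brief illustration of the analysis described above, the following is a minimal sketch, with invented factor levels and roughness values (not the study's data), of fitting a main-effects regression model for Ra to a three-factor full factorial design using statsmodels.

```python
# Minimal sketch (hypothetical data): fitting a regression model for average
# roughness Ra from a three-factor full factorial DOE. Factor levels and
# response values below are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative design: layer height (mm), infill (%), nozzle diameter (mm)
doe = pd.DataFrame({
    "layer_height": [0.2, 0.2, 0.2, 0.2, 0.4, 0.4, 0.4, 0.4],
    "infill":       [50,  50,  90,  90,  50,  50,  90,  90],
    "nozzle":       [0.4, 0.8, 0.4, 0.8, 0.4, 0.8, 0.4, 0.8],
    "Ra":           [3.1, 4.0, 3.2, 4.1, 6.0, 7.2, 6.1, 7.4],  # invented values
})

# First-order model with main effects; interactions could be added with ':' terms
model = smf.ols("Ra ~ layer_height + infill + nozzle", data=doe).fit()
print(model.summary())  # p-values indicate which factors are significant
```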

Keywords: ceramic, hip prostheses, surface roughness, 3D printing

Procedia PDF Downloads 196
1104 Characterization of the Blood Microbiome in Rheumatoid Arthritis Patients Compared to Healthy Control Subjects Using V4 Region 16S rRNA Sequencing

Authors: D. Hammad, D. P. Tonge

Abstract:

Rheumatoid arthritis (RA) is a disabling and common autoimmune disease in which the body's immune system attacks healthy tissues. This results in complicated and long-lasting immune responses of the kind that normally occur only when the immune system encounters a foreign object. RA affects millions of people and causes joint inflammation, ultimately leading to the destruction of cartilage and bone. Interestingly, the disease mechanism still remains unclear. It is likely that RA occurs as a result of a complex interplay of genetic and environmental factors, including an imbalance in the microorganism population inside our body. The human microbiome or microbiota is an extensive community of microorganisms in and on the bodies of animals, comprising bacteria, fungi, viruses, and protozoa. Recently, the development of molecular techniques to characterize entire bacterial communities has renewed interest in the involvement of the microbiome in the development and progression of RA. We believe that an imbalance in some of the specific bacterial species in the gut, mouth, and other sites may lead to atopobiosis, the translocation of these organisms into the blood, and that this may lead to changes in immune system status. The aim of this study was, therefore, to characterize the microbiome of RA serum samples in comparison to healthy control subjects using 16S rRNA gene amplification and sequencing. Serum samples were obtained from healthy control volunteers and from patients with RA, both prior to and following treatment. The bacterial community present in each sample was identified using V4 region 16S rRNA amplification and sequencing. Bacterial identification, to the lowest taxonomic rank, was performed using a range of bioinformatics tools. The proportions of the Lachnospiraceae, Ruminococcaceae, and Halomonadaceae families were significantly increased in the serum of RA patients compared with healthy control serum. Furthermore, the abundance of Bacteroides and of the Lachnospiraceae NK4A136 group, Lachnospiraceae UCG-001, Ruminococcaceae UCG-014, Ruminococcus 1, and Shewanella was also raised in the serum of RA patients relative to healthy control serum. These data support the notion of a blood microbiome and reveal RA-associated changes that may have significant implications for biomarker development and may present much-needed opportunities for novel therapeutic development.
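The abstract does not specify the statistical pipeline, so the following is only a minimal sketch, with invented read counts, of one common way such a group comparison could look: converting 16S counts to relative abundances and testing a family's proportion between RA and control samples with a Mann-Whitney U test.

```python
# Minimal sketch (toy counts, not study data) of a per-family group comparison.
import numpy as np
from scipy.stats import mannwhitneyu

# Rows = samples, columns = bacterial families (counts are invented)
ra_counts      = np.array([[120, 30, 15], [140, 25, 22], [110, 40, 18]])
control_counts = np.array([[60, 35, 12], [55, 45, 10], [70, 38, 14]])

def relative_abundance(counts):
    """Convert raw read counts to per-sample relative abundances."""
    return counts / counts.sum(axis=1, keepdims=True)

ra_family   = relative_abundance(ra_counts)[:, 0]        # e.g. Lachnospiraceae column
ctrl_family = relative_abundance(control_counts)[:, 0]

stat, p = mannwhitneyu(ra_family, ctrl_family, alternative="two-sided")
print(f"Relative abundance comparison: U = {stat:.1f}, p = {p:.3f}")
```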

Keywords: blood microbiome, gut and oral bacteria, rheumatoid arthritis, 16S rRNA gene sequencing

Procedia PDF Downloads 130
1103 Optical and Surface Characteristics of Direct Composite, Polished and Glazed Ceramic Materials After Exposure to Tooth Brush Abrasion and Staining Solution

Authors: Maryam Firouzmandi, Moosa Miri

Abstract:

Aim and background: Esthetic and structural reconstruction of anterior teeth may require the application of different restorative materials. In this regard, the combination of a direct composite veneer and a ceramic crown is a common treatment option. Despite the initial matching, their long-term harmony in terms of optical and surface characteristics is a matter of concern. The purpose of this study was to evaluate and compare the optical and surface characteristics of direct composite, polished ceramic, and glazed ceramic materials after exposure to toothbrush abrasion and a staining solution. Materials and Methods: Ten 2 mm thick disk-shaped specimens were prepared from IPS Empress Direct composite and twenty specimens from IPS e.max CAD blocks. The composite specimens and ten ceramic specimens were polished using D&Z composite and ceramic polishing kits. The other ten ceramic specimens were glazed with glazing liquid. Baseline measurements of roughness, CIELAB coordinates, and luminance were recorded. The specimens then underwent thermocycling, tooth brushing, and coffee staining, after which the final measurements were recorded. The color coordinates were used to calculate ΔE76, ΔE00, the translucency parameter, and the contrast ratio. Data were analyzed by one-way ANOVA and post hoc LSD tests. Results: Baseline and final roughness within each study group were not different. At baseline, the order of roughness for the study groups was as follows: composite < glazed ceramic < polished ceramic, but after aging, no difference between the ceramic groups was detected. The comparison of baseline and final luminance was similar to roughness but in reverse order. Unlike the change in roughness, which was comparable between the groups, the change in luminance of the glazed ceramic group was higher than in the other groups. ΔE76 and ΔE00 in the composite group were 18.35 and 12.84, in the glazed ceramic group 1.3 and 0.79, and in the polished ceramic group 1.26 and 0.85. These values for the composite group were significantly different from those of the ceramic groups. The translucency of the composite at baseline was significantly higher than at the final measurement, but there was no significant difference between these values in the ceramic groups. The composite was more translucent than the ceramics at both the baseline and final measurements. Conclusion: The glazed ceramic surface was smoother than the polished ceramic. Aging did not change the roughness. The optical properties (color and translucency) of the composite were influenced by aging. The luminance of composite, glazed ceramic, and polished ceramic decreased after aging, but the reduction in glazed ceramic was more pronounced.
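For reference, ΔE76 is the Euclidean distance between two CIELAB colours; the sketch below computes it from illustrative baseline and post-aging coordinates (not the study's measurements). ΔE00 (CIEDE2000) uses a considerably longer formula and is omitted here.

```python
# Minimal sketch of the ΔE*ab (ΔE76) colour-difference calculation between
# baseline and post-aging CIELAB coordinates; values are illustrative only.
import numpy as np

def delta_e76(lab1, lab2):
    """Euclidean distance between two CIELAB colours (L*, a*, b*)."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

baseline_composite = (78.0, 2.0, 18.0)   # assumed L*, a*, b* before aging
aged_composite     = (66.0, 6.0, 30.0)   # assumed values after thermocycling/brushing/coffee

print(f"ΔE76 = {delta_e76(baseline_composite, aged_composite):.2f}")
```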

Keywords: ceramic, tooth-brush abrasion, staining solution, composite resin

Procedia PDF Downloads 184
1102 Barriers and Facilitators of Community Based Mental Health Intervention (CMHI) in Rural Bangladesh: Findings from a Descriptive Study

Authors: Rubina Jahan, Mohammad Zayeed Bin Alam, Sazzad Chowdhury, Sadia Chowdhury

Abstract:

Access to mental health services in Bangladesh is a tale of urban privilege and rural struggle. Mental health services in the country are primarily centred in urban medical hospitals, with only 260 psychiatrists for a population of more than 162 million, while rural populations face far more severe and daunting challenges. In alignment with the World Health Organization's perspective on mental health as a basic human right and a crucial component of personal, community, and socioeconomic development, SAJIDA Foundation, a value-driven non-government organization in Bangladesh, has introduced a Community-Based Mental Health Intervention (CMHI) programme to fill critical gaps in mental health care, providing accessible and affordable community-based services to protect and promote mental health and offering support for those grappling with mental health conditions. The CMHI programme is being implemented in three districts of Bangladesh, two of which are remote and highly climate-vulnerable areas, targeting a total of 6,797 individuals. The intervention plan involves screening all participants using a 10-point vulnerability assessment tool to identify vulnerable individuals. The underlying assumption is that vulnerability is driven primarily by biological, psychological, social, and economic factors and that vulnerable individuals are at an increased risk of developing common mental health issues. Those identified as vulnerable with high-risk and emergency conditions will receive Mental Health First Aid (MHFA) and undergo further screening with the GHQ-12 to be classified as cases or non-cases. The identified cases are then referred to community lay counsellors with basic training and knowledge, who provide 4-6 sessions on problem solving or behavioural activation. In situations where no improvement occurs after lay counselling, or for individuals with severe mental health conditions, a referral process will be initiated, directing individuals to appropriate mental health care. In our presentation, we will report the findings from the 6-month pilot implementation, focusing on community-based screening versus the outcomes of the lay counselling sessions, and on the barriers and facilitators of implementing community-based mental health care in a resource-constrained country like Bangladesh.

Keywords: community-based mental health, lay counseling, rural Bangladesh, treatment gap

Procedia PDF Downloads 41
1101 The Influence of the Soil in the Vegetation of the Luki Biosphere Reserve in the Democratic Republic of Congo

Authors: Sarah Okende

Abstract:

It is universally recognized that the forests of the Congo Basin remain a common good and a complex, insufficiently known ecosystem. Historically and throughout the world, forests have been valued for the multiple products and benefits they provide. In addition to their major role in the conservation of global biodiversity and in the fight against climate change, these forests also play an essential role in regional and global ecology. This is particularly the case for the Luki Biosphere Reserve, a highly diversified evergreen Guinean-Congolese rainforest. Despite efforts toward sustainable management of the reserve, the role played by the soil in shaping its vegetation does not seem to attract much interest from the general public or even from scientists. The Luki Biosphere Reserve is located in the west of the DRC, more precisely in the south-east of the Congolese Mayombe, in the province of Bas-Congo. The vegetation of the Luki Biosphere Reserve is very heterogeneous and diversified. It ranges from grassy formations to semi-evergreen dense humid forests, passing through edaphic formations on hydromorphic soils (aquatic and semi-aquatic vegetation; messicole and segetal vegetation; gascaricole vegetation; young secondary forests with Musanga cecropioides, Xylopia aethiopica, and Corynanthe paniculata; mature secondary forests with Terminalia superba and Hymenostegia floribunda; primary forest with Prioria balsamifera; and climax forests with Gilbertiodendron dewevrei and Gilletiodendron kisantuense). Field observations and a review of previous and current work carried out in the Luki Biosphere Reserve are the methodological approaches of this study, whose aim is to show the impact of soil types in determining the varieties of vegetation. The results obtained show that the four soil types present (purplish red soils developing on amphibolites; red soils developed on gneisses; yellow soils occurring on gneisses and quartzites; and alluvial soils developed on recent alluvium) have, alongside other environmental factors, a major influence on the different facies of the vegetation of the Luki Biosphere Reserve. In conclusion, the Luki Biosphere Reserve is characterized by a wide variety of biotopes determined by the nature of the soil, the relief, the microclimates, human activity, and the hydrography. Overall management (soil, biodiversity) in the Luki Biosphere Reserve is important for maintaining the ecological balance.

Keywords: soil, biodiversity, forest, Luki, rainforest

Procedia PDF Downloads 82
1100 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total-field magnetometer arrays. Our research has focused on the development of a vertically integrated suite of platforms, all utilizing common data acquisition, data processing, and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system, providing immediate access to data and metadata for remote processing, analysis, and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
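As background to the dipole modeling mentioned above, the following minimal sketch implements the standard point-dipole forward model; the source position, moment, and survey geometry are assumptions for illustration, not values from the surveys described.

```python
# Minimal sketch of the magnetic point-dipole forward model that underlies
# dipole-based classification of anomalies (illustrative parameters only).
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(obs, src, moment):
    """Magnetic flux density (T) of a point dipole at `src` observed at `obs`."""
    r = np.asarray(obs, float) - np.asarray(src, float)
    r_norm = np.linalg.norm(r)
    r_hat = r / r_norm
    m = np.asarray(moment, float)
    return MU0 / (4 * np.pi) * (3 * r_hat * np.dot(m, r_hat) - m) / r_norm**3

# Total-field anomaly along a 1 m-altitude transect over an assumed buried dipole
earth_dir = np.array([0.0, 0.0, -1.0])  # assumed inducing-field direction
for x in np.linspace(-2, 2, 5):
    b = dipole_field([x, 0.0, 1.0], [0.0, 0.0, -0.5], [0.0, 0.0, 5.0])
    print(f"x = {x:+.1f} m, anomaly along field ≈ {np.dot(b, earth_dir) * 1e9:.1f} nT")
```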

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 463
1099 Valorization of Seafood and Poultry By-Products as Gelatin Source and Quality Assessment

Authors: Elif Tugce Aksun Tumerkan, Umran Cansu, Gokhan Boran, Fatih Ozogul

Abstract:

Gelatin is a mixture of peptides obtained from collagen by partial thermal hydrolysis. It is an important and useful biopolymer that is used in food, pharmaceutical, and photographic products. Generally, gelatins are sourced from pig skin and bones and beef bone and hide, but within the last decade, the use of alternative gelatin sources has attracted some interest. In this study, the functional properties of gelatin extracted from seafood and poultry by-products were evaluated. For this purpose, the skins of skipjack tuna (Katsuwonus pelamis) and frog (Rana esculenta) were used as seafood by-products and chicken skin as a poultry by-product, serving as raw materials for gelatin extraction. Following the extraction of gelatin, all samples were lyophilized and stored in plastic bags at room temperature. To compare the gelatins obtained, the chemical composition and common quality parameters, including Bloom value, gel strength, and viscosity, in addition to others such as melting and gelling temperatures, hydroxyproline content, and colorimetric parameters, were determined. The results showed that the highest protein content was obtained in frog gelatin, with 90.1%, and the highest hydroxyproline content was in chicken gelatin, with a value of 7.6%. Frog gelatin showed a significantly higher (P < 0.05) melting point (42.7°C) compared to that of fish (29.7°C) and chicken (29.7°C) gelatins. The Bloom value of gelatin from frog skin was found to be higher (363 g) than those of chicken and fish gelatins (352 and 336 g, respectively) (P < 0.05). While fish gelatin had a higher lightness (L*) value (92.64) compared to chicken and frog gelatins, the redness/greenness (a*) value was significantly higher in frog skin gelatin. Based on the results obtained, it can be concluded that the skins of different animals with high commercial value may be utilized as alternative sources to produce gelatin with high yield and desirable functional properties. Functional and quality analysis of gelatin from frog, chicken, and tuna skin showed that poultry and seafood by-products can be used as alternative sources to mammalian gelatin. The functional properties, including Bloom strength, melting point, and viscosity, of gelatin from frog skin were superior to those of the chicken and tuna skin gelatins. Among the gelatin groups, significant differences in characteristics such as gel strength and physicochemical properties were observed based not only on the raw material but also on the extraction method.

Keywords: chicken skin, fish skin, food industry, frog skin, gel strength

Procedia PDF Downloads 161
1098 The Removal of Commonly Used Pesticides from Wastewater Using Golden Activated Charcoal

Authors: Saad Mohamed Elsaid Onaizah

Abstract:

One of the reasons for the intensive use of pesticides is to protect agricultural crops and orchards from pests and agricultural worms. The period of time that pesticides remain in the soil is estimated at about 2 to 12 weeks. Perhaps the most important cause of groundwater pollution is the easy leakage of these harmful pesticides from the soil into aquifers. This research aims to find the best way to use activated charcoal treated with gold nitrate solution for the purpose of removing lethal pesticides from aqueous solution by the adsorption phenomenon. The pesticides most used in Egypt were selected, namely Malathion, Methomyl, Abamectin, and Thiamethoxam. Activated charcoal doped with gold ions was prepared by applying chemical and thermal treatments to activated charcoal using gold nitrate solution. Adsorption of the studied pesticides onto the activated carbon/Au was mainly chemical adsorption, forming complexes with the gold metal immobilised on the activated carbon surface. Gold atoms were also considered to act as a catalyst for cracking the pesticide molecules. Gold-treated activated charcoal is a low-cost material owing to the use of very low concentrations of gold nitrate solution. A great ability of the activated charcoal to remove the selected pesticides was noticed, due to the positive charge of the gold ion in addition to other active groups such as oxygen-containing functional groups and lignin cellulose. The presence of pores of different sizes on the surface of the activated charcoal is the driving force behind the good adsorption efficiency for the removal of the pesticides under study. The surface area of the prepared charcoal as well as its active groups were determined using infrared spectroscopy and scanning electron microscopy. Factors affecting the performance of the activated charcoal were varied in order to reach its highest adsorption capacity, such as the weight of the charcoal, the concentration of the pesticide solution, the contact time, and the pH. Batch adsorption experiments showed that maximum adsorption of the selected pesticides was reached at a contact time of 80 minutes and a pH of 7.70. These promising results were confirmed by equilibrium, kinetic, and thermodynamic studies of the effect of the various operating factors, establishing the practical applicability of the developed system; the Langmuir model described the adsorbent well, with adsorption capacities higher than those of most other adsorbents.
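A minimal sketch follows, with invented equilibrium data, of fitting the Langmuir isotherm q_e = q_max K_L C_e / (1 + K_L C_e), which the abstract reports as describing the adsorbent well; the concentrations and uptakes are illustrative, not the study's measurements.

```python
# Minimal sketch: fitting the Langmuir isotherm to invented batch-adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: adsorbed amount q_e as a function of equilibrium concentration."""
    return q_max * k_l * c_e / (1 + k_l * c_e)

c_e = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])    # mg/L at equilibrium (assumed)
q_e = np.array([8.5, 17.0, 27.0, 36.0, 42.0, 45.5])   # mg/g adsorbed (invented)

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=[50, 0.1])
print(f"q_max ≈ {q_max:.1f} mg/g, K_L ≈ {k_l:.3f} L/mg")
```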

Keywords: wastewater, pesticide pollution, adsorption, activated carbon

Procedia PDF Downloads 78
1097 NFTs, between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case

Authors: Andrea Ando

Abstract:

The development of the blockchain has been a major innovation in the technology field. It opened the door to the creation of novel cyberassets and currencies. More recently, non-fungible tokens (NFTs) have come to the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest development in the world of distributed ledger technologies and cryptocurrencies. It seems more and more likely that NFTs will play an increasingly important role in our online interactions, and they are indeed increasingly present in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but it is very likely that they will mark a turning point in the world of digital assets. Some examples illustrate their peculiar behaviour and effect on the contemporary tech market: the former CEO of the social media site Twitter sold an NFT of his first tweet for around £2.1 million ($2.5 million), and the National Basketball Association has created a platform to sell unique moments and memorabilia from the history of basketball through non-fungible token technology. As might be expected, their growth paved the way for civil disputes, mostly regarding their position under the current intellectual property law of each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered property. The judge, indeed, concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court does not provide enough answers to the dilemma of whether minting an NFT violates intellectual property and/or property rights. Indeed, even if the technology follows the framework set by the case law (e.g., the four criteria of Ainsworth), the question that arises is what is effectively protected and owned by the creator and the purchaser: whether a person owns the cryptographic code itself, which is indeed definable, identifiable, intangible, distinct, and has a degree of permanence, or whatever is attached to this blockchain, which may even be a physical object or piece of art. Indeed, a simple code would not have any financial importance if it were not attached to something that is widely recognised as valuable. This was demonstrated first through an analysis of the expectations of intellectual property law. Then, after having laid the foundation, the paper examined the OpenSea case and, finally, analysed whether those expectations were met.

Keywords: technology, technology law, digital law, cryptoassets, NFTs, NFT, property law, intellectual property law, copyright law

Procedia PDF Downloads 88
1096 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises

Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska

Abstract:

Research background: Problems related to occupational health and a decreasing level of safety occur commonly in the construction industry. An important factor in occupational safety in the construction industry is scaffold use. All scaffolds used in construction, renovation, and demolition shall be erected, dismantled, and maintained in accordance with safety procedures. Increasing demand for new construction projects is unfortunately still linked to a high level of occupational accidents. Therefore, it is crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; the way the assessment is done and its reliability are critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when taking decisions regarding risk assessment, may show a lack of reliability in checking the results of the decisions taken. Purpose of the article: The aim was to indicate crucial parameters that could be modeled with a Risk Assessment Model (RAM) for improving building enterprise productivity and/or development potential and safety climate. The developed RAM could be of benefit for predicting high-risk construction activities and thus preventing accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. This project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, in which over 450 individual research cycles were carried out. The research trials included variable conditions of employee exposure to harmful physical and chemical factors, variable levels of employee stress, and differences in the behaviors and habits of staff. A genetic modeling tool was used for developing the RAM. Findings and value added: Common types of trades, accidents, and accident causes have been explored, in addition to suitable risk assessment methods and criteria. We have found that the initial worker stress level is a more direct predictor of the unsafe chain leading to an accident than the workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.

Keywords: safety climate, occupational health, civil engineering, productivity

Procedia PDF Downloads 318
1095 Visualization of Chinese Genealogies with Digital Technology: A Case of Genealogy of Wu Clan in the Village of Gaoqian

Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu

Abstract:

Recording history is a tradition in ancient China. A record of a dynasty makes a dynastic history; a record of a locality makes a chorography; and a record of a clan makes a genealogy. The three combined depict a complete national history of China, both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to family trees or pedigrees in early and medieval historical times. After the Song Dynasty, civil society gradually emerged, and the Emperor had to allow people from the same clan to live together and hold ancestor worship activities; thereafter, the compilation of genealogies became popular in society. Since then, genealogies, regarded even today as being as important as ancestral and religious temples in a traditional village, have played a primary role in the identification of a clan and in maintaining local social order. Chinese genealogies are rich in documentary materials. Take the Genealogy of the Wu Clan in Gaoqian as an example. Gaoqian is a small village in Xianju County of Zhejiang Province. The Genealogy of the Wu Clan in Gaoqian is composed of a whole set of materials, from the Foreword to the Family Trees, Family Rules, Family Rituals, Family Graces and Glories, Ode to an Ancestor's Portrait, Manual for the Ancestor Temple, documents about great men in the clan, works written by learned men in the clan, contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information on every aspect recorded according to stylistic rules, is indeed the carrier of the entire culture of a clan. However, due to their scarcity and the difficulty of reading them, genealogies seldom come within the horizon of ordinary people. This paper, focusing on the case of the Genealogy of the Wu Clan in the Village of Gaoqian, intends to reproduce a digital genealogy by use of ICTs, through an in-depth interpretation of the literature and field investigation in Gaoqian Village. Based on this, the paper goes further to explore general methods for transferring physical genealogies into digital ones and ways of visualizing the clanism culture embedded in the genealogies with a combination of digital technologies such as family tree software, multimedia narratives, animation design, GIS applications, and e-book creators.

Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS

Procedia PDF Downloads 220
1094 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small, which leads to most of the information being neglected. In addition, the complete data might be strongly distorted, and the reason that data is missing might itself contain information, which is ignored under that approach. An interesting issue is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (the baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set, the complete data, vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on imputed data sets do not differ significantly from each other; however, the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
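A minimal sketch of the imputation step described above follows, on toy search-subscription data and using scikit-learn's IterativeImputer as one example of an iterative model-based imputer; the algorithms and parameters actually used in the study are not specified here, so this is illustrative only.

```python
# Minimal sketch: iterative model-based imputation of missing subscription fields.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the class below)
from sklearn.impute import IterativeImputer

# Columns: desired rooms, max price (CHF), region code; NaN = field left unspecified
subscriptions = np.array([
    [3.0, 2500.0, 1.0],
    [np.nan, 1800.0, 2.0],
    [4.0, np.nan, 1.0],
    [2.0, 1500.0, np.nan],
])

imputer = IterativeImputer(max_iter=20, random_state=0)
completed = imputer.fit_transform(subscriptions)
print(completed)  # all rows now usable for the downstream demand estimation
```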

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 285
1093 A Systematic Review on Factors/Predictors and Outcomes of Parental Distress in Childhood Acute Lymphoblastic Leukemia

Authors: Ana Ferraz, Martim Santos, M. Graça Pereira

Abstract:

Distress among parents of children with acute lymphoblastic leukemia (ALL) is common during treatment and can persist several years post-diagnosis, impacting the adjustment of children and of the parents themselves. Current evidence is needed to examine the scope and nature of parental distress in childhood ALL. This review focused on associated variables, predictors, and outcomes of parental distress following the ALL diagnosis of their child. The PubMed, Web of Science, and PsycINFO databases were searched for English and Spanish papers published from 1983 to 2021. The PRISMA statement was followed, and papers were evaluated with a standardized methodological quality assessment tool (NHLBI). Of the 28 papers included, 16 were evaluated as fair, eight as good, and four as poor. Regarding results, 11 papers reported subgroup differences, and 15 found potential predictors of parental distress, including sociodemographic, psychosocial, psychological, family, health, and ALL-specific variables. Significant correlations were found between parental distress, social support, illness cognitions, and resilience, as well as contradictory results regarding the impact of sociodemographic variables on parental distress. Family cohesion and caregiver burden were associated with distress, and the use of healthy coping strategies was associated with less anxiety. Caregiver strain contributed to distress, and the overall impact of illness positively predicted anxiety in mothers and somatization in fathers. Differences in parental distress were found with regard to risk group, time since diagnosis, and treatment phase. Thirteen papers explored the effects of parental distress on psychological, family, health, and social/educational outcomes. Parental distress was the most important predictor of family strain. Significant correlations were found between parental distress at diagnosis and the further psychological adjustment of parents themselves and of their children. Most papers reported correlations between parental distress and children's adjustment and quality of life, although a few studies reported no association. Correlations between maternal depression and child participation in education and social life were also found. Longitudinal studies are needed to better understand parental distress and its consequences, on health outcomes in particular. Future interventions should focus mainly on parents, targeting distress reduction and psychological adjustment in both parents and children over time.

Keywords: childhood acute lymphoblastic leukemia, family, parental distress, psychological adjustment, quality of life

Procedia PDF Downloads 107
1092 Enhancing the Aussie Optimism Positive Thinking Skills Program: Short-Term Effects on Anxiety and Depression in Youth Aged 9-11 Years

Authors: Rosanna M. Rooney, Sharinaz Hassan, Maryanne McDevitt, Jacob D. Peckover, Robert T. Kane

Abstract:

Anxiety and depression are the most common mental health problems experienced by Australian children and adolescents. Research into youth mental health points to the importance of considering emotional competence, parental influence on the child's emotional development, and the fact that cognitions are still developing in childhood when designing and implementing positive psychology interventions. Additionally, research into such interventions has suggested that the inclusion of a coaching component aimed at supporting those implementing the intervention enhances its effects. In light of these findings, and given the burden of anxiety and depression in the longer term, it is necessary to enhance the Aussie Optimism Positive Thinking Skills program and evaluate its efficacy in terms of children's mental health outcomes. It was expected that the enhancement of the emotional and cognitive aspects of the Aussie Optimism Positive Thinking Skills program, the addition of coaching, and the inclusion of a parent manual would lead to significant prevention effects on internalizing problems at post-test and at 6 and 18 months after the completion of the intervention. A total of 502 students (aged 9-11 years) were randomly assigned to the intervention group (n = 347) or the control group (n = 155). At each time point (baseline, post-test, 6-month follow-up, and 18-month follow-up), students completed a battery of self-report measures. The ten intervention sessions making up the enhanced Aussie Optimism Positive Thinking Skills program were run weekly. At post-test and the 6-month follow-up, the intervention group reported significantly lower depression than the control group, with no group differences at the 18-month follow-up. The intervention group reported significantly lower anxiety than the control group only at the 6-month follow-up, with no group differences at post-test or at the 18-month follow-up. Results suggest that the enhanced Aussie Optimism Positive Thinking Skills program can reduce depressive and anxious symptoms in the short term and highlight the importance of universally implemented positive psychology interventions.

Keywords: positive psychology, emotional competence, internalizing symptoms, universal implementation

Procedia PDF Downloads 68
1091 Economics of Precision Mechanization in Wine and Table Grape Production

Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka

Abstract:

The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S. and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify a maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in five states and a representative table grape vineyard in California. The panels provided production, budget, and financial information that is typical of vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budgets, 10-year projected financial statements have been developed for the representative vineyards and evaluated using a stochastic simulation model approach. Labor costs for selected vineyard production tasks were evaluated for the potential of new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members and the extent to which the development of new technology was deemed feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed for the calculation of a maximum price for new technology whereby the NPV of labor costs would equal the NPV of purchasing, owning, and operating the new technology. Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. Investigators have developed a preliminary list of production tasks that have potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.
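A minimal sketch follows, with assumed costs and discount rate, of the break-even logic described above: the maximum justifiable technology price is the one whose ownership and operating NPV equals the NPV of the labor it replaces over the seven-year horizon. None of the figures are from the study.

```python
# Minimal sketch: break-even technology price from the NPV of displaced labor.
def npv(cash_flows, rate):
    """Net present value of year-1..year-n cash flows at the given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

discount_rate = 0.06                 # assumed discount rate
annual_labor_cost = 1200.0           # assumed $/acre for one production task
annual_operating_cost = 150.0        # assumed $/acre to run the new technology

labor_npv = npv([annual_labor_cost] * 7, discount_rate)
operating_npv = npv([annual_operating_cost] * 7, discount_rate)

# Purchase is assumed to happen up front (year 0), so the break-even purchase
# price is the difference between the two discounted streams.
max_price = labor_npv - operating_npv
print(f"Maximum justifiable purchase price: ${max_price:,.0f} per acre")
```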

Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes

Procedia PDF Downloads 258
1090 Isolation, Characterization, and Antibacterial Evaluation of Antimicrobial Peptides and Derivatives from Fly Larvae Sarconesiopsis magellanica (Diptera: Calliphoridae)

Authors: A. Díaz-Roa, P. I. Silva Junior, F. J. Bello

Abstract:

Sarconesiopsis magellanica (Diptera: Calliphoridae) is a medically important necrophagous fly which is used for establishing the post-mortem interval. Dipterous maggots release diverse proteins and peptides contained in larval excretion and secretion (ES) products, which play a key role in digestion. The most important mechanism for combating infection using larval therapy depends on larval ES. These larvae are protected against infection by a diverse spectrum of antimicrobial peptides (AMPs), one of which, lucifensin, is already known. Special interest in these peptides has also been aroused regarding their role in wound healing, since they degrade necrotic tissue and kill different bacteria during larval therapy. The action of larvae on wounds occurs through three mechanisms: removal of necrotic tissue, stimulation of granulation tissue, and the antibacterial action of larval ES. Some components of the ES include calcium, urea, allantoin, and ammonium bicarbonate, which reduce the viability of Gram-positive and Gram-negative bacteria. Larvae of the fly Lucilia sericata have been the most used; however, new species need to be evaluated that could potentially be as effective as, or more effective than, this fly. This study was thus aimed at identifying and characterizing, for the first time, S. magellanica AMPs contained in ES products and comparing them with those of the commonly used fly L. sericata. These products were obtained from third-instar larvae taken from a previously established colony. For the first analysis, ES fractions were separated on Sep-Pak C18 disposable columns (first step). The material obtained was fractionated by RP-HPLC using a Júpiter C18 semi-preparative column. The products were then lyophilized, and their antimicrobial activity was characterized by incubation with different bacterial strains. The first chromatographic analysis of ES from L. sericata gave 6 fractions with antimicrobial activity against the Gram-positive bacterium Micrococcus luteus and 3 fractions with activity against the Gram-negative bacterium Pseudomonas aeruginosa, while that from S. magellanica gave 1 fraction against M. luteus and 4 against P. aeruginosa. One of these fractions may correspond to the peptide already known from L. sericata. These results constitute the first work supporting further experiments aimed at validating the use of S. magellanica in larval therapy. We still need to determine whether new molecules are present, by performing mass spectrometry and de novo sequencing. Further studies are necessary to identify and characterize them in order to better understand their functioning.

Keywords: antimicrobial peptides, larval therapy, Lucilia sericata, Sarconesiopsis magellanica

Procedia PDF Downloads 366
1089 Leadership and Corporate Social Responsibility: The Role of Spiritual Intelligence

Authors: Meghan E. Murray, Carri R. Tolmie

Abstract:

This study aims to identify potential factors and widely applicable best practices that can contribute to improving corporate social responsibility (CSR) and corporate performance for firms by exploring the relationship between transformational leadership, spiritual intelligence, and emotional intelligence. Corporate social responsibility means that companies are cognizant of the impact of their actions on the economy, their communities, the environment, and the world as a whole while executing business practices accordingly. The prevalence of CSR has continuously strengthened over the past few years, and it is now a common practice in the business world, with such efforts coinciding with what stakeholders and the public now expect from corporations. Because of this, it is extremely important to be able to pinpoint factors and best practices that can improve CSR within corporations. One potential factor that may lead to improved CSR is spiritual intelligence (SQ), or the ability to recognize and live with a purpose larger than oneself. Spiritual intelligence is a measurable skill, just like emotional intelligence (EQ), and can be improved through purposeful and targeted coaching. This research project consists of two studies. Study 1 is a case study comparison of a benefit corporation and a non-benefit corporation. This study will examine the role of SQ and EQ as moderators in the relationship between the transformational leadership of employees within each company and the perception of each firm's CSR and corporate performance. The project methodology includes creating and administering a survey composed of multiple pre-established scales on transformational leadership, spiritual intelligence, emotional intelligence, CSR, and corporate performance. Multiple regression analysis will be used to extract significant findings from the collected data. Study 2 will dive deeper into spiritual intelligence itself by analyzing pre-existing data and identifying key relationships that may provide value to companies and their stakeholders. This will be done by performing multiple regression analysis on anonymized data provided by Deep Change, a company that has created an advanced, proprietary system to measure spiritual intelligence. Based on the results of both studies, this research aims to uncover best practices, including the unique contribution of spiritual intelligence, that can be utilized by organizations to help enhance their corporate social responsibility. If it is found that high spiritual and emotional intelligence can positively impact CSR efforts, then corporations will have a tangible way to enhance their CSR: providing targeted employees with training and coaching to increase their SQ and EQ.
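A minimal sketch follows, on simulated data with hypothetical variable names, of the kind of moderation analysis implied in Study 1: an OLS model in which the TL x SQ interaction term tests whether spiritual intelligence moderates the link between transformational leadership and perceived CSR. It is not the authors' specified model.

```python
# Minimal sketch: testing a moderation (interaction) effect with statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "TL": rng.normal(size=n),   # transformational leadership score (simulated)
    "SQ": rng.normal(size=n),   # spiritual intelligence score (simulated)
})
# Simulated outcome with a built-in interaction, for illustration only
df["CSR"] = 0.4 * df.TL + 0.3 * df.SQ + 0.2 * df.TL * df.SQ + rng.normal(scale=0.5, size=n)

# "TL * SQ" expands to TL + SQ + TL:SQ; the TL:SQ coefficient carries the moderation effect
model = smf.ols("CSR ~ TL * SQ", data=df).fit()
print(model.summary())
```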

Keywords: corporate social responsibility, CSR, corporate performance, emotional intelligence, EQ, spiritual intelligence, SQ, transformational leadership

Procedia PDF Downloads 126
1088 Overview of Environmental and Economic Theories of the Impact of Dams in Different Regions

Authors: Ariadne Katsouras, Andrea Chareunsy

Abstract:

The number of large hydroelectric dams in the world increased from almost 6,000 in the 1950s to over 45,000 in 2000. Dams are often built to increase the economic development of a country. This can occur in several ways. Large dams take many years to build, so the construction process employs many people for a long time, and the increased production and income can flow on into other sectors of the economy. Additionally, the provision of electricity can help raise people's living standards, and if the electricity is sold to another country, then the money can be used to provide other public goods for the residents of the country that owns the dam. Dams are also built to control flooding and provide irrigation water; most dams are of these types. This paper will give an overview of the environmental and economic theories of the impact of dams in different regions of the world. There is a difference in the degree of environmental and economic impacts due to the varying climates and varying social and political factors of the regions. Production of greenhouse gases from the dam's reservoir, for instance, tends to be higher in tropical areas than in Nordic environments. However, there are also common impacts due to the construction of the dam itself, such as the flooding of land for the creation of the reservoir and the displacement of local populations. Economically, the local population tends to benefit least from the construction of the dam. Additionally, if a foreign company owns the dam or the government subsidises the cost of electricity to businesses, then the funds from electricity production do not benefit the residents of the country the dam is built in. So, in the end, dams can benefit a country economically, but the varying factors related to their construction, and how these are dealt with, determine the level of benefit, if any. Some of the theories or practices used to evaluate the potential value of a dam include cost-benefit analysis, environmental impact assessments, and regressions. Systems analysis is also a useful method. While these approaches have value, there are also possible shortcomings. Cost-benefit analysis converts all the costs and benefits to dollar values, which can be problematic. Environmental impact assessments, likewise, can be incomplete, especially if the assessment does not include feedback effects, that is, if it only considers the initial impact. Finally, regression analysis is dependent on the available data and again would not necessarily include feedbacks. Systems analysis is a method that allows more complex modelling of the environment and the economic system. It would allow a clearer picture of the impacts to emerge and can include a long time frame.

Keywords: comparison, economics, environment, hydroelectric dams

Procedia PDF Downloads 196
1087 Effects of Concomitant Use of Metformin and Powdered Moringa Oleifera Leaves on Glucose Tolerance in Sprague-Dawley Rats

Authors: Emielex M. Aguilar, Kristen Angela G. Cruz, Czarina Joie L. Rivera, Francis Dave C. Tan, Gavino Ivan N. Tanodra, Dianne Katrina G. Usana, Mary Grace T. Valentin, Nico Albert S. Vasquez, Edwin Monico C. Wee

Abstract:

The risk of diabetes mellitus is increasing in the Philippines, with Metformin and insulin the drugs commonly used for its management. The use of herbal medicines has grown increasingly, especially among the elderly population. Moringa oleifera, or malunggay, is one of the most common plants in the country, and several studies have shown the plant to exhibit a hypoglycemic property attributed to its flavonoid content. This study aims to investigate the possible effects of the concomitant use of Metformin and powdered M. oleifera leaves (PMOL) on blood glucose levels. Twenty male Sprague-Dawley rats were equally distributed into four groups. Fasting blood glucose levels of the rats were measured prior to experimentation. The following treatments were administered to the four groups, respectively: glucose only 2 g/kg; glucose 2 g/kg + Metformin 100 mg/kg; glucose 2 g/kg + PMOL 200 mg/kg; and glucose 2 g/kg + PMOL 200 mg/kg and Metformin 100 mg/kg. Blood glucose levels were determined at the 1st, 2nd, 3rd, and 4th hours post-treatment and compared between groups. Statistical analysis showed that the type of intervention had no significant effect on the reduction of blood glucose levels when compared across groups (p = 0.378), while the effect of time was significant (p = 0.000). The interaction between the type of intervention and the time of blood glucose measurement was significant (p = 0.024). Within each group, the control and PMOL-treated groups showed significant reductions in blood glucose levels over time, with p-values of 0.000 and 0.000, respectively, while the Metformin-treated and combination groups had p-values of 0.062 and 0.093, respectively, which are not significant. The descriptive data also showed that the mean total reduction in blood glucose levels of the Metformin and PMOL combination group was lower than that of the PMOL-treated group alone, while it was higher than that of the Metformin-treated group alone. Based on the results obtained, the combination of Metformin and PMOL did not significantly lower the blood glucose levels of the rats as compared to the other groups. However, Metformin and PMOL used concomitantly may affect each other's blood-glucose-lowering activity. Additionally, a prolonged time of exposure and a delay in the first blood glucose measurement after treatment could have a significant effect on blood glucose levels. Further studies are recommended regarding the effects of the concomitant use of the two agents on blood glucose levels.
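The abstract does not name the exact statistical test used, so the following is only a minimal sketch, on simulated data, of one reasonable way to examine intervention and time effects together with their interaction: a mixed-effects model with a random intercept per rat.

```python
# Minimal sketch (simulated data): group x time analysis of repeated glucose
# measurements with a random intercept per animal.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
groups = ["glucose", "metformin", "pmol", "combo"]
rows = []
for g_idx, group in enumerate(groups):
    for rat in range(5):                       # 5 rats per group
        baseline = 180 + rng.normal(0, 10)
        for hour in range(1, 5):
            # invented decay pattern that differs slightly by group
            glucose = baseline - (10 + 3 * g_idx) * hour + rng.normal(0, 8)
            rows.append({"rat": f"{group}_{rat}", "group": group,
                         "hour": hour, "glucose": glucose})
df = pd.DataFrame(rows)

model = smf.mixedlm("glucose ~ C(group) * hour", df, groups=df["rat"]).fit()
print(model.summary())   # the interaction terms test group-by-time effects
```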

Keywords: blood glucose levels, concomitant use, metformin, Moringa oleifera

Procedia PDF Downloads 412
1086 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources

Authors: Alana E. Jansen

Abstract:

With such a large proportion of organisations relying on the use of team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, there have been conflicting findings as to the effects, particularly when examined alongside other team factors. To broaden the theoretical perspectives on this topic and potentially explain some of the inconsistencies in research findings left open by various other models of team effectiveness and high-performing teams, the present study aims to use the Job Demands-Resources model, typically applied to burnout and engagement, as a framework to examine how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. This study used a virtual interview design in which participants were asked to both rate and describe their experiences, in one high-performing and one low-performing team, on several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed, which combined Likert-style and exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and those from different specialties, roles, and educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analysis are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 interviews being completed. The analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity. Preliminary results suggest that high-performing and low-performing teams differ in their perceptions of the type and range of both demands and resources. The current research is likely to offer contributions to both theory and practice. The preliminary findings suggest there is a range of demands and resources which vary between high- and low-performing teams, factors which may play an important role in team effectiveness research going forward. Findings may assist in explaining some of the more complex interactions between factors experienced in the team environment, making further progress towards understanding the intricacies of why only some teams achieve high-performance status.

Keywords: diversity, high-performing teams, job demands and resources, team effectiveness

Procedia PDF Downloads 186
1085 Computational Simulations and Assessment of the Application of Non-Circular TAVI Devices

Authors: Jonathon Bailey, Neil Bressloff, Nick Curzen

Abstract:

Transcatheter Aortic Valve Implantation (TAVI) devices are stent-like frames with prosthetic leaflets on the inside, which are percutaneously implanted. The device in a crimped state is fed through the arteries to the aortic root, where the device frame is opened through either self-expansion or balloon expansion, which reveals the prosthetic valve within. The frequency at which TAVI is being used to treat aortic stenosis is rapidly increasing. In time, TAVI is likely to become the favoured treatment over Surgical Valve Replacement (SVR). Mortality after TAVI has been associated with severe Paravalvular Aortic Regurgitation (PAR). PAR occurs when the frame of the TAVI device does not make an effective seal against the internal surface of the aortic root, allowing blood to flow backwards about the valve. PAR is common in patients and has been reported to some degree in as much as 76% of cases. Severe PAR (grade 3 or 4) has been reported in approximately 17% of TAVI patients resulting in post-procedural mortality increases from 6.7% to 16.5%. TAVI devices, like SVR devices, are circular in cross-section as the aortic root is often considered to be approximately circular in shape. In reality, however, the aortic root is often non-circular. The ascending aorta, aortic sino tubular junction, aortic annulus and left ventricular outflow tract have an average ellipticity ratio of 1.07, 1.09, 1.29, and 1.49 respectively. An elliptical aortic root does not severely affect SVR, as the leaflets are completely removed during the surgical procedure. However, an elliptical aortic root can inhibit the ability of the circular Balloon-Expandable (BE) TAVI devices to conform to the interior of the aortic root wall, which increases the risk of PAR. Self-Expanding (SE) TAVI devices are considered better at conforming to elliptical aortic roots, however the valve leaflets were not designed for elliptical function, furthermore the incidence of PAR is greater in SE devices than BE devices (19.8% vs. 12.2% respectively). If a patient’s aortic root is too severely elliptical, they will not be suitable for TAVI, narrowing the treatment options to SVR. It therefore follows that in order to increase the population who can undergo TAVI, and reduce the risk associated with TAVI, non-circular devices should be developed. Computational simulations were employed to further advance our understanding of non-circular TAVI devices. Radial stiffness of the TAVI devices in multiple directions, frame bending stiffness and resistance to balloon induced expansion are all computationally simulated. Finally, a simulation has been developed that demonstrates the expansion of TAVI devices into a non-circular patient specific aortic root model in order to assess the alterations in deployment dynamics, PAR and the stresses induced in the aortic root.

Keywords: tavi, tavr, fea, par, fem

Procedia PDF Downloads 437
1084 Effect of a Synthetic Platinum-Based Complex on Autophagy Induction in Leydig TM3 Cells

Authors: Ezzati Givi M., Hoveizi E., Nezhad Marani N.

Abstract:

Platinum-based anticancer therapeutics are the most widely used drugs in clinical chemotherapy but have major limitations and various side effects in clinical applications. Gonadotoxicity and sterility is one of the most common complications for cancer survivors, which seem to be drug-specific and dose-related. Therefore, many efforts have been dedicated to discovering a new structure of platinum-based anticancer agents with improved therapeutic index, fewer side effects. In this regard, new Pt(II)-phosphane complexes containing heterocyclic thionate ligands (PCTL) have been synthesized, which show more potent antitumor activities in comparison to cisplatin. Cisplatin, the best leading metal-based antitumor drug in the field, induces testicular toxicity on Leydig and Sertoli cells leading to serious side effects such as azoospermia and infertility. Therefore in the present study, we aimed to investigate the cytotoxicity effect of PCTL on mice TM4 Sertoli cells with particular emphasis on the role of autophagy in comparison to cisplatin. In this study, an MTT assay was performed to evaluate the IC50 of PCTL and to analyze the TM3 Leydig cell's viability. Cells morphology was evaluated via invert microscope and Changing in morphology for nuclei swelling or autophagic vacuoles formation were assessed by DAPI and MDC staining. Testosterone production in the culture medium was measured using an ELISA kit. Finally, the expression of Autophagy-related genes, Atg5, Beclin1 and p62, were analyzed by qPCR. Based on the obtained results by MTT, the IC50 value of PCTL was 50 μM in TM3 cells and cytotoxic effects was in a dose- and time-dependent manner. Cells morphological changes investigated by inverted microscopy, DAPI, and MDC staining which showed the cytotoxic concentrations of PCTL was significantly higher than cisplatin in the treated TM3 Leydig cells. The results of PCR showed a lack of expression of the p62, Atg5 and Beclin1 gene in TM3 cells treated with PCTL in comparison to cisplatin and control groups. It should be noted that the effects of 25 μM PCTL concentration on TM3 cells have been associated with increased testosterone production and secretion, which requires further study to explain the possible causes and involved molecular mechanisms. The results of the study showed that the PCTL had less-lethal effects on TM3 cells in comparison to cisplatin and probably did not induce autophagy in TM3 cells.

Keywords: platinum-based anticancer agents, cisplatin, Leydig TM3 cells, autophagy

Procedia PDF Downloads 127
1083 IL6/PI3K/mTOR/GFAP Molecular Pathway Role in COVID-19-Induced Neurodegenerative Autophagy, Impacts and Relatives

Authors: Mohammadjavad Sotoudeheian

Abstract:

COVID-19, which began in December 2019, uses the angiotensin-converting enzyme 2 (ACE2) receptor to enter and spread through the cells. ACE2 mRNA is present in almost every organ, including nasopharynx, lung, as well as the brain. Ports of entry of SARS-CoV-2 into the central nervous system (CNS) may include arterial circulation, while viremia is remarkable. However, it is imperious to develop neurological symptoms evaluation CSF analysis in patients with COVID-19, but theoretically, ACE2 receptors are expressed in cerebellar cells and may be a target for SARS-CoV-2 infection in the brain. Recent evidence agrees that SARS-CoV-2 can impact the brain through direct and indirect injury. Two biomarkers for CNS injury, glial fibrillary acidic protein (GFAP) and neurofilament light chain (NFL) detected in the plasma of patients with COVID-19. NFL, an axonal protein expressed in neurons, is related to axonal neurodegeneration, and GFAP is over-expressed in CNS inflammation. GFAP cytoplasmic accumulation causes Schwan cells to misfunction, so affects myelin generation, reduces neuroskeletal support over NfLs during CNS inflammation, and leads to axonal degeneration. Interleukin-6 (IL-6), which extensively over-express due to interleukin storm during COVID-19 inflammation, regulates gene expression, as well as GFAP through STAT molecular pathway. IL-6 also impresses the phosphoinositide 3-kinase (PI3K)/STAT/smads pathway. The PI3K/ protein kinase B (Akt) pathway is the main modulator upstream of the mammalian target of rapamycin (mTOR), and alterations in this pathway are common in neurodegenerative diseases. Most neurodegenerative diseases show a disruption of autophagic function and display an abnormal increase in protein aggregation that promotes cellular death. Therefore, induction of autophagy has been recommended as a rational approach to help neurons clear abnormal protein aggregates and survive. The mTOR is a major regulator of the autophagic process and is regulated by cellular stressors. The mTORC1 pathway and mTORC2, as complementary and important elements in mTORC1 signaling, have become relevant in the regulation of the autophagic process and cellular survival through the extracellular signal-regulated kinase (ERK) pathway.

Keywords: mTORC1, COVID-19, PI3K, autophagy, neurodegeneration

Procedia PDF Downloads 85
1082 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys are usually accompanied with low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation due to relatively low heat conductivity of this metal. The cooling applications in machining processes are crucial as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic are the most common cutting fluids in the machining industry. Although, these cutting fluids are beneficial in the industries, they pose a great threat to human health and ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricous properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. Fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and high load carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials, working conditions was investigated. The full factorial experimental design was employed involving 24 tests to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm and Vasco1000 cutting fluid presented comparable performance with other fluids in terms of surface roughness while uncoated coarse grain WC carbide tool achieved lower flank wear at all cutting speeds. On the other hand, all tools tips were subjected to uniform flank wear during whole cutting trails. Additionally, formed chip thickness ranged between 0.1 and 0.14 mm with a noticeable decrease in chip size when higher cutting speed was used.

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 276
1081 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when it comes from server logs. Unfortunately, there are cases in which the data to be mined is the data that is displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components that are typically configured by means of rules that are tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires to evaluate and compare the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4 200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which leads to difficulties to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents. b) It provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work; c) We provide a novel method to compute the performance measures regarding unsupervised proposals; otherwise they would require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but from an empirical point of view; it will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss on our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.

Keywords: web information extractors, information extraction evaluation method, Google scholar, web

Procedia PDF Downloads 247
1080 Study of Oxidative Processes in Blood Serum in Patients with Arterial Hypertension

Authors: Laura M. Hovsepyan, Gayane S. Ghazaryan, Hasmik V. Zanginyan

Abstract:

Hypertension (HD) is the most common cardiovascular pathology that causes disability and mortality in the working population. Most often, heart failure (HF), which is based on myocardial remodeling, leads to death in hypertension. Recently, endothelial dysfunction (EDF) or a violation of the functional state of the vascular endothelium has been assigned a significant role in the structural changes in the myocardium and the occurrence of heart failure in patients with hypertension. It has now been established that tissues affected by inflammation form increased amounts of superoxide radical and NO, which play a significant role in the development and pathogenesis of various pathologies. They mediate inflammation, modify proteins and damage nucleic acids. The aim of this work was to study the processes of oxidative modification of proteins (OMP) and the production of nitric oxide in hypertension. In the experimental work, the blood of 30 donors and 33 patients with hypertension was used. For the quantitative determination of OMP products, the based on the reaction of the interaction of oxidized amino acid residues of proteins and 2,4-dinitrophenylhydrazine (DNPH) with the formation of 2,4-dinitrophenylhydrazones, the amount of which was determined spectrophotometrically. The optical density of the formed carbonyl derivatives of dinitrophenylhydrazones was recorded at different wavelengths: 356 nm - aliphatic ketone dinitrophenylhydrazones (KDNPH) of neutral character; 370 nm - aliphatic aldehyde dinirophenylhydrazones (ADNPH) of neutral character; 430 nm - aliphatic KDNFG of the main character; 530 nm - basic aliphatic ADNPH. Nitric oxide was determined by photometry using Grace's solution. Adsorption was measured on a Thermo Scientific Evolution 201 SF at a wavelength of 546 nm. Thus, the results of the studies showed that in patients with arterial hypertension, an increased level of nitric oxide in the blood serum is observed, and there is also a tendency to an increase in the intensity of oxidative modification of proteins at a wavelength of 270 nm and 363 nm, which indicates a statistically significant increase in aliphatic aldehyde and ketone dinitrophenylhydrazones. The increase in the intensity of oxidative modification of blood plasma proteins in the studied patients, revealed by us, actually reflects the general direction of free radical processes and, in particular, the oxidation of proteins throughout the body. A decrease in the activity of the antioxidant system also leads to a violation of protein metabolism. The most important consequence of the oxidative modification of proteins is the inactivation of enzymes.

Keywords: hypertension (HD), oxidative modification of proteins (OMP), nitric oxide (NO), oxidative stress

Procedia PDF Downloads 107
1079 Fabrication of Al/Al2O3 Functionally Graded Composites via Centrifugal Method by Using a Polymeric Suspension

Authors: Majid Eslami

Abstract:

Functionally graded materials (FGMs) exhibit heterogeneous microstructures in which the composition and properties gently change in specified directions. The common type of FGMs consist of a metal in which ceramic particles are distributed with a graded concentration. There are many processing routes for FGMs. An important group of these methods is casting techniques (gravity or centrifugal). However, the main problem of casting molten metal slurry with dispersed ceramic particles is a destructive chemical reaction between these two phases which deteriorates the properties of the materials. In order to overcome this problem, in the present investigation a suspension of 6061 aluminum and alumina powders in a liquid polymer was used as the starting material and subjected to centrifugal force for making FGMs. The size rang of these powders was 45-63 and 106-125 μm. The volume percent of alumina in the Al/Al2O3 powder mixture was in the range of 5 to 20%. PMMA (Plexiglas) in different concentrations (20-50 g/lit) was dissolved in toluene and used as the suspension liquid. The glass mold contaning the suspension of Al/Al2O3 powders in the mentioned liquid was rotated at 1700 rpm for different times (4-40 min) while the arm length was kept constant (10 cm) for all the experiments. After curing the polymer, burning out the binder, cold pressing and sintering , cylindrical samples (φ=22 mm h=20 mm) were produced. The density of samples before and after sintering was quantified by Archimedes method. The results indicated that by using the same sized alumina and aluminum powders particles, FGM sample can be produced by rotation times exceeding 7 min. However, by using coarse alumina and fine alumina powders the sample exhibits step concentration. On the other hand, using fine alumina and coarse alumina results in a relatively uniform concentration of Al2O3 along the sample height. These results are attributed to the effects of size and density of different powders on the centrifugal force induced on the powders during rotation. The PMMA concentration and the vol.% of alumina in the suspension did not have any considerable effect on the distribution of alumina particles in the samples. The hardness profiles along the height of samples were affected by both the alumina vol.% and porosity content. The presence of alumina particles increased the hardness while increased porosity reduced the hardness. Therefore, the hardness values did not show the expected gradient in same sample. The sintering resulted in decreased porosity for all the samples investigated.

Keywords: FGM, powder metallurgy, centrifugal method, polymeric suspension

Procedia PDF Downloads 209
1078 Using a Phenomenological Approach to Explore the Experiences of Nursing Students in Coping with Their Emotional Responses in Caring for End-Of-Life Patients

Authors: Yun Chan Lee

Abstract:

Background: End-of-life care is a large area of all nursing practice and student nurses are likely to meet dying patients in many placement areas. It is therefore important to understand the emotional responses and coping strategies of student nurses in order for nursing education systems to have some appreciation of how nursing students might be supported in the future. Methodology: This research used a qualitative phenomenological approach. Six student nurses understanding a degree-level adult nursing course were interviewed. Their responses to questions were analyzed using interpretative phenomenological analysis. Finding: The findings identified 3 main themes. First, the common experience of ‘unpreparedness’. A very small number of participants felt that this was unavoidable and that ‘no preparation is possible’, the majority felt that they were unprepared because of ‘insufficient input’ from the university and as a result of wider ‘social taboos’ around death and dying. The second theme showed that emotions were affected by ‘the personal connection to the patient’ and the important sub-themes of ‘the evoking of memories’, ‘involvement in care’ and ‘sense of responsibility’. The third theme, the coping strategies used by students, seemed to fall into two broad areas those ‘internal’ with the student and those ‘external’. In terms of the internal coping strategies, ‘detachment’, ‘faith’, ‘rationalization’ and ‘reflective skills’ are the important components of this part. Regarding the external coping strategies, ‘clinical staff’ and ‘the importance of family and friends’ are the importance of accessing external forms of support. Implication: It is clear that student nurses are affected emotionally by caring for dying patients and many of them have apprehension even before they begin on their placements but very often this is unspoken. Those anxieties before the placement become more pronounced during and continue after the placements. This has implications for when support is offered and possibly its duration. Another significant point of the study is that participants often highlighted their wish to speak to qualified nurses after their experiences of being involved in end-of-life care and especially when they had been present at the time of death. Many of the students spoke that qualified nurses were not available to them. This seemed to be due to a number of reasons. Because the qualified nurses were not available, students had to make use of family members and friends to talk to. Consequently, the implication of this study is not only to educate student nurses but also to educate the qualified mentors on the importance of providing emotional support to students.

Keywords: nursing students, coping strategies, end-of-life care, emotional responses

Procedia PDF Downloads 161
1077 Reconceptualizing “Best Practices” in Public Sector

Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani

Abstract:

Public sector managers frequently herald that implementing best practices as a set of standards, may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is, still remains a black box that is yet to be investigated, given the trend of continuous changes in public sector performance, as well as the burgeoning interest of sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, like benchmarkers, during the benchmarking-mediated performance improvement process and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches on best practice nature when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. Firstly, we observed the benchmarker’s management process of best practices in a public organization, so as to map their theories-in-use. As a second step we contextualized best administrative practices by reflecting the different perspectives emerged from the previous stage on the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with “best practice” process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work. Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol theoretically informed in the first part to spot causal mechanisms suggested by previous research studies and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that shed light on explaining their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization’s best practice management.

Keywords: benchmarking, action research, critical realism, best practices, public sector

Procedia PDF Downloads 127
1076 Forecasting Regional Data Using Spatial Vars

Authors: Taisiia Gorshkova

Abstract:

Since the 1980s, spatial correlation models have been used more often to model regional indicators. An increasingly popular method for studying regional indicators is modeling taking into account spatial relationships between objects that are part of the same economic zone. In 2000s the new class of model – spatial vector autoregressions was developed. The main difference between standard and spatial vector autoregressions is that in the spatial VAR (SpVAR), the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, VAR is a special case of SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of spatial VAR in the absence of time lags. Two specifications of SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and regional CPI are used as endogenous variables. The lags of GRP, CPI and the unemployment rate were used as explanatory variables. For comparison purposes, the standard VAR without spatial correlation was used as “naïve” model. In the first specification of SpVAR the unemployment rate and the values of depending variables, GRP and CPI, in neighboring regions at the same moment of time t were included in equations for GRP and CPI respectively. To account for the values of indicators in neighboring regions, the adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1, and the rest - 0. In the second specification the values of depending variables in neighboring regions at the moment of time t were replaced by these values in the previous time moment t-1. According to the results obtained, when inflation and GRP of neighbors are added into the model both inflation and GRP are significantly affected by their previous values, and inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is not affected by either the inflation lag or the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at a 5% significance level. For GRP, in turn, GRP lags in neighboring regions also become significant at a 5% significance level. For both spatial and “naïve” VARs the RMSE were calculated. The minimum RMSE are obtained via SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions

Keywords: forecasting, regional data, spatial econometrics, vector autoregression

Procedia PDF Downloads 141