Search results for: sequential extraction process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16984

874 De-Pigmentary Effect of Ayurvedic Treatment on Hyper-Pigmentation of Skin Due to Chloroquine: A Case Report

Authors: Sunil Kumar, Rajesh Sharma

Abstract:

Toxic epidermal necrolysis, pruritus, rashes, lichen planus-like eruption, and hyper-pigmentation of the skin are rare toxic effects of chloroquine used over a long time. Skin and mucous membrane hyper-pigmentation is generally of a bluish-black or grayish color and irreversible after discontinuation of the drug. According to Ayurveda, Dushivisha is the name given to any poisonous substance which is not fully endowed with the qualities of poison by nature (i.e., it acts as an impoverished or weak poison) and, because of its mild potency, remains in the body for many years causing various symptoms, one among them being discoloration of the skin. The objective of this case report is to investigate the effect of Ayurvedic management of chloroquine-induced hyper-pigmentation on the line of treatment of Dushivisha. Case Report: A 26-year-old female, who had been suffering from hyper-pigmentation of the skin over the neck, forehead, temporo-mandibular joints, upper back, and posterior aspect of both arms for 8 years and had a history of taking chloroquine, came to the Out-Patient Department of the National Institute of Ayurveda, Jaipur, India, in January 2015. The routine investigations (CBC, ESR, eosinophil count) were within normal limits. A punch biopsy of the skin studied for histopathology under hematoxylin and eosin staining showed epidermis with hyper-pigmentation of the basal layer. In the papillary dermis as well as the deep dermis there were scattered melanophages along with infiltration by mononuclear cells. There was no deposition of amyloid-like substances. These histopathological findings were suggestive of chloroquine-induced hyper-pigmentation. The case was treated on the line of treatment of Dushivisha and was given Vamana and Virechana (therapeutic emesis and purgation) every six months, followed by Snehana karma (oleation therapy) with Panchatikta Ghrit and Swedana (sudation). Arogyavardhini Vati 1 g, Dushivishari Vati 500 mg, and Mahamanjisthadi Quath 20 ml were given twelve hourly, and Aragwadhadi Quath 25 ml at bedtime orally. The patient started showing lightening of the pigmentation after six months and almost complete remission after 12 months of the treatment. Conclusion: This patient presented with the Dushivisha effect of chloroquine and was administered two relevant procedures from Panchakarma, viz. Vamana and Virechana. Both Vamana and Virechana karma, referred to here as Shodhana karma (purification procedures), eliminate accumulated toxins from the body. In this process, oleation dislodges the toxins from the tissues and sudation helps to bring them to the alimentary tract. The line of treatment did not target direct hypo-pigmentary effects; rather, it aimed to eliminate the Dushivisha. This gave promising results in this condition.

Keywords: Ayurveda, chloroquine, Dushivisha, hyper-pigmentation

Procedia PDF Downloads 234
873 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" contracts because of the underlying technology that allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific situations. The transmission happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to the insurance and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Managing supply chains involves many issues, from the planning and coordination stages onward, which, due to their complexity, can be implemented in a smart contract on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when intermediaries are eliminated from the equation. The use of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because there has been little research done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.
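As a minimal sketch of the "machine-readable instructions executed when conditions are met" idea described above, the following Python fragment models a simplified cold-chain payment-release rule. The class, field names, and thresholds are illustrative assumptions, not part of the paper; a real deployment would be written in a blockchain platform's own contract language.

```python
from dataclasses import dataclass

@dataclass
class Shipment:
    """Hypothetical on-chain record for one consignment."""
    temperature_log: list      # readings reported by IoT sensors (degrees Celsius)
    delivered: bool            # set by the carrier's signed delivery confirmation
    max_temp_allowed: float = 8.0  # assumed cold-chain limit

def release_payment(shipment: Shipment) -> str:
    """Smart-contract-style rule: pay automatically only if the agreed
    conditions (delivery confirmed, cold chain respected) are met."""
    if not shipment.delivered:
        return "HOLD: delivery not yet confirmed"
    if any(t > shipment.max_temp_allowed for t in shipment.temperature_log):
        return "DISPUTE: temperature excursion recorded"
    return "RELEASE: transfer funds to supplier"

# Example run with illustrative sensor data
print(release_payment(Shipment(temperature_log=[4.2, 5.1, 6.0], delivered=True)))
```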

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 127
872 The Lighthouse Project: Recent Initiatives to Navigate Australian Families Safely Through Parental Separation

Authors: Kathryn McMillan

Abstract:

A recent study of 8,500 adult Australians aged 16 and over revealed that 62% had experienced childhood maltreatment. In response to multiple recommendations by bodies such as the Australian Law Reform Commission, parliamentary reports, and stakeholder input, a number of key initiatives have been developed to grapple with the difficulties of a federal-state system and to screen and triage high-risk families navigating their way through the court system. The Lighthouse Project (LHP) is a world-first initiative of the Federal Circuit and Family Court of Australia (FCFCOA) to screen family law litigants for major risk factors, including family violence, child abuse, alcohol or substance abuse, and mental ill-health, at the point of filing in all applications that seek parenting orders. It commenced on 7 December 2020 on a pilot basis but has now been expanded to 15 registries across the country. A specialist risk screen, Family DOORS Triage, has been developed, focused on improving the safety and wellbeing of families involved in the family law system through safety planning and service referral, and on differentiated case management based on risk level, with the Evatt List specifically designed to manage the highest-risk cases. Early signs are that this approach is meeting the needs of families with multiple risks moving through the court system. Before the LHP, there was no data available about the prevalence of risk factors experienced by litigants entering the family courts, and it was often assumed that it was the litigation process that was fueling family violence and other risks such as suicidality. Data from the 2022 FCFCOA annual report indicated that in parenting proceedings, 70% alleged a child had been abused or was at risk of abuse, 80% alleged a party had experienced family violence, 74% of children had been exposed to family violence, 53% alleged that substance misuse by a party had caused or risked causing harm to children, and 58% of matters alleged that mental health issues of a party had caused or placed a child at risk of harm. Those figures reveal the significant overlap between child protection and family violence, both of which are under the responsibility of state and territory governments. Since 2020, a further key initiative has been the co-location of child protection and police officials in a number of registries of the FCFCOA. The ability to access, in a time-effective way, details of family violence or child protection orders, weapons licences, and criminal convictions or proceedings is key to managing issues across the state and federal divide. It ensures a more cohesive and effective response across the family law, family violence, and child protection systems.

Keywords: child protection, family violence, parenting, risk screening, triage

Procedia PDF Downloads 77
871 The Effects of Periostin in a Rat Model of Isoproterenol-Mediated Cardiotoxicity

Authors: Mahmut Sozmen, Alparslan Kadir Devrim, Yonca Betil Kabak, Tuba Devrim

Abstract:

Acute myocardial infarction is the leading cause of death worldwide. Mature cardiomyocytes do not have the ability to regenerate; instead, fibrous tissue proliferates and granulation tissue fills in. Periostin is an extracellular matrix protein from the fasciclin family, and it plays an important role in cell adhesion, migration, and growth of the organism. Periostin prevents apoptosis while stimulating cardiomyocytes. The main objective of this project is to investigate the effects of recombinant murine periostin peptide administration on cardiomyocyte regeneration in a rat model of acute myocardial infarction. The experiment was performed on 84 male rats (6 months old) in 4 groups, each containing 21 rats. Saline was applied subcutaneously (1 ml/kg) twice at 24-hour intervals to the rats in the control group (Group 1). Recombinant periostin peptide (1 μg/kg) dissolved in saline was applied intraperitoneally in Group 2 on days 1, 3, 7, 14, and 21, and on the same days in Group 4. Isoproterenol dissolved in saline was applied intraperitoneally (85 mg/kg/day) twice at 24-hour intervals to Groups 3 and 4. Rats in Group 4 further received recombinant periostin peptide (1 μg/kg) dissolved in saline intraperitoneally, starting one day after the final isoproterenol administration, on days 1, 3, 7, 14, and 21. Following the final application of periostin, rats continued to be fed routinely with pelleted chow and water ad libitum for a further seven days. At the end of the 7th day, rats were sacrificed, and blood and heart tissue samples were collected for immunohistochemical and biochemical analyses. Angiogenesis in response to tissue damage is a highly dynamic process regulated by signals from the surrounding extracellular matrix and blood serum. In this project, VEGF, ANGPT, bFGF, and TGFβ, the key factors that contribute to cardiomyocyte regeneration, were investigated. Additionally, the relationship between mitosis and apoptosis (Bcl-2, Bax, PCNA, Ki-67, Phospho-Histone H3), cell cycle activators and inhibitors (Cyclin D1, D2, A2, Cdc2), and the origin of regenerating cells (cKit and CD45) were examined. Present results revealed that periostin stimulated cardiomyocyte cell-cycle re-entry in both normal and MCA-damaged cardiomyocytes and increased angiogenesis. Thus, periostin contributes to cardiomyocyte regeneration during the healing period following myocardial infarction, which provides a better understanding of its role in this mechanism, improves recovery rates, and is expected to help address the lack of literature on this subject. Acknowledgement: This project was financially supported by the Turkish Scientific Research Council - Agriculture, Forestry and Veterinary Research Support Group (TUBİTAK-TOVAG; Project No: 114O734), Ankara, Turkey.

Keywords: cardiotoxicity, immunohistochemistry, isoproterenol, periostin

Procedia PDF Downloads 234
870 Stable Diffusion, Context-to-Motion Model for Augmenting Dexterity of Prosthetic Limbs

Authors: André Augusto Ceballos Melo

Abstract:

This work is designed to facilitate the recognition of congruent prosthetic movements through context-to-motion translations guided by images, verbal prompts, and users' nonverbal communication such as facial expressions, gestures, paralinguistics, scene context, and object recognition; these cues contribute to the process, though the approach can also be applied to other tasks, such as walking. Prosthetic limbs thus serve as assistive technology driven through gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach that is designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can help to improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use for the user. Second, it can help to improve the accuracy and precision of prosthetic limb movements, which can be particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback. This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary first to collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time. The model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, offering a possibility for users to maximize their quality of life, social interaction, and gesture communication.
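As a hedged sketch of the training loop described above (collect a dataset of desired movements, fit a model that maps sensory observations to control inputs, then use it in real time), the Python fragment below uses a plain linear least-squares mapping. The array names, dimensions, and synthetic data are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Illustrative dataset: each row pairs a sensory observation (e.g. EMG and
# joint-angle features) with the control inputs that produced the desired movement.
rng = np.random.default_rng(0)
observations = rng.normal(size=(500, 12))          # 500 samples, 12 sensor features
true_mapping = rng.normal(size=(12, 4))            # unknown "context-to-motion" relation
controls = observations @ true_mapping + 0.01 * rng.normal(size=(500, 4))

# Fit a linear model control = observation @ W (a stand-in for a learned policy).
W, *_ = np.linalg.lstsq(observations, controls, rcond=None)

def predict_control(sensor_reading: np.ndarray) -> np.ndarray:
    """Real-time step: map the current sensory input to actuator commands."""
    return sensor_reading @ W

print(predict_control(observations[0]))            # commands for one observation
```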

Keywords: stable diffusion, neural interface, smart prosthetic, augmenting

Procedia PDF Downloads 101
869 Reactivities of Turkish Lignites during Oxygen Enriched Combustion

Authors: Ozlem Uguz, Ali Demirci, Hanzade Haykiri-Acma, Serdar Yaman

Abstract:

Lignitic coal holds its position as Turkey's most important indigenous energy source for generating energy in thermal power plants. Hence, efficient and environmentally friendly use of lignite in electricity generation is of great importance. Thus, clean coal technologies have been planned to mitigate emissions and provide more efficient burning in power plants. In this context, oxygen enriched combustion (oxy-combustion) is regarded as one of the clean coal technologies, which is based on burning with oxygen concentrations higher than that in air. Since most Turkish coals are low rank with high mineral matter content, the unburnt carbon trapped in ash is, unfortunately, high, and it leads to significant losses in the overall efficiencies of the thermal plants. Besides, the necessity of burning huge amounts of these low calorific value lignites to get the desired amount of energy also results in the formation of large amounts of ash that is rich in unburnt carbon. Oxygen enriched combustion technology makes it possible to increase the burning efficiency through the complete burning of almost all of the carbon content of the fuel. This also contributes to the protection of air quality, and emission levels drop considerably. The aim of this study is to investigate the unburnt carbon content and the burning reactivities of several different lignite samples under oxygen enriched conditions. For this reason, the combined effects of temperature and oxygen/nitrogen ratios in the burning atmosphere were investigated and interpreted. To do this, Turkish lignite samples from the Adıyaman-Gölbaşı and Kütahya-Tunçbilek regions were characterized first by proximate and ultimate analyses, and the burning profiles were derived using DTA (Differential Thermal Analysis) curves. Then, these lignites were subjected to a slow burning process in a horizontal tube furnace at different temperatures (200ºC, 400ºC, 600ºC for the Adıyaman-Gölbaşı lignite and 200ºC, 450ºC, 800ºC for the Kütahya-Tunçbilek lignite) under atmospheres having O₂+N₂ proportions of 21%O₂+79%N₂, 30%O₂+70%N₂, 40%O₂+60%N₂, and 50%O₂+50%N₂. These burning temperatures were specified based on the burning profiles derived from the DTA curves. The residues obtained from these burning tests were also analyzed by proximate and ultimate analyses to detect the unburnt carbon content along with the unused energy potential. Reactivity of these lignites was calculated using several methodologies. The burning yield under the air condition (21%O₂+79%N₂) was used as a benchmark value to compare the effectiveness of oxygen enriched conditions. It was concluded that the oxygen enriched combustion method enhanced the combustion efficiency and lowered the unburnt carbon content of the ash. Combustion of low-rank coals under oxygen enriched conditions was found to be a promising way to improve the efficiency of lignite-firing energy systems. However, a cost-benefit analysis should be considered for a better justification of this method, since the use of more oxygen brings a non-negligible additional cost.
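The comparison against the air benchmark described above can be illustrated with a short Python sketch. The burnout formula and all sample numbers below are assumptions for illustration only, not data or methodology from the study.

```python
# Hedged sketch: compare burning yields under O2-enriched atmospheres with the
# 21% O2 (air) benchmark. All figures are illustrative placeholders.
def burnout(initial_combustible_g: float, unburnt_carbon_g: float) -> float:
    """Fraction of the combustible matter actually burned."""
    return 1.0 - unburnt_carbon_g / initial_combustible_g

benchmark = burnout(initial_combustible_g=10.0, unburnt_carbon_g=1.8)   # 21% O2 (air)
for o2_share, unburnt in [(30, 1.2), (40, 0.8), (50, 0.5)]:
    enriched = burnout(10.0, unburnt)
    gain = (enriched - benchmark) / benchmark * 100
    print(f"{o2_share}% O2: burnout {enriched:.2f} ({gain:+.1f}% vs. air benchmark)")
```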

Keywords: coal, energy, oxygen enriched combustion, reactivity

Procedia PDF Downloads 274
868 Tuberculosis (TB) and Lung Cancer

Authors: Asghar Arif

Abstract:

Lung cancer has been recognized as one of the most common cancers, causing an annual mortality of about 1.2 million people in the world. Lung cancer is the most prevalent cancer in men and the third-most common cancer among women (after breast and digestive cancers). Recent evidence has shown the inflammatory process to be one of the potential factors in cancer. Tuberculosis (TB), pneumonia, and chronic bronchitis are among the most important inflammation-inducing factors in the lungs, among which TB has a more profound role in the emergence of cancer. TB is one of the important mortality factors throughout the world, and 205,000 death cases are reported annually due to this disease. Chronic inflammation and fibrosis due to TB can induce genetic mutations and alterations. The parenchymal tissue of the lung is involved in both TB and lung cancer, and continuous cough in lung cancer, morphological vascular variations, lymphocytosis processes, and the generation of immune system mediators such as interleukins are all among the factors leading to the hypothesis regarding the role of TB in lung cancer. Some reports have shown that the induction of necrosis and apoptosis or TB reactivation, especially in patients with immune deficiency, may result in increasing IL-17 and TNF-α, which will either decrease P53 activity or increase the expression of Bcl-2, decrease Bax-T, and cause the inhibition of caspase-3 expression due to decreased expression of mitochondrial cytochrome oxidase. It has also been indicated that following the injection of the BCG vaccine, the host immune system is reinforced, and in particular, the levels of gamma interferon, nitric oxide, and interleukin-2 are increased. Therefore, CD4+ lymphocyte function will be improved, and the person will be immune against cancer. Numerous prospective studies have so far been conducted on the role of TB in lung cancer, and it seems that this disease is implicated in that particular cancer. One of the main challenges of lung cancer is its correct and timely diagnosis. Unfortunately, clinical symptoms (such as continuous cough, hemoptysis, weight loss, fever, chest pain, dyspnea, and loss of appetite) and radiological images are similar in TB and lung cancer. Therefore, anti-TB drugs are routinely prescribed for patients in countries with a high prevalence of TB, like Pakistan. Given the similarity in clinical symptoms and radiological findings of lung cancer, proper diagnosis is necessary for TB and respiratory infections due to nontuberculous mycobacteria (NTM). Some of the drug-resistant TB cases are, in fact, lung cancer or NTM lung infections. Acid-fast staining and histological study of phlegm and bronchial washing, culture, and polymerase chain reaction for TB are among the most important solutions for the differential diagnosis of these diseases. Briefly, it is assumed that TB is one of the risk factors for cancer. Numerous studies have been conducted in this regard throughout the world, and it has been observed that there is a significant relationship between previous TB infection and lung cancer. However, to prove this hypothesis, further and more extensive studies are required. In addition, as the clinical symptoms and radiological findings of TB, lung cancer, and non-TB mycobacterial lung infections are similar, they can be misdiagnosed as TB.

Keywords: TB and lung cancer, TB people, TB survivors, TB and HIV/AIDS

Procedia PDF Downloads 73
867 Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts

Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris

Abstract:

The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects when human factors engineering is applied to remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of the grafts. A high percentage of solid organs arrive at the recipient hospitals and are considered injured or improper for transplantation in the UK. Digital microscopy adds information on a microscopic level about the grafts (G) in Organ Transplant (OT) and may lead to a change in their management. Such a method will reduce the possibility that a diseased G will arrive at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM) based on virtual slides, on telemedicine systems (TS), for tele-pathological evaluation (TPE) of the grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for microscopic TPE of renal graft (RG), liver graft (LG), and pancreatic graft (PG) tissues is analyzed. In fact, this corresponded to the ergonomics of digital microscopy for TPE in OT by applying a virtual slide (VS) system for graft tissue image capture, for remote diagnoses of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS), similar to an OTE-TS, for simulating the integrated VS-based microscopic TPE of RG, LG, and PG. Simulation of DM on TS-based TPE was performed by 2 specialists on a total of 238 human renal graft (RG), 172 liver graft (LG), and 108 pancreatic graft (PG) digital microscopic tissue images, for inflammatory and neoplastic lesions, on the four electronic spaces of the four TS used. Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose the diseased RG, LG, and PG tissues on the electronic space among the four TS (A, B, C, D) showed that DM on TS for TPE in OT is accomplished best on the electronic space (ES) of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES seem significantly risky for the application of DM in OT (p<.001). Conclusion: To make the largest reduction in errors and adverse events referring to the quality of the grafts, it will take application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG, and PG tissues after retrieval seems feasible and reliable, and depends on the size of the electronic space of the applied TS, for remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital and for post-grafting and pre-transplant planning.

Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides

Procedia PDF Downloads 281
866 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology

Authors: Sanjeev Kumar Appicharla

Abstract:

This paper presents the results of the modelling and analysis of a European Rail Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, to raise awareness of biases in the systems engineering process, using RAIB 17/2019 as a primary input. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, and underlying factors, and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identify latent failure conditions (potentially less than adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold: first, it incorporates the "heuristics and biases" approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof. Daniel Kahneman, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and are aware of bow-tie (fault and event tree) model-based safety risk modelling techniques. However, the role of systematic errors due to heuristics and biases is not yet appreciated. This overcomes the problem of the omission of human and organizational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms and transport planners, and front-line staff, such that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioners' and academic researchers' publications as well. This is to discuss the role of systems thinking in improving the decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as GB railways and artificial intelligence (AI) contexts as well.

Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach

Procedia PDF Downloads 188
865 Role of Lipid-Lowering Treatment in the Monocyte Phenotype and Chemokine Receptor Levels after Acute Myocardial Infarction

Authors: Carolina N. França, Jônatas B. do Amaral, Maria C.O. Izar, Ighor L. Teixeira, Francisco A. Fonseca

Abstract:

Introduction: Atherosclerosis is a progressive disease characterized by lipid and fibrotic element deposition in large-caliber arteries. Conditions related to the development of atherosclerosis, such as dyslipidemia, hypertension, diabetes, and smoking, are associated with endothelial dysfunction. There is a frequent recurrence of cardiovascular outcomes after acute myocardial infarction and, in this sense, cycles of mobilization of monocyte subtypes (classical, intermediate, and nonclassical) secondary to myocardial infarction may determine the colonization of atherosclerotic plaques at different stages of development, contributing to early recurrence of ischemic events. The recruitment of different monocyte subsets during the inflammatory process requires the expression of the chemokine receptors CCR2, CCR5, and CX3CR1 to promote the migration of monocytes to the inflammatory site. The aim of this study was to evaluate the effect of six months of lipid-lowering treatment on the monocyte phenotype and chemokine receptor levels of patients after Acute Myocardial Infarction (AMI). Methods: This is a PROBE (prospective, randomized, open-label trial with blinded endpoints) study (ClinicalTrials.gov Identifier: NCT02428374). Adult patients (n=147) of both genders, aged 18-75 years, were randomized in a 2x2 factorial design to treatment with rosuvastatin 20 mg/day or simvastatin 40 mg/day plus ezetimibe 10 mg/day, as well as ticagrelor 90 mg twice daily or clopidogrel 75 mg/day, in addition to conventional AMI therapy. Blood samples were collected at baseline and after one month and six months of treatment. Monocyte subtypes (classical - inflammatory, intermediate - phagocytic, and nonclassical - anti-inflammatory) were identified, quantified, and characterized by flow cytometry, and the expression of the chemokine receptors (CCR2, CCR5, and CX3CR1) was also evaluated in mononuclear cells. Results: After six months of treatment, there was an increase in the percentage of classical monocytes and a reduction in nonclassical monocytes (p=0.038 and p < 0.0001, Friedman test), without differences for intermediate monocytes. In addition, classical monocytes had higher expression of CCR5 and CX3CR1 after treatment, without differences related to CCR2 (p < 0.0001 for CCR5 and CX3CR1; p=0.175 for CCR2). Intermediate monocytes had higher expression of CCR5 and CX3CR1 and lower expression of CCR2 (p = 0.003, p < 0.0001, and p = 0.011, respectively). Nonclassical monocytes had lower expression of CCR2 and CCR5, without differences for CX3CR1 (p < 0.0001, p = 0.009, and p = 0.138, respectively). There were no differences in the comparison between the four treatment arms. Conclusion: The data suggest a time-dependent modulation of classical and nonclassical monocytes and chemokine receptor levels. The higher percentage of classical monocytes (inflammatory cells) suggests a residual inflammatory risk, even under the recommended treatments for AMI. Indeed, these changes do not seem to be affected by the choice of lipid-lowering strategy.
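The repeated-measures comparison reported above (baseline, one month, six months, Friedman test) can be sketched in Python as follows. The values are synthetic placeholders, not the study's data; only the test itself matches the analysis named in the abstract.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Synthetic placeholder data: % classical monocytes per patient at three time points.
rng = np.random.default_rng(1)
baseline   = rng.normal(82, 5, size=30)
one_month  = rng.normal(84, 5, size=30)
six_months = rng.normal(88, 5, size=30)

# Friedman test for a within-subject effect of time on the monocyte percentage.
stat, p_value = friedmanchisquare(baseline, one_month, six_months)
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.4f}")
```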

Keywords: acute myocardial infarction, chemokine receptors, lipid-lowering treatment, monocyte subtypes

Procedia PDF Downloads 119
864 Carbon Footprint of Educational Establishments: The Case of the University of Alicante

Authors: Maria R. Mula-Molina, Juan A. Ferriz-Papi

Abstract:

Environmental concerns are increasingly obtaining higher priority in the sustainability agenda of educational establishments. This is important not only for their environmental performance in their own right as organizations, but also to present a model for their students. On the other hand, universities play an important role in research and innovative solutions for measuring, analyzing, and reducing the environmental impacts of different activities. The assessment and decision-making process during the activity of educational establishments is linked to the application of robust indicators. In this regard, the carbon footprint is a developing indicator for sustainability that helps understand the direct impact on climate change. But it is not easy to implement. There is a large number of factors involved that increases its complexity, such as different uses at the same time (research, lecturing, administration), different users (students, staff), or different levels of activity (lecturing, exam or holiday periods). The aim of this research is to develop a simplified methodology for calculating and comparing carbon emissions per user at a university campus, considering two main aspects for carbon accounting: building operations and transport. Different methodologies applied in other Spanish university campuses are analyzed and compared to obtain a final proposal to be developed in this type of establishment. First, the building operation calculation considers the different uses and energy sources consumed. Second, for the transport calculation, the different users and working hours are calculated separately, as well as their origin and travelling preferences. For every transport mode, a different conversion factor is used depending on the carbon emissions produced. The final result is obtained as an average of carbon emissions produced per user. A case study is applied to the University of Alicante campus in San Vicente del Raspeig (Spain), where the carbon footprint is calculated. While building operation consumption is known per building and month, this is not the case for transport. Only one survey about users' transport habits was carried out, in 2009/2010, so no evolution of results can be shown in this case. Besides, building operations are not split per use, as building services are not monitored separately. These results are analyzed in depth, considering all factors and limitations. Besides, they are compared to estimations made for other campuses. Finally, the application of the presented methodology is also studied. The recommendations concluded from this study aim to enhance carbon emission monitoring and control. A Carbon Action Plan is then a primary solution to be developed. On the other hand, the application developed for the University of Alicante campus can not only further enhance the methodology itself, but also render adoption by other educational establishments more readily possible, and yet with a considerable degree of flexibility to cater for their specific requirements.
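A minimal Python sketch of the simplified calculation described above (building-operation emissions plus transport emissions, averaged per campus user) is shown below. The emission factors and campus figures are illustrative assumptions, not University of Alicante data.

```python
# Hedged sketch of the per-user campus carbon footprint calculation.
# Conversion factors (kg CO2e per unit) are placeholders, not official values.
ENERGY_FACTORS_KG_PER_KWH = {"electricity": 0.25, "natural_gas": 0.20}
TRANSPORT_FACTORS_KG_PER_KM = {"car": 0.17, "bus": 0.07, "train": 0.04, "walk_bike": 0.0}

def campus_footprint_per_user(energy_kwh: dict, trips_km: dict, users: int) -> float:
    """Annual kg CO2e per user from building operations and commuting."""
    buildings = sum(kwh * ENERGY_FACTORS_KG_PER_KWH[src] for src, kwh in energy_kwh.items())
    transport = sum(km * TRANSPORT_FACTORS_KG_PER_KM[mode] for mode, km in trips_km.items())
    return (buildings + transport) / users

# Example with invented annual totals for a campus of 30,000 users.
print(campus_footprint_per_user(
    energy_kwh={"electricity": 18_000_000, "natural_gas": 4_000_000},
    trips_km={"car": 25_000_000, "bus": 9_000_000, "train": 3_000_000},
    users=30_000,
))
```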

Keywords: building operations, built environment, carbon footprint, climate change, transport

Procedia PDF Downloads 295
863 Sculpted Forms and Sensitive Spaces: Walking through the Underground in Naples

Authors: Chiara Barone

Abstract:

In Naples, the visible architecture is only what emerges from the underground. Caves and tunnels cross it in every direction, intertwining with each other. They are not natural caves but spaces built by removing what is superfluous in order to dig a form out of the material. Architects, as sculptors of space, do not determine the exterior, what surrounds the volume and in which the forms live, but an interior underground space, perceptive and sensitive, able to generate new emotions each time. It is an intracorporeal architecture linked to the body, not in its external relationships, but rather in what happens inside. The proposed work aims to reflect on the design of underground spaces in the Neapolitan city. The idea is to conceive of the underground as a spectacular museum of the city, an opportunity to learn in situ the history of the place along an unpredictable itinerary that crosses the caves and, at certain points, emerges, escaping from the world of shadows. Starting from the analysis and study of the many overlapping elements, the archaeological layer, the geological layer, and the contemporary city above, it is possible to develop realistic alternatives for underground itineraries. The objective is to define minor paths that ensure continuity between the tourist flows and entire underground segments already investigated but now disconnected: open-air paths which plunge into the earth, retracing historical and preserved fragments. The visitor, in this way, passes from real spaces to sensitive spaces, in which the imaginary replaces real experience, leading towards exciting and secret knowledge. To safeguard the complex framework of historical-artistic values, it is essential to use a multidisciplinary methodology based on a global approach. Moreover, it is essential to refer to similar design projects for the archaeological underground, capable of guiding action strategies, looking at similar conditions in other cities where the project has led to an enhancement of the heritage in the city. The research limits the field of investigation by choosing the historic center of Naples, applying bibliographic and theoretical research to a real place. First of all, it is necessary to deepen knowledge of the places, understanding the potentialities of the project as a link between what is below and what is above. Starting from a scientific approach, in which theory and practice are constantly intertwined through the architectural project, the major contribution is to provide possible alternative configurations for the underground space and its relationship with the city above, understanding how the condition of transition, as passage between the below and the above, becomes structuring in the design process. Starting from the consideration of the underground as both a real physical place and a sensitive place, which engages the memory, imagination, and sensitivity of man, the research aims at identifying possible configurations and actions useful for future urban programs to make the underground a central part of the lived city again.

Keywords: underground paths, invisible ruins, imaginary, sculpted forms, sensitive spaces, Naples

Procedia PDF Downloads 103
862 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy is to understand a person's cognitive perspective and experience the other person's emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation by which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and when the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed or desynchronize and thus should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses to empathy and psychopathy scales, in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the 'pain matrix'. Physical and social pain activate this network to resonate vicarious pain responses when processing empathy. Five single EEG electrode locations were applied to regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant's mu rhythms were analyzed via Fast Fourier Transformation (FFT) and multifractal time series analysis.
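The mu-band analysis described above (8-13 Hz power from an FFT of a 200 Hz EEG trace) can be sketched in Python as follows. The signal here is synthetic, not study data; in the experiment, desynchronization would appear as lower mu power during injury clips than at baseline.

```python
import numpy as np

fs = 200                                   # sampling rate reported in the study (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s of synthetic signal
eeg = 10 * np.sin(2 * np.pi * 10 * t) + np.random.default_rng(0).normal(0, 2, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2   # power spectrum via FFT
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
mu_band = (freqs >= 8) & (freqs <= 13)     # mu rhythm band
mu_power = spectrum[mu_band].mean()

print(f"Mean mu-band (8-13 Hz) power: {mu_power:.1f} (arbitrary units)")
```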

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 283
861 A Comparative Study of Motion Events Encoding in English and Italian

Authors: Alfonsina Buoniconto

Abstract:

The aim of this study is to investigate the degree of cross-linguistic and intra-linguistic variation in the encoding of motion events (MEs) in English and Italian, these being typologically different languages that both show signs of disobedience to their respective types. As a matter of fact, the traditional typological classification of ME encoding distributes languages into two macro-types, based on the preferred locus for the expression of Path, the main ME component (other components being Figure, Ground and Manner) characterized by conceptual and structural prominence. According to this model, Satellite-framed (SF) languages typically express Path information in verb-dependent items called satellites (e.g. preverbs and verb particles), with main verbs encoding Manner of motion; whereas Verb-framed (VF) languages tend to include Path information within the verbal locus, leaving Manner to adjuncts. Although this dichotomy is broadly valid, languages do not always behave according to their typical classification patterns. English, for example, is usually ascribed to the SF type due to the rich inventory of postverbal particles and phrasal verbs used to express spatial relations (i.e. the cat climbed down the tree); nevertheless, it is not uncommon to find constructions such as the fog descended slowly, which is typical of the VF type. Conversely, Italian is usually described as being VF (cf. Paolo uscì di corsa 'Paolo went out running'), yet SF constructions like corse via in lacrime 'She ran away in tears' are also frequent. This paper will try to demonstrate that such typological overlapping is due to the fact that the semantic units making up MEs are distributed within several loci of the sentence (not only verbs and satellites), thus determining a number of different constructions stemming from convergent factors. Indeed, the linguistic expression of motion events depends not only on the typological nature of languages in a traditional sense, but also on a series of morphological, lexical, and syntactic resources, as well as on inferential, discursive, usage-related, and cultural factors that make semantic information more or less accessible, frequent, and easy to process. Hence, rather than describe English and Italian in dichotomic terms, this study focuses on the investigation of cross-linguistic and intra-linguistic variation in the use of all the strategies made available by each linguistic system to express motion. Evidence for these assumptions is provided by parallel corpora analysis. The sample texts are taken from two contemporary Italian novels and their respective English translations. The 400 motion occurrences selected (200 in English and 200 in Italian) were scanned according to the MODEG (an acronym for Motion Decoding Grid) methodology, which grants data comparability through the indexation and retrieval of combined morphosyntactic and semantic information at different levels of detail.

Keywords: construction typology, motion event encoding, parallel corpora, satellite-framed vs. verb-framed type

Procedia PDF Downloads 261
860 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on all needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking list of sections for maintenance based on several factors. In priority setting, difficult decisions are required for the selection of sections for maintenance: whether it is more important to repair a section with poor functional condition, which includes an uncomfortable ride, or one with poor structural condition, i.e. a section that is in danger of becoming structurally unsound. It would seem therefore that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model that is suitable for use with the limited budget provisions for the maintenance of pavement. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, it is necessary to fit realistic data into a formulation. Each type of repair is quantified in a number of stretches by considering 1000 m as one stretch. The stretch considered in this study is 3750 m long. These quantities are put into an objective function for maximizing the number of repairs in a stretch related to quantity. The distresses observed in this stretch are potholes, surface cracks, rutting, and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures that are currently followed are based on subjective judgments. Hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine the pavement performance and deterioration prediction relationship more accurately, together with the economic benefits of road networks with respect to vehicle operating cost. The road network infrastructure should yield the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
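A minimal sketch of the budget-constrained linear-programming formulation described above is given below in Python. The decision variables, repair costs, bounds, and budget are illustrative placeholders, not the paper's quantities.

```python
from scipy.optimize import linprog

# Decision variables: number of 1000 m stretches treated for each distress type
# (potholes, surface cracks, rutting, ravelling). Maximize the total number of
# repaired stretches -> minimize the negative sum.
c = [-1, -1, -1, -1]
cost_per_stretch = [120_000, 60_000, 90_000, 45_000]   # assumed currency units per stretch
budget = 300_000                                        # assumed maintenance budget

res = linprog(
    c,
    A_ub=[cost_per_stretch],             # total repair cost must stay within the budget
    b_ub=[budget],
    bounds=[(0, 3.75)] * 4,              # at most 3750 m (3.75 stretches) per distress type
    method="highs",
)
print("Stretches repaired per distress type:", res.x.round(2))
print("Total repaired stretches:", round(-res.fun, 2))
```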

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 207
859 Gender and Asylum: A Critical Reassessment of the Case Law of the European Court of Human Right and of United States Courts Concerning Gender-Based Asylum Claims

Authors: Athanasia Petropoulou

Abstract:

While there is a common understanding that a person's sex, gender, gender identity, and sexual orientation shape every stage of the migration experience, theories of international migration had until recently not focused on exploring and incorporating a gender perspective in their analysis. In a similar vein, refugee law has long been the object of criticism for failing to recognize and respond appropriately to women's and sexual minorities' experiences of persecution. The present analysis attempts to depict the challenges faced by the European Court of Human Rights (ECtHR) and U.S. courts when adjudicating cases involving asylum claims with a gendered perspective. By providing a comparison between the adjudicating strategies of international and national jurisdictions, the article aims to identify common or distinctive approaches in addressing gender-based claims. The paper argues that, despite the different nature of the judicial bodies and the different legal instruments applied respectively, judges face similar challenges in this context and often fail to qualify and address the gendered dimensions of asylum claims properly. The ECtHR plays a fundamental role in safeguarding human rights protection in Europe, not only for European citizens but also for people fleeing violence, war, and dire living conditions. However, this role becomes more difficult to fulfill, not only because of the obvious institutional constraints but also because cases related to the claims of asylum seekers concern a domain closely linked to State sovereignty. Amid the current "refugee crisis," risk assessment performed by national authorities, as in the process of asylum determination, is shaped by wider geopolitical and economic considerations. The failure to recognize and duly address the gendered dimension of non-refoulement claims, one of the many shortcomings of these processes, is reflected in the decisions of the ECtHR. As regards U.S. case law, the study argues that U.S. courts either fail to draw any connection between asylum claims and their gendered dimension or tend to approach gender-based claims through the lens of the "political opinion" or "membership of a particular social group" grounds of fear of persecution. This exercise becomes even more difficult, taking into account that U.S. asylum law inappropriately qualifies gender-based claims. The paper calls for more sociologically informed decision-making practices and for a more contextualized and relational approach in the assessment of the risk of ill-treatment and persecution. Such an approach is essential for unearthing the gendered patterns of persecution and addressing related claims effectively, thus securing the human rights of asylum seekers.

Keywords: asylum, European court of human rights, gender, human rights, U.S. courts

Procedia PDF Downloads 108
858 Catalytic Alkylation of C2-C4 Hydrocarbons

Authors: Bolysbek Utelbayev, Tasmagambetova Aigerim, Toktasyn Raila, Markayev Yergali, Myrzakhanov Maxat

Abstract:

Intensive development of secondary processes for the destructive processing of crude oil has provided oil refining plants with resources of C2-C4 hydrocarbons. These refinery off-gases also consist basically of C2-C4 hydrocarbon gases, some amounts of which are burned. All these data have induced interest in the study of producing alkylate from C2-C4 hydrocarbons as components of motor fuels. The purpose of this work was to study the transformation of propane-propene and butane-butene fractions in the presence of a supported ruthenium-chromium catalyst, where the carrier is pillar-structured montmorillonite contained in native bentonite clay. This work considers the condition and structure of the bentonite clay from the South Kazakhstan region of the Republic of Kazakhstan. For the preparation of the supported rhodium catalyst (0.5-1.0 mass % Rh), rhodium chloride, RhCl3∙3H2O, was used, and modified bentonite clay was used as the carrier. For modifying the natural clay into the pillar-structured form, polyhydroxy complexes of chromium were used. A solution of sodium hydroxide was gradually added to an aqueous solution of chromium chloride under gradual stirring up to pH ~3-4. The concentration of chromium chloride was calculated on the basis of 5-30 mmol Cr3+ per gram of clay. A bentonite suspension (~1.0 mass %) was obtained by intensive washing in water for 4 h; the pH of the aqueous clay extract was 8-9. The acidity of the medium was monitored by means of a digital pH meter OP-208/1. In order to prevent coagulation of the solution of chromium polyhydroxy complexes, it was slowly added to the clay suspension. The "reserve of basicity" Cr3+/OH-, allowing coagulation to be prevented, was 1/3. After the processed clay suspensions had been aged for 24 h, the deposit was washed with water and thickened. The sample, after separation from the liquid phase, was dried first at room temperature and then at 110°C (2 h), with a subsequent rise of the temperature up to 180°C (4 h). After cooling, the solid mass was ground to a powder and sieved into fractions with certain particle sizes. Fractions of the modified clay particles were subsequently impregnated with an aqueous solution of rhodium chloride, RhCl3∙3H2O (0.5-1.0 mass % Rh). The obtained pillar-structured bentonite retains its heat resistance and porous structure above 773 K. The pillar-structured bentonite was used for the preparation of 1.0% Ru/carrier (modified bentonite) supported catalysts on which the alkylation of C2-C4 hydrocarbons is realised. The alkylation process is carried out at a hydrogen partial pressure of 0.5-1.0 MPa. The yield of 2,2,4-trimethylpentane and 2,2,3-trimethylpentane reached 40%. In the alkylation of the butane-butene mixture, the yield of isooctane reached 60%. Under the conditions studied, ethene did not undergo alkylation.

Keywords: alkylation, butene, pillar structure, ruthenium catalyst

Procedia PDF Downloads 396
857 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements

Authors: Denis A. Sokolov, Andrey V. Mazurkevich

Abstract:

In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of appropriate measuring instruments that are capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing the measurement results from these devices to a reference measurement based on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference rangefinder with the capability to measure angle increments (EDM). The base would serve as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of its own design is employed. The laser radiation travels to the corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of light passage according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the frequency of femtosecond pulse repetition. The achieved uncertainty of type A measurement of the distance to reflectors 64 m (N·D/2, where N is an integer) away and spaced apart relative to each other at a distance of 1 m does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of research in this study. The Type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be utilized to develop a highly accurate mobile absolute rangefinder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.
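The pulse-spacing relation quoted above, D/2 = c/(2nF), can be checked numerically with the short Python sketch below. The repetition frequency and refractive index are assumptions chosen only to reproduce the approximately 2.5 m figure; the abstract does not state the actual laser parameters.

```python
# Hedged check of D/2 = c / (2 n F) from the abstract.
c = 299_792_458.0        # speed of light in vacuum, m/s
n = 1.00027              # assumed refractive index of air
F = 60e6                 # assumed femtosecond-laser pulse repetition rate, Hz

half_D = c / (2 * n * F)
print(f"Reference-point spacing D/2 = {half_D:.3f} m")   # about 2.5 m
```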

Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement

Procedia PDF Downloads 59
856 Nature of Cities: Ontological Dimension of the Urban

Authors: Ana Cristina García-Luna Romero

Abstract:

This document seeks to reflect on the urban project from its conceptual identity root. In the first instance, a proposal is made on how the city project is sustained from the conceptual root, from the logos: it opens a way to assimilate the imagination; what we imagine becomes a reality. In this way, firstly, the need to use language as a vehicle for transmitting the stories that sustain us as humanity can be seen as an important social factor that enables social behavior. Secondly, the need to attend to written language as a mechanism of power, as a means to consolidate a dominant ideology or a political position, is raised; since written language served to carry out the modernization project, the differences between the real city and the lettered city are therefore addressed. Thus, the consolidated urban-architectural project is based on logos, the project, and planning. Considering the importance of materiality and its relation to subjective well-being, contextualized from a socio-urban approach, we ask how we can look at something that is in doubt. From a philosophical perspective, truth is considered to be nothing more than a matter of correspondence between the observer and the observed. To understand beyond the relativity of the gaze, it is necessary to set out different perspectives, since understanding depends on what is observed and how it is critically analyzed. Therefore, the analysis of materiality as a political field draws on a proposal, based on this research, of principles in transgenesis: communication, representativeness, security, health, malleability, availability of potentiality or development, conservation, sustainability, economy, harmony, stability, accessibility, justice, legibility, significance, consistency, joint responsibility, connectivity, beauty, among others. The (urban) human being acts because he or she wants to live in a certain way: in a community, in a fair way, with opportunity for development, with the possibility of managing the environment according to his or her needs, etc. In order to comply with this principle, it is necessary to design strategies from the principles in transgenesis, which must be named, defined, understood, and socialized by urban beings and by companies themselves. In this way, the technical status of the city in the neoliberal present creates extraordinary conditions for reflecting on an almost emergency scenario created by the impact of cities, one that, far from being limited to resilient proposals, must aim at reflection on the urban process that the present social model has generated. Can we, therefore, rethink the paradigm of the perception of quality of life under the current neoliberal model in the production of the character of public space related to the practices of being urban? What we are trying to do within this document is to build a framework to study under what logic the practices of the social system that give meaning to public space are developed, what the implications of the phenomena of the inscription of action and materialization (and their results for political action between the social and the technical system) are and, finally, how we can improve the quality of life of individuals through urban space.

Keywords: cities, nature, society, urban quality of life

Procedia PDF Downloads 124
855 Modified Graphene Oxide in Ceramic Composite

Authors: Natia Jalagonia, Jimsher Maisuradze, Karlo Barbakadze, Tinatin Kuchukhidze

Abstract:

At present, intensive scientific research on ceramics, cermets, and metal alloys is being conducted to improve their physical-mechanical characteristics. With the aim of increasing the impact strength of alumina-based ceramics, a simple method of graphene homogenization was developed. A homogeneous distribution of graphene (homogenization) in the pressing composite became possible by linking the functional groups of graphene oxide (-OH, -COOH, -O-O- and others) and the surface OH groups of alumina with aluminum organic compounds. These two components connect with each other through -O-Al–O- bonds, and upon thermal treatment (300–500°C) the graphene and alumina phases are formed. The choice of aluminum organic compounds for modification is motivated by the following consideration: the fragments of aluminum organic compounds fixed on graphene and alumina are ultimately transformed into an integral part of the matrix. If other elements are used as modifiers on the matrix (Al2O3) surface, other phases are formed, which sharply change the physical-mechanical properties of the ceramic composites; for this reason, the effect caused by the inclusion of graphene would remain unknown. Fixing graphene fragments on the alumina surface by aluminum organic compounds results in a new type of graphene-alumina complex, in which the two components are connected by C-O-Al bonds. Part of the carbon atoms in graphene oxide are in the sp3 hybrid state, so the functional groups (-OH, -COOH) are located on both sides of the graphene oxide layer. The aluminum organic compound reacts with graphene oxide at room temperature, and modified graphene oxide is obtained: R2Al-O-[graphene]–COOAlR2. The remaining Al–C bonds also react rapidly with the surface OH groups of alumina. As a result of these processes, the pressing powdery composite [Al2O3]-O-Al-O-[graphene]–COO–Al–O–[Al2O3] is obtained. For this purpose, the aluminum organic compound Al(iC4H9)3 in toluene was added in an equimolar ratio to a suspension of graphene oxide in dry toluene. The obtained suspension was placed in a flask, and the solvent was removed in a rotary evaporator under a nitrogen atmosphere. The obtained powder was characterized and used for the consolidation of alumina-based ceramic materials. Ceramic composites were obtained in a high-temperature vacuum furnace under different temperature and pressure conditions. The resulting ceramics have no open pores, and their density reaches 99.5% of the theoretical density (TD). During the work, the following devices have been used: High temperature vacuum furnace OXY-GON Industries Inc (USA), device of spark-plasma synthesis, induction furnace, Electronic Scanning Microscopes Nikon Eclipse LV 150, Optical Microscope NMM-800TRF, Planetary mill Pulverisette 7 premium line, Shimadzu Dynamic Ultra Micro Hardness Tester DUH-211S, Analysette 12 Dynasizer and others.

Keywords: graphene oxide, alumo-organic, ceramic

Procedia PDF Downloads 308
854 The Participation of Graduates and Students of Social Work in the Erasmus Program: A Case Study in the Portuguese Context – The Polytechnic of Leiria

Authors: Cezarina da Conceição Santinho Maurício, José Duque Vicente

Abstract:

Established in 1987, the Erasmus Programme is a programme for the exchange of higher education students, and its purposes are several. The mobility developed has contributed to the promotion of multiple forms of learning, the internalization of the feeling of belonging to a community, and the consolidation of cooperation between entities or universities. It also allows a European experience to be lived, with multilingualism considered one of the bases of the European project and a vehicle for achieving unity in diversity. The programme has progressed and introduced changes; Erasmus+ currently offers a wide range of opportunities for higher education, vocational education and training, school education, adult education, youth, and sport. These opportunities are open to students and other stakeholders, such as teachers. Portugal was one of the countries that readily joined this programme, which became an instrument for the internationalization of polytechnic and university higher education. Students and social work teachers have been involved in this mobility of learning and multicultural interactions. The presence and activation of this programme was made possible by Portugal's joining the European Union. This event was reflected in the field of Portuguese social work and contributed to its approach to the reality of European social work. Historically, Portuguese social work has built a close connection with the Latin American world and, in particular, with Brazil. Several examples can be identified across the different historical stages. This is the case of the post-revolution period of 1974 and the presence of the reconceptualization movement, the struggle for enrollment in the higher education circuit, the process of winning a bachelor's degree, and postgraduate training (the first doctorates in social work were carried out in Brazilian universities). This influence is also found in the scope of the authors and the theoretical references used. This study examines the participation of graduates and students of social work in the Erasmus programme. The following specific goals were outlined: to identify the host countries and universities; to investigate the extent and type of mobility undertaken; to understand the learning and experiences acquired; to identify the difficulties felt; and to capture their perspectives on social work and the contribution of this experience to their training. In the methodological field, the option fell on a qualitative methodology, with the application of semi-structured interviews to graduates and students of social work with Erasmus mobility experience. Once the graduates agreed, the interviews were recorded, transcribed, and analyzed according to the previously defined analysis categories. The findings emphasize the importance of this experience for students and graduates in informal and formal learning. The authors conclude with recommendations to reinforce this mobility, either at the individual level or as a project built for the group or collective.

Keywords: erasmus programme, graduates and students of social work, participation, social work

Procedia PDF Downloads 149
853 Collaborative Management Approach for Logistics Flow Management of Cuban Medicine Supply Chain

Authors: Ana Julia Acevedo Urquiaga, Jose A. Acevedo Suarez, Ana Julia Urquiaga Rodriguez, Neyfe Sablon Cossio

Abstract:

Despite the progress made in the fields of logistics and supply chains, the development of business models that use information efficiently to facilitate the integrated management of logistics flows between partners remains unavoidable. Collaborative management is an important tool for materializing cooperation between companies as a way to achieve supply chain efficiency and effectiveness. The first phase of this research was a comprehensive analysis of collaborative planning in Cuban companies. It is evident that they have difficulties in supply chain planning, where production, supply, and replenishment planning are independent tasks, as are logistics and distribution operations. Large inventories generate serious financial and organizational problems for entities, demanding increasing levels of working capital that cannot be financed. Problems were found in the efficient application of Information and Communication Technology in business management. The general objective of this work is to develop a methodology that allows the deployment of a planning and control system in a coordinated way across the medicine logistics system in Cuba. To achieve these objectives, several mechanisms of supply chain coordination, mathematical programming models, and other management techniques were analyzed to meet the requirements of collaborative logistics management in Cuba. One of the findings is the practical and theoretical inadequacy of the studied models for resolving the current situation of Cuban logistics system management. To contribute to the tactical-operative management of logistics, the Collaborative Logistics Flow Management Model (CLFMM) is proposed as a tool for balancing cycles, capacities, and inventories so as always to meet final customers' demands at the service level they expect. At the center of the CLFMM is the supply chain planning and control system, a single information system that acts on the process network. The development of the model is based on the empirical methods of analysis-synthesis and on case studies. Another finding is the demonstration that using a single information system to support supply chain logistics management allows the deadlines and quantities required in each process to be determined. This ensures that medications are always available to patients and there are no faults that put the population's health at risk. The simulation of planning and control with the CLFMM for medicines such as dipyrone and chlordiazepoxide, during 5 months of 2017, made it possible to take measures to adjust the logistics flow, eliminate delayed processes, and avoid shortages of the medicines studied. As a result, logistics cycle efficiency can be increased to 91%, inventory rotation would increase, and financial resources would be released.

Keywords: collaborative management, medicine logistic system, supply chain planning, tactical-operative planning

Procedia PDF Downloads 176
852 Implementation of Ecological and Energy-Efficient Building Concepts

Authors: Robert Wimmer, Soeren Eikemeier, Michael Berger, Anita Preisler

Abstract:

A relatively large percentage of energy and resource consumption occurs in the building sector. This concerns the production of building materials, the construction of buildings and also the energy consumption during the use phase. Therefore, the overall objective of this EU LIFE project “LIFE Cycle Habitation” (LIFE13 ENV/AT/000741) is to demonstrate innovative building concepts that significantly reduce CO₂ emissions, mitigate climate change and contain a minimum of grey energy over their entire life cycle. The project is being realised with the contribution of the LIFE financial instrument of the European Union. The ultimate goal is to design and build prototypes for carbon-neutral and “LIFE cycle”-oriented residential buildings and make energy-efficient settlements the standard of tomorrow in line with the EU 2020 objectives. To this end, a resource and energy-efficient building compound is being built in Böheimkirchen, Lower Austria, which includes 6 living units and a community area as well as 2 single-family houses with a total usable floor surface of approximately 740 m². Different innovative straw bale construction types (load-bearing and pre-fabricated non-load-bearing modules) together with a highly innovative energy-supply system, which is based on the maximum use of thermal energy for thermal energy services, are going to be implemented. Therefore, only renewable resources and alternative energies are used to generate thermal as well as electrical energy. This includes the use of solar energy for space heating, hot water and household appliances such as a dishwasher or washing machine, but also a cooking place for the community area operated with thermal oil as heat transfer medium on a higher temperature level. Solar collectors in combination with a biomass cogeneration unit and photovoltaic panels are used to provide thermal and electric energy for the living units according to the seasonal demand. The building concepts are optimised with the support of dynamic simulations. A particular focus is on the production and use of modular prefabricated components and building parts made of regionally available, highly energy-efficient, CO₂-storing renewable materials like straw bales. The building components will be produced collaboratively by local SMEs that are organised in an efficient way. The whole building process and results are monitored and prepared for knowledge transfer and dissemination, including trial living in the residential units to test and monitor the energy supply system and to involve stakeholders in the evaluation and dissemination of the applied technologies and building concepts. The realised building concepts should then be used as templates for a further modular extension of the settlement in a second phase.

Keywords: energy-efficiency, green architecture, renewable resources, sustainable building

Procedia PDF Downloads 149
851 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases, including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies that matched a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance. A parallel manual selection of all initially searched studies was conducted for validation. The final inclusion of studies was based on full-text evaluation, quality assessment, and meta-regression models using random effects. Sensitivity analysis and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the cluster associated with significant improvement exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster. Furthermore, BMI, weight gain, and disease duration were identified as factors that influenced cluster assignment, with patients having lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring the comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes.
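As a minimal illustration of the clustering step described above (not the authors' pipeline), the Python sketch below standardises per-patient changes in the six named biomarkers and partitions them into two K-means clusters; the data values and their generation are synthetic assumptions made only for demonstration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
markers = ["HbA1C", "FBS", "LDL", "HDL", "SBP", "VitD"]

# synthetic change-from-baseline values for 200 hypothetical patients:
# one group with a clear improvement pattern, one with little change
improved = rng.normal([-0.8, -15.0, -12.0, 4.0, -5.0, 18.0], 3.0, size=(100, 6))
unchanged = rng.normal([-0.1, -2.0, -1.0, 0.5, -1.0, 6.0], 3.0, size=(100, 6))
X = np.vstack([improved, unchanged])

# standardise the markers so no single unit dominates, then split into 2 clusters
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)

# report the mean change per marker in each cluster
for k in (0, 1):
    centre = X[labels == k].mean(axis=0)
    summary = ", ".join(f"{m} {v:+.1f}" for m, v in zip(markers, centre))
    print(f"cluster {k} (n={np.sum(labels == k)}): {summary}")
```

In a real analysis, covariates such as BMI or disease duration could then be compared across the resulting cluster labels, which is analogous to the factor comparison reported in the abstract.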

Keywords: HbA1C, T2DM, SBP, FBS

Procedia PDF Downloads 12
850 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
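The additive-statistics idea described above can be sketched as follows. This Python example (not the authors' code) aggregates the sufficient statistics of a per-window plane fit over successive 2x2 blocks and reports slope and residual variance at each window size; it uses non-overlapping blocks and a synthetic DEM for brevity, whereas the paper computes one overlapping-window result per raster point.

```python
import numpy as np

def block_sum(a):
    """Aggregate 2x2 non-overlapping blocks of an array by summation."""
    r, c = a.shape
    return a[:r - r % 2, :c - c % 2].reshape(r // 2, 2, c // 2, 2).sum(axis=(1, 3))

def fit_planes(stats):
    """Least-squares plane z ~ a*x + b*y + c per window from additive sums."""
    n, sx, sy, sz, sxx, syy, sxy, sxz, syz, szz = stats
    slope = np.zeros(n.shape)
    var = np.zeros(n.shape)
    for i in np.ndindex(n.shape):
        A = np.array([[sxx[i], sxy[i], sx[i]],
                      [sxy[i], syy[i], sy[i]],
                      [sx[i],  sy[i],  n[i]]])
        rhs = np.array([sxz[i], syz[i], sz[i]])
        a_, b_, c_ = np.linalg.solve(A, rhs)
        slope[i] = np.hypot(a_, b_)                        # gradient magnitude of the plane
        rss = szz[i] - a_ * sxz[i] - b_ * syz[i] - c_ * sz[i]
        var[i] = max(rss, 0.0) / n[i]                      # residual variance of the fit
    return slope, var

# synthetic 1 m DEM: a gentle ramp plus noise, standing in for real data
rng = np.random.default_rng(0)
nrow = ncol = 64
y, x = np.mgrid[0:nrow, 0:ncol].astype(float)
z = 0.05 * x + 0.02 * y + rng.normal(0.0, 0.1, (nrow, ncol))

# level-0 sufficient statistics, one point per raster cell
stats = [np.ones_like(z), x, y, z, x * x, y * y, x * y, x * z, y * z, z * z]

window = 1
for _ in range(5):                       # each pass doubles the window size
    stats = [block_sum(s) for s in stats]
    window *= 2
    slope, var = fit_planes(stats)
    print(f"window {window:2d}x{window:2d}: mean slope {slope.mean():.3f}, "
          f"mean residual variance {var.mean():.4f}")
```

Because only sums are carried from one level to the next, doubling the window never requires revisiting the original elevation values, which is the property that keeps the multi-scale pass computationally cheap.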

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 129
849 Sumac Sprouts: From in Vitro Seed Germination to Chemical Characterization

Authors: Leto Leandra, Guaitini Caterina, Agosti Anna, Del Vecchio Lorenzo, Guarrasi Valeria, Cirlini Martina, Chiancone Benedetta

Abstract:

To the best of our knowledge, this study represents the first attempt to investigate the in vitro germination response of Rhus coriaria L. and its sprout chemical characterization. Rhus coriaria L., a species belonging to the Anacardiaceae family, is commonly called "sumac" and is cultivated, in different countries of the Mediterranean and the Middle East regions, to produce a spice with a sour taste, obtained from its dried and ground fruits. Moreover, since ancient times, many beneficial properties have been attributed to this plant, which has been used, in the traditional medicine of several Asian countries, against various diseases, including liver and intestinal pathologies, ulcers and various inflammatory states. In the recent past, sumac was cultivated in the Southern regions of Italy to treat leather, but its cultivation was abandoned, and currently, sumac plants grow spontaneously in marginal areas. Recently, in Italy, the interest in this species has been growing again, thanks to its numerous properties; thus, it becomes imperative to deepen the knowledge of this plant. In this study, in order to set up an efficient in vitro seed germination protocol, sumac seeds collected from spontaneous plants grown in Sicily, an island in the South of Italy, were, firstly, subjected to different treatments, scarification (mechanical, physical and chemical), cold stratification and imbibition, to break their physical and physiological dormancy; then, treated seeds were in vitro cultured on media with different gibberellic acid (GA3) concentrations. Results showed that, without any treatment, only 5% of in vitro sown seeds germinated, while the germination percentage increased up to 19% after the mechanical scarification. A further significant improvement of germination percentages was recorded after the physical scarification, with (40.5%) or without (36.5%) 8 weeks of cold stratification, especially when seeds were sown on gibberellin-enriched culture media. The in vitro-derived sumac sprouts, at different developmental stages, were chemically characterized, in terms of polyphenol and tannin content, as well as for their antioxidant activity, to evaluate this matrix as a potential novel food or as a source of bioactive compounds. Results evidenced how more developed sumac sprouts and, above all, their leaves are a rich source of polyphenols (78.4 GAE/g SS) and tannins (21.9 mg GAE/g SS), with marked antioxidant activity. The outcomes of this study will support the nursery sector and sumac growers in obtaining a higher number of plants in a shorter time; moreover, the sprout chemical characterization will contribute to the process of considering this matrix as a new source of bioactive compounds and tannins to be used in food and non-food sectors.

Keywords: bioactive compounds, germination pre-treatments, rhus coriaria l., tissue culture

Procedia PDF Downloads 104
847 Problem Solving in Mathematics Education: A Case Study of Nigerian Secondary School Mathematics Teachers’ Conceptions in Relation to Classroom Instruction

Authors: Carol Okigbo

Abstract:

Mathematical problem solving has long been accorded an important place in mathematics curricula at every education level in both advanced and emerging economies. Its classroom approaches have varied, such as teaching for problem-solving, teaching about problem-solving, and teaching mathematics through problem-solving. It requires engaging in tasks for which the solution methods are not evident, making sense of problems and persevering in solving them by exhibiting processes, strategies, appropriate attitude, and adequate exposure. Teachers play important roles in helping students acquire competency in problem-solving; thus, they are expected to be good problem-solvers and have proper conceptions of problem-solving. Studies show that teachers' conceptions influence their decisions about what to teach and how to teach. Therefore, how teachers view their roles in teaching problem-solving will depend on their pedagogical conceptions of problem-solving. If teaching problem-solving is a major component of secondary school mathematics instruction, as recommended by researchers and mathematics educators, then it is necessary to establish teachers' conceptions, what they do, and how they approach problem-solving. This study is designed to determine secondary school teachers' conceptions regarding mathematical problem solving, its current situation, how teachers' conceptions relate to their demographics, as well as the interaction patterns in the mathematics classroom. There have been many studies of mathematics problem solving, some of which addressed teachers' conceptions using single-method approaches, thereby presenting only limited views of this important phenomenon. To address the problem more holistically, this study adopted an integrated mixed methods approach which involved a quantitative survey, qualitative analysis of open-ended responses, and ethnographic observations of teachers in class. Data for the analysis came from a random sample of 327 secondary school mathematics teachers in two Nigerian states - Anambra State and Enugu State - who completed a 45-item questionnaire. Ten of the items elicited demographic information, 11 items were open-ended questions, and 25 items were Likert-type questions. Of the 327 teachers who responded to the questionnaires, 37 were randomly selected and observed in their classes. Data analysis using ANOVA, t-tests, chi-square tests, and open coding showed that the teachers had different conceptions about problem-solving, which fall into three main themes: practice on exercises and word application problems, a process of solving mathematical problems, and a way of teaching mathematics. Teachers reported that no period is set aside for problem-solving; typically, teachers solve problems on the board, teach problem-solving strategies, and allow students time to struggle with problems on their own. The results show a significant difference between male and female teachers' conceptions of problem solving, a significant relationship between teachers' conceptions and academic qualifications, and that teachers who have spent ten years or more teaching mathematics differed significantly from the group with seven to nine years of experience in terms of their conceptions of problem-solving.

Keywords: conceptions, education, mathematics, problem solving, teacher

Procedia PDF Downloads 76
846 Mechanism of Veneer Colouring for Production of Multilaminar Veneer from Plantation-Grown Eucalyptus Globulus

Authors: Ngoc Nguyen

Abstract:

Large plantations of Eucalyptus globulus have been established and grown to produce pulpwood. This resource is not suitable for the production of decorative products, principally due to low grades of wood and a "dull" appearance, but many trials have already been undertaken for the production of veneer and veneer-based engineered wood products, such as plywood and laminated veneer lumber (LVL). The manufacture of veneer-based products has recently been identified as an unprecedented opportunity to promote higher-value utilisation of plantation resources. However, many uncertainties remain regarding the impacts of the inferior wood quality of young plantation trees on product recovery and value, and with respect to optimal processing techniques. Moreover, the quality of veneer and veneer-based products is far from optimal, as the trees are young and have small diameters, and the veneers show significant colour variation, which affects the added value of the final products. Developing production methods that enhance the appearance of low-quality veneer would provide great potential for the production of high-value wood products such as furniture, joinery, flooring and other appearance products. One of the methods of enhancing the appearance of low-quality veneer, developed in Italy, involves the production of multilaminar veneer, also named "reconstructed veneer". An important stage of multilaminar production is colouring the veneer, which can be achieved by dyeing the veneer with dyes of different colours depending on the type of appearance products, their design and market demand. Although veneer dyeing technology is well advanced in Italy, it has been focused on poplar veneer from plantations whose wood is characterized by low density, even colour, a small number of defects and high permeability. Conversely, the majority of plantation eucalypts have medium to high density, many defects, uneven colour and low permeability. Therefore, a detailed study is required to develop dyeing methods suitable for colouring eucalypt veneers. A brown reactive dye is used for the veneer colouring process. Veneers from sapwood and heartwood at two moisture content levels are used to conduct the colouring experiments: green veneer and veneer dried to 12% MC. Prior to dyeing, all samples are treated. Both soaking (dipping) and vacuum-pressure methods are used in the study to compare the results and select the most efficient method for veneer dyeing. To date, colour measurements using the CIELAB colour system have shown significant differences in the colour of the undyed veneers produced from heartwood. According to these measurements, the colour became moderately darker with increasing sodium chloride concentration, compared to control samples. It is difficult to identify a suitable dye solution at this stage, as variables such as dye concentration, dyeing temperature and dyeing time have not yet been investigated. The dye will be used with and without UV absorbent once all trials have been completed, using the optimal parameters for colouring veneers.
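For readers unfamiliar with CIELAB comparisons, the following Python sketch shows how a colour difference between a control and a dyed veneer sample could be quantified, assuming the common CIE76 delta-E metric; the L*a*b* values are invented for illustration and are not the study's measurements.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

# hypothetical readings (not the study's data): an undyed heartwood control
# and the same veneer after treatment with a brown reactive dye
control = (62.0, 9.5, 21.0)
dyed = (48.0, 12.0, 24.5)

print(f"delta E (CIE76) = {delta_e_cie76(control, dyed):.1f}")  # larger value = more visible difference
```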

Keywords: Eucalyptus globulus, veneer colouring/dyeing, multilaminar veneer, reactive dye

Procedia PDF Downloads 350
845 Probing Mechanical Mechanism of Three-Hinge Formation on a Growing Brain: A Numerical and Experimental Study

Authors: Mir Jalil Razavi, Tianming Liu, Xianqiao Wang

Abstract:

Cortical folding, characterized by convex gyri and concave sulci, has an intrinsic relationship to the brain's functional organization. Understanding the mechanism of the brain's convoluted patterns can provide useful clues into normal and pathological brain function. During development, the cerebral cortex experiences a noticeable expansion in volume and surface area accompanied by tremendous tissue folding, which may be attributed to many possible factors. Despite decades of endeavors, the fundamental mechanism and key regulators of this crucial process remain incompletely understood. Therefore, to play even a small role in unraveling the mystery of brain folding, we present a mechanical model to find the mechanism of 3-hinge formation in a growing brain, which has not been addressed before. A 3-hinge is defined as a gyral region where three gyral crests (hinge-lines) join. How and why the brain prefers to develop 3-hinges has not been well explained. Therefore, we offer a theoretical and computational explanation of the mechanism of 3-hinge formation in a growing brain and validate it with experimental observations. In the theoretical approach, the dynamic behavior of brain tissue is examined and described with the aid of a large-strain, nonlinear constitutive model. The derived constitutive model is used in the computational model to define the material behavior. Since the theoretical approach cannot predict the evolution of complex cortical convolution after instability, nonlinear finite element models are employed to study 3-hinge formation and the secondary morphological folds of the developing brain. Three-dimensional (3D) finite element analyses of a multi-layer soft tissue model that mimics a small piece of the brain are performed to investigate the fundamental mechanism of consistent hinge formation in cortical folding. Results show that after a certain amount of cortical growth, the mechanical model becomes unstable and then, through the formation of creases, enters a new configuration with lower strain energy. With further growth of the model, the shallow creases that have formed develop into convoluted patterns and then into 3-hinge patterns. Simulation results related to 3-hinges in the models show good agreement with experimental observations from macaque, chimpanzee and human brain images. These results have great potential to reveal fundamental principles of brain architecture and to produce a unified theoretical framework that convincingly explains the intrinsic relationship between cortical folding and 3-hinge formation. This fundamental understanding of the intrinsic relationship between cortical folding and 3-hinge formation would potentially shed new light on the diagnosis of many brain disorders such as schizophrenia, autism, lissencephaly and polymicrogyria.

Keywords: brain, cortical folding, finite element, three hinge

Procedia PDF Downloads 236