Search results for: optimal search
171 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes expressed as resource description framework (RDF) graphs, called "shapes graphs". These shapes graphs validate other RDF graphs, which are called "data graphs". The structural features of SHACL permit the generation of a variety of conditions to evaluate string-matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL consists of two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes the shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as the linking of different datasets is continually being enhanced. The conventional validation methods are the semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a pressing need for a new methodology that covers all validation aspects of linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both the source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shapes graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is typed as xsd:dateTime and linked to the Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, by inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for those various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
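A minimal sketch of the data-type check described above is given below, assuming the pySHACL and rdflib Python libraries are available; the prefixes, property path, and data file name are illustrative placeholders rather than the exact shapes used in the study.

```python
# Sketch of a SHACL data-type check with pySHACL. The shape targets Person
# instances and requires the birthdate property to be typed as xsd:dateTime,
# mirroring the example in the abstract. Namespaces and the property path
# are illustrative assumptions, not the study's actual shapes graph.
from rdflib import Graph
from pyshacl import validate

SHAPES_TTL = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/shapes/> .

ex:BirthDateShape
    a sh:NodeShape ;
    sh:targetClass crm:E21_Person ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;   # data-type constraint
        sh:maxCount 1 ;              # at most one birth date per person
    ] .
"""

shapes_graph = Graph().parse(data=SHAPES_TTL, format="turtle")
data_graph = Graph().parse("cidoc_crm_dataset.ttl", format="turtle")  # hypothetical file

conforms, report_graph, report_text = validate(
    data_graph,
    shacl_graph=shapes_graph,
    inference="rdfs",   # optional RDFS inference before validation
)
print("Conforms:", conforms)
print(report_text)
```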
Procedia PDF Downloads 253
170 Reworking of the Anomalies in the Discounted Utility Model as a Combination of Cognitive Bias and Decrease in Impatience: Decision Making in Relation to Bounded Rationality and Emotional Factors in Intertemporal Choices
Authors: Roberta Martino, Viviana Ventre
Abstract:
Every day we face choices whose consequences are deferred in time. These are intertemporal choices, and they play an important role in the social, economic, and financial world. The Discounted Utility Model is the reference mathematical model for calculating the utility of intertemporal prospects. The discount rate is the main element of the model, as it describes how the individual perceives the indeterminacy of subsequent periods. Empirical evidence has shown a discrepancy between the behavior expected from the predictions of the model and the actual choices made by decision makers. In particular, the term temporal inconsistency indicates those choices that do not remain optimal with the passage of time. This phenomenon has been described with hyperbolic models of the discount rate, which, unlike the exponential form assumed by the discounted utility model, is not constant over time. This paper explores the problem of inconsistency by tracing the decision-making process through the concept of impatience. The degree of impatience and the degree of decrease of impatience are two parameters that make it possible to quantify the weight of emotional factors and cognitive limitations during the evaluation and selection of alternatives. Although the theory assumes perfectly rational decision makers, behavioral finance and cognitive psychology have made it possible to understand that cognitive distortions and emotional influences have an inevitable impact on the decision-making process. The degree to which impatience is diminished is the focus of the first part of the study. By comparing consistent and inconsistent preferences over time, it was possible to verify that some anomalies in the discounted utility model result from the combination of cognitive bias and emotional factors. In particular: the delay effect and the interval effect are compared through the concept of misperception of time; starting from psychological considerations, a criterion is proposed to identify the causes of the magnitude effect that considers the differences in outcomes rather than their ratio; the sign effect is analyzed by integrating the psychological aspects of loss aversion provided by Prospect Theory into the evaluation of prospects with negative outcomes. An experiment confirms three findings: the greatest variation in the degree of decrease in impatience corresponds to shorter intervals close to the present; the greatest variation in the degree of impatience occurs for outcomes of lower magnitude; and the variation in the degree of impatience is greatest for negative outcomes. The experimental phase was implemented through the construction of the hyperbolic factor from questionnaires designed for each anomaly. This work formalizes the underlying causes of the discrepancy between the discounted utility model and the empirical evidence of preference reversal.
Keywords: decreasing impatience, discount utility model, hyperbolic discount, hyperbolic factor, impatience
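To make the contrast concrete, the sketch below compares exponential and hyperbolic discount functions with a crude impatience measure over near and distant intervals; the parameter values and the index itself are illustrative assumptions, not the measures constructed in the study.

```python
# Illustrative comparison of exponential vs. hyperbolic discounting and a
# simple check for decreasing impatience. Parameter values and the impatience
# index are assumptions for demonstration only.
import numpy as np

def exponential_discount(t, r=0.10):
    """Discounted Utility Model: constant discount rate r."""
    return np.exp(-r * t)

def hyperbolic_discount(t, k=0.10):
    """Mazur-style hyperbolic discounting: 1 / (1 + k t)."""
    return 1.0 / (1.0 + k * t)

def impatience(discount, t0, t1):
    """Proportion of value lost between t0 and t1 (one crude impatience measure)."""
    return 1.0 - discount(t1) / discount(t0)

for name, d in [("exponential", exponential_discount), ("hyperbolic", hyperbolic_discount)]:
    near = impatience(d, 0, 1)    # interval close to the present
    far = impatience(d, 10, 11)   # same-length interval far in the future
    print(f"{name:12s}  near={near:.3f}  far={far:.3f}")

# Exponential discounting gives identical near/far impatience (time consistency);
# the hyperbolic function loses proportionally more value in the near interval,
# i.e. impatience decreases with delay, which is what produces preference reversal.
```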
Procedia PDF Downloads 103
169 Designing an Operational Control System for the Continuous Cycle of Industrial Technological Processes Using Fuzzy Logic
Authors: Teimuraz Manjapharashvili, Ketevani Manjaparashvili
Abstract:
Fuzzy logic is a modeling method for complex or ill-defined systems and is a relatively new mathematical approach. Its basis is to consider overlapping cases of parameter values and to define operations to manipulate these cases. Fuzzy logic can successfully be used to create automatic control systems or appropriate advisory systems. Fuzzy logic techniques in various operational control technologies have grown rapidly in the last few years. Fuzzy logic is used in many areas of human technological activity. In recent years, fuzzy logic has proven its great potential, especially in the automation of industrial process control, where it allows a control design to be formed based on the experience of experts and the results of experiments. The engineering of chemical technological processes uses fuzzy logic in optimal management, and it is also used in process control, including the operational control of continuous-cycle industrial chemical processes, where special features appear due to the continuous cycle and correct management acquires special importance. This paper discusses how intelligent systems can be developed, in particular, how fuzzy logic can be used to build knowledge-based expert systems in chemical process engineering. The implemented projects reveal that the use of fuzzy logic in technological process control has already given us better solutions than standard control techniques. Fuzzy logic makes it possible to develop an advisory system for decision-making based on the historical experience of the managing operator and experienced experts. The present paper deals with operational control and management systems of continuous-cycle chemical technological processes, including advisory systems. Because of the continuous cycle, such systems have many features compared to the operational control of other chemical technological processes. Among them are a greater risk of transitioning to emergency mode; the need to return from emergency mode to normal mode very quickly, since the technological process cannot be stopped and defective products are released during this period (i.e., a loss is incurred); and, accordingly, the need for a highly qualified operator managing the process. For these reasons, operational control systems of continuous-cycle chemical technological processes have been specifically discussed, as they are distinct systems. The special features of such systems in control and management were brought out, which determine how control and management systems are constructed. To verify the findings, the development of an advisory decision-making information system for the operational control of a lime kiln using fuzzy logic, based on the creation of a relevant expert-targeted knowledge base, was discussed. The control system has been implemented in a real lime production plant with a lime-burning kiln, which has shown that suitable and intelligent automation improves operational management, reduces the risk of releasing defective products, and, therefore, reduces costs. The advisory system was also successfully used in the plant both for the improvement of operational management and, when necessary, for the training of new operators, given the lack of an appropriate training institution.
Keywords: chemical process control systems, continuous cycle industrial technological processes, fuzzy logic, lime kiln
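A minimal sketch of how one Mamdani-style advisory rule could be expressed for a lime kiln is given below; the variables, ranges, membership functions, rule, and defuzzification are hypothetical illustrations, not the expert knowledge base of the plant described here.

```python
# Minimal Mamdani-style sketch for one hypothetical advisory rule of a lime
# kiln controller: IF kiln temperature is LOW AND CO2 in flue gas is HIGH
# THEN increase fuel feed. All numbers below are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def advise(temperature_c, co2_percent):
    temp_low = tri(temperature_c, 850, 950, 1050)   # "temperature is LOW"
    co2_high = tri(co2_percent, 20, 30, 40)         # "CO2 is HIGH"
    # Mamdani AND = min; rule strength is the degree to which we should act.
    fire = min(temp_low, co2_high)
    # Crude defuzzification: scale rule strength to a fuel-feed increase in %.
    return fire * 10.0

print(advise(900, 28))   # moderate recommendation to raise fuel feed
print(advise(1040, 22))  # weak recommendation
```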
Procedia PDF Downloads 28
168 The Use of Cross-cultural Approaches (CCAs) in Psychotherapy in Addressing Mental Health Issues Amongst Women of Ethnic Minority
Authors: Adaku Thelma Olatise
Abstract:
Mental health disparities among women from diverse ethnic, cultural, and religious backgrounds remain a pressing concern, particularly as current psychotherapeutic models often fail to address the unique challenges these groups face. This is of particular concern since epidemiological studies across various countries and cultures consistently demonstrate higher prevalence rates of common mental disorders amongst these groups of women because of a lack of access to culturally oriented psychotherapeutic services. This literature review aims to examine how CCAs in psychotherapy can address the specific ethnic, cultural, and religious challenges women encounter in accessing mental health care. A search of relevant articles was conducted through PsycARTICLES and PubMed databases, using terms such as ‘mental health’, ‘women’, ‘culture’, and ‘ethnic minorities’. Supplementary searches on Google Scholar were also performed to capture literature not covered by traditional databases. While the importance of cross-cultural approaches in psychotherapy has become more apparent because people from diverse ethnic backgrounds inevitably perceive the world through different lenses, influencing their interpretations of human behavior and norms, there is a notable gap in the literature in understanding the influences of using of CCAs in psychotherapy amongst women of an ethnic minority. This gap not only reflects a poor understanding of the complex stressors faced by these women—such as familial, communal, and societal expectations—but also highlights the lack of support and culturally adapted interventions available to them. Even though scholars have posited that aligning treatment approaches with patients' cultural backgrounds is important to enhance therapeutic effectiveness, and the acknowledgment of culture is crucial in psychotherapy theory and practice. As well as the increasing global focus on psychotherapy applications that integrate non-Western practices, such as spiritual healing and community-based interventions, the adaptation of these approaches in mainstream mental health care has remained limited. This review found that the expectations and experiences of ethnic minority women were heavily influenced by family and community pressures. However, there were limited evidence-based, culturally oriented psychotherapeutic interventions tailored to ethnic minority women. This gap extends to inadequate representation of minority groups in clinical research, as well as a lack of culturally validated mental health outcome measures. Furthermore, studies have shown that psychotherapeutic models have largely been Western-oriented and Euro-centric because of socially constructed hierarchies. The origin of psychology from the Western world has predominantly reflected Western cultural traditions, shaped by historical, linguistic, and sociopolitical influences. These factors have led to a lack of recognition of therapeutic approaches from minority ethnic groups and the biases that emanate from hegemonic cultural beliefs and power dynamics influence the decisions about which psychotherapeutic modalities to integrate and practice. Therefore, this plethora of factors adds to the challenges women from ethnically and culturally diverse backgrounds face in accessing mental health services at the individual, familial, community, and societal levels. 
In conclusion, a cross-cultural approach is urgently needed within psychotherapy to address these challenges, ensuring that treatment frameworks are both culturally sensitive and gender responsive. Only by considering the lived experiences of minority women, particularly in relation to their cultural and religious contexts, can mental health services provide the appropriate care necessary to support their well-being.Keywords: mental health, women, culture, ethnicity
Procedia PDF Downloads 24
167 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms
Authors: Hai L. Tran
Abstract:
When covering events and issues, the news media often employ both personal accounts and facts and figures. However, the process of using numbers and narratives in the newsroom is mostly a matter of trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system requires the processing of logical evidence through effortful analytical cognitions, which are affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts impact audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects, and the conditions under which the use of anecdotal evidence and/or statistical evidence enhances or undermines information processing. In addition to theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.
Keywords: data, narrative, number, anecdote, storytelling, news
Procedia PDF Downloads 79
166 Peripheral Neuropathy after Locoregional Anesthesia
Authors: Dalila Chaid, Bennameur Fedilli, Mohammed Amine Bellelou
Abstract:
The study focuses on the experience of lower-limb amputees, who face both physical and psychological challenges due to their disability. Chronic neuropathic pain and various types of limb pain are common in these patients. They often require orthopaedic interventions for issues such as dressings, infection, ulceration, and bone-related problems. Research Aim: The aim of this study is to determine the most suitable anaesthetic technique for lower-limb amputees, which can provide them with the greatest comfort and prolonged analgesia. The study also aims to demonstrate the effectiveness and cost-effectiveness of ultrasound-guided local regional anaesthesia (LRA) in this patient population. Methodology: The study is an observational analytical study conducted over a period of eight years, from 2010 to 2018. It includes a total of 955 cases of revisions performed on lower limb stumps. The parameters analyzed in this study include the effectiveness of the block and the use of sedation, the duration of the block, the post-operative visual analog scale (VAS) scores, and patient comfort. Findings: The study findings highlight the benefits of ultrasound-guided LRA in providing comfort by optimizing post-operative analgesia, which can contribute to psychological and bodily repair in lower-limb amputees. Additionally, the study emphasizes the use of alpha2 agonist adjuvants with sedative and analgesic properties, long-acting local anaesthetics, and larger volumes for better outcomes. Theoretical Importance: This study contributes to the existing knowledge by emphasizing the importance of choosing an appropriate anaesthetic technique for lower-limb amputees. It highlights the potential of ultrasound-guided LRA and the use of specific adjuvants and local anaesthetics in improving post-operative analgesia and overall patient outcomes. Data Collection and Analysis Procedures: Data for this study were collected through the analysis of medical records and relevant documentation related to the 955 cases included in the study. The effectiveness of the anaesthetic technique, duration of the block, post-operative pain scores, and patient comfort were analyzed using statistical methods. Question Addressed: The study addresses the question of which anaesthetic technique would be most suitable for lower-limb amputees to provide them with optimal comfort and prolonged analgesia. Conclusion: The study concludes that ultrasound-guided LRA, along with the use of alpha2 agonist adjuvants, long-acting local anaesthetics, and larger volumes, can be an effective approach in providing comfort and improving post-operative analgesia for lower-limb amputees. This technique can potentially contribute to the psychological and bodily repair of these patients. The findings of this study have implications for clinical practice in the management of lower-limb amputees, highlighting the importance of personalized anaesthetic approaches for better outcomes.Keywords: neuropathic pain, ultrasound-guided peripheral nerve block, DN4 quiz, EMG
Procedia PDF Downloads 78
165 The Role of Movement Quality after Osgood-Schlatter Disease in an Amateur Football Player: A Case Study
Authors: D. Pogliana, A. Maso, N. Milani, D. Panzin, S. Rivaroli, J. Konin
Abstract:
This case aims to identify the role of movement quality during the final stage of return to sport (RTS) in a 13-year-old male amateur football player after passing the acute phase of bilateral Osgood-Schlatter disease (OSD). The patient, one year after passing the acute phase of OSD with abstention from physical activity, reported bilateral anterior knee pain at the beginning of football activity. Interventions: After assessment by the orthopedist, who recommended physiotherapy sessions for the correction of motor patterns and isometric reinforcement of the quadriceps muscles, the rehabilitation intervention was developed over 7 weeks through 14 sessions of neuro-motor training (NMT), at a frequency of two weekly sessions, and six sessions of muscle strengthening, at a frequency of one weekly session. The NMT sessions were carried out through free-body exercises (or with overloads) with visual bio-feedback, with the help of two cameras (one with an anterior view and one with a lateral view of the subject) and a large touch screen. The aim of these NMT sessions was to modify the dysfunctional motor patterns evaluated by the 2D motion analysis test. The test was carried out at the beginning and at the end of the rehabilitation course and included five movements: single-leg squat (SLS), drop jump (DJ), single-leg hop (SLH), lateral shuffle (LS), and change of direction (COD). Each of these movements was evaluated through video analysis of dynamic knee valgus, pelvic tilt, trunk control, shock absorption, and motor strategy. A free image analysis software (Kinovea) was then used to calculate scores. Results: The baseline assessment of the subject showed a total score of 59% on the right limb and 64% on the left limb (considering an optimal score above 85%), with large deficits in shock absorption capabilities, the presence of dynamic knee valgus, and dysfunctional motor strategies defined as "quadriceps dominant." After six weeks of training, the subject achieved a total score of 80% on the right limb and 86% on the left limb, with significant improvements in shock absorption capabilities, in dynamic knee valgus, and in the use of more hip-oriented motor strategies on both lower limbs. The improvements in dynamic knee valgus, greater hip-oriented motor strategies, and improved shock absorption achieved through six weeks of the NMT program can help a teenage amateur football player manage anterior knee pain during sports activity. In conclusion, NMT was a good choice to help a 13-year-old male amateur football player return to performance without pain after OSD, and it can also be used with similar athletes in other team sports.
Keywords: movement analysis, neuro-motor training, knee pain, movement strategies
Procedia PDF Downloads 133
164 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model
Authors: Danjuma Bawa
Abstract:
This paper aimed at exploring the capabilities of the Location-Allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. The paper was designed to provide a blueprint to the Nigerian government and other donor agencies, especially the Fertilizer Distribution Initiative (FDI) by the federal government, for the revitalization of the terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcased how the Location-Allocation Model (L-AM), alongside Central Place Theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, exploit their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research which largely used secondary data such as the spatial location and distribution of settlements, population figures of settlements, the network of roads linking them, and other landform features. These were sourced from government ministries and an open source consortium. GIS was used as a tool for processing and analyzing such spatial features within the dictum of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could stand as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap™. The Network Analyst extension of ArcGIS™ was used in conducting the location-allocation analysis for apportioning groups of settlements around such service centers within a given threshold distance. Most of the techniques and models ever used by utility planners have been centered on straight-line distance to settlements using Euclidean distances. Such models neglect impedance cutoffs and the routing capabilities of networks. CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa state. Four (4) existing depots in the study area were identified. 20 more depots in 20 villages were proposed using suitability analysis. Out of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers, respectively, within a 2 km impedance cutoff. This study complements the strides by the federal government of Nigeria by providing a blueprint for ensuring proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. At the same time, it will help in boosting agricultural activities, thereby lowering food shortages and raising per capita income, as espoused by the government.
Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics
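For illustration, a small sketch of the allocation step, assigning each settlement to the nearest service centre along the road network within a 2 km impedance cutoff, is given below using NetworkX shortest paths; the toy graph, node names, and edge lengths are hypothetical stand-ins for the GIS road network used in the study.

```python
# Sketch of the allocation step described above: settlements are assigned to
# the closest service centre along the road network, within a 2 km impedance
# cutoff. The toy graph below stands in for the road network built in GIS.
import networkx as nx

roads = nx.Graph()
roads.add_weighted_edges_from(
    [("depot_A", "village_1", 0.8), ("village_1", "village_2", 0.9),
     ("depot_B", "village_3", 1.2), ("village_3", "village_4", 1.5),
     ("village_2", "village_3", 2.5)],
    weight="length_km",
)
depots = ["depot_A", "depot_B"]
settlements = ["village_1", "village_2", "village_3", "village_4"]

allocation = {}
for depot in depots:
    # All nodes reachable from this depot within the 2 km network cutoff.
    dist = nx.single_source_dijkstra_path_length(
        roads, depot, cutoff=2.0, weight="length_km")
    for node, d in dist.items():
        if node in settlements and d < allocation.get(node, (None, float("inf")))[1]:
            allocation[node] = (depot, d)

unserved = [s for s in settlements if s not in allocation]
print(allocation)                       # settlement -> (nearest depot, km)
print("outside 2 km cutoff:", unserved) # settlements needing a new depot
```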
Procedia PDF Downloads 147
163 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer
Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu
Abstract:
Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer with a poor prognosis, and radiotherapy is one of the main treatment methods. However, due to the obvious resistance of tumor cells to radiotherapy, high dose of ionizing radiation is required during radiotherapy, which causes serious damage to normal tissues near the tumor. Therefore, how to improve radiotherapy resistance and enhance the specific killing of tumor cells by radiation is a hot issue that needs to be solved in clinic. Recent studies have shown that silver-based nanoparticles have strong radiosensitization, and silver nanoclusters (AgNCs) also provide a broad prospect for tumor targeted radiosensitization therapy due to their ultra-small size, low toxicity or non-toxicity, self-fluorescence and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that can specifically bind to mucin-1 highly expressed on the membrane surface of TNBC 4T1 cells, and can be used as a highly efficient tumor targeting molecule. In this study, AgNCs were synthesized by DNA template based on 5TR1 aptamer (NC-T5-5TR1), and its role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell activity was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and verify its self-fluorescence characteristics. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was evaluated by inductively coupled plasma mass spectrometry. The radiation sensitization effect of NC-T5-5TR1 was evaluated through cell cloning and in vivo anti-tumor experiments. Annexin V-FITC/PI double staining flow cytometry was utilized to detect the impact of nanomaterials combined with radiotherapy on apoptosis. The results demonstrated that the particle size of NC-T5-5TR1 is about 2 nm, and the UV-visible absorption spectrum detection verifies the successful construction of NC-T5-5TR1, and it shows good dispersion. NC-T5-5TR1 significantly inhibited the activity of 4T1 cells and effectively targeted and fluoresced within 4T1 cells. The uptake of NC-T5-5TR1 reached its peak at 3 h in the tumor area. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiation sensitization, and combined radiotherapy significantly inhibited the activity of 4T1 cells and tumor growth in 4T1-bearing mice. The apoptosis level of NC-T5-5TR1 combined with radiation was significantly increased. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiation sensitizer for TNBC.Keywords: 5TR1 aptamer, silver nanoclusters, radio sensitization, triple-negative breast cancer
Procedia PDF Downloads 60
162 Evaluation of the Role of Advocacy and the Quality of Care in Reducing Health Inequalities for People with Autism, Intellectual and Developmental Disabilities at Sheffield Teaching Hospitals
Authors: Jonathan Sahu, Jill Aylott
Abstract:
Individuals with Autism, Intellectual and Developmental disabilities (AIDD) are one of the most vulnerable groups in society, hampered not only by their own limitations to understand and interact with the wider society, but also societal limitations in perception and understanding. Communication to express their needs and wishes is fundamental to enable such individuals to live and prosper in society. This research project was designed as an organisational case study, in a large secondary health care hospital within the National Health Service (NHS), to assess the quality of care provided to people with AIDD and to review the role of advocacy to reduce health inequalities in these individuals. Methods: The research methodology adopted was as an “insider researcher”. Data collection included both quantitative and qualitative data i.e. a mixed method approach. A semi-structured interview schedule was designed and used to obtain qualitative and quantitative primary data from a wide range of interdisciplinary frontline health care workers to assess their understanding and awareness of systems, processes and evidence based practice to offer a quality service to people with AIDD. Secondary data were obtained from sources within the organisation, in keeping with “Case Study” as a primary method, and organisational performance data were then compared against national benchmarking standards. Further data sources were accessed to help evaluate the effectiveness of different types of advocacy that were present in the organisation. This was gauged by measures of user and carer experience in the form of retrospective survey analysis, incidents and complaints. Results: Secondary data demonstrate near compliance of the Organisation with the current national benchmarking standard (Monitor Compliance Framework). However, primary data demonstrate poor knowledge of the Mental Capacity Act 2005, poor knowledge of organisational systems, processes and evidence based practice applied for people with AIDD. In addition there was poor knowledge and awareness of frontline health care workers of advocacy and advocacy schemes for this group. Conclusions: A significant amount of work needs to be undertaken to improve the quality of care delivered to individuals with AIDD. An operational strategy promoting the widespread dissemination of information may not be the best approach to deliver quality care and optimal patient experience and patient advocacy. In addition, a more robust set of standards, with appropriate metrics, needs to be developed to assess organisational performance which will stand the test of professional and public scrutiny.Keywords: advocacy, autism, health inequalities, intellectual developmental disabilities, quality of care
Procedia PDF Downloads 217
161 Case Report of a Secretory Carcinoma of the Salivary Gland: Clinical Management Following High-Grade Transformation
Authors: Wissam Saliba, Mandy Nicholson
Abstract:
Secretory carcinoma (SC) is a rare type of salivary gland cancer. It was first recognized as a distinct type of malignancy in 2010 and was initially termed "mammary analogue secretory carcinoma" because of similarities with secretory breast cancer. The name was later changed to SC. Most SCs originate in the parotid glands, and most harbour a rare gene fusion: ETV6-NTRK3. This fusion is rare in common cancers and common in rare cancers; it is present in most secretory carcinomas. Disease outcomes for SC are usually described as favourable, as many cases of SC are low grade (LG) and cancer growth is slow. In early stages, localized therapy is usually indicated (surgery and/or radiation). Despite a favourable prognosis, a subset of cases can be much more aggressive. These cases tend to be of high grade (HG). HG cases are associated with a poorer prognosis. Management of such cases can be challenging due to limited evidence for effective systemic therapy options. This case report describes the clinical management of a 46-year-old male patient with a unique case of SC. He was initially diagnosed with a low/intermediate grade carcinoma of the left parotid gland in 2009; he was treated with surgery and adjuvant radiation. Surgical pathology favoured primary salivary adenocarcinoma, and 2 lymph nodes were positive for malignancy. SC was not yet recognized as a distinct type of cancer at the time of diagnosis, and the pathology report reflected this gap by stating that the specimen lacked features of the defined types of salivary carcinoma. Slow-growing pulmonary nodules were identified in 2017. In 2020, approximately 11 years after the initial diagnosis, the patient presented with malignant pleural effusion. Pathology from a pleural biopsy was consistent with metastatic poorly differentiated cancer of likely parotid origin, likely mammary analogue secretory carcinoma. The specimen was sent for Next Generation Sequencing (NGS); ETV6-NTRK3 gene fusion was confirmed, and systemic therapy was initiated. One cycle of carboplatin/paclitaxel was given in June 2020. He was switched to Larotrectinib (an NTRK inhibitor (NTRKi)) later that month. Larotrectinib continued for approximately 9 months, with discontinuation in March 2021 due to disease progression. A second-generation NTRKi (Selitrectinib) was accessed and prescribed through a single patient study. Selitrectinib was well tolerated. The patient experienced a complete radiological response within approximately 4 months. Disease progression occurred once again in October 2021. Progression was slow, and Selitrectinib continued while the medical team performed a thorough search for additional treatment options. In January 2022, a liver lesion biopsy was performed, and NGS showed an NTRK G623R solvent-front resistance mutation. Various treatment pathways were considered. The patient pursued another investigational NTRKi through a clinical trial, and Selitrectinib was discontinued in July 2022. Excellent performance status was maintained throughout the entire course of treatment. It can be concluded that NTRK inhibitors provided satisfactory treatment efficacy and tolerance for this patient with high-grade transformation and NTRK gene fusion cancer. In the future, more clinical research is needed on systemic treatment options for high-grade transformations in NTRK gene fusion SCs.
Keywords: secretory carcinoma, high-grade transformations, NTRK gene fusion, NTRK inhibitor
Procedia PDF Downloads 108
160 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding
Authors: Ines Oliveira, Ana Reis
Abstract:
Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed and low-angle impact between two metallic surfaces. It has the same working principle of Explosive Welding (EXW), i.e. is based on the collision of two parts at high impact speed, in this case, propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to the conventional welding processes and advantageous when compared to other impact processes. Nevertheless, MPW current applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. As a result, the mechanical resistance, morphology and structure of the weld interface in MPW of Al/Cu dissimilar pair were investigated. The effect of process parameters, namely gap, standoff distance and energy, were studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided in the weld of Al/Cu dissimilar pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as continuous layer or small pockets. The thickness and the composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes can occur during the process. It is also found that lower pulse energies are preferred. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating will be generated. In addition, more energy means higher flyer velocity, the air existing in the gap between the parts to be welded is expelled, and this aerodynamic drag (fluid friction) is proportional to the square of the velocity, further contributing to the generation of heat. As the kinetic energy also increases with the square of velocity, the dissipation of this energy through plastic work and jet generation will also contribute to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. The bond formation is affected not only by the gap, standoff distance, and energy but also by the mandrel’s surface conditions. No correlation was clearly identified between surface roughness/scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation
Procedia PDF Downloads 211
159 Restless Leg Syndrome as the Presenting Symptom of Neuroendocrine Tumor
Authors: Mustafa Cam, Nedim Ongun, Ufuk Kutluana
Abstract:
Introduction: Restless Legs Syndrome (RLS) is a common, under-recognized disorder that disrupts sleep and diminishes quality of life (1). The conditions most commonly associated with RLS include renal failure, iron and folic acid deficiency, peripheral neuropathy, pregnancy, celiac disease, Crohn's disease and, rarely, malignancy (2). Despite a clear relation between low peripheral iron and increased prevalence and severity of RLS, the prevalence and clinical significance of RLS in iron-deficient anemic populations is unknown (2). We report here a case of RLS due to iron deficiency in the setting of a neuroendocrine tumor. Report of Case: A 35-year-old man was referred to our clinic with general weakness, weight loss (10 kg in 2 months) and a 2-month history of uncomfortable sensations in his legs with an urge to move, partially relieved by movement. The symptoms were present every day, worsening in the evening; the discomfort forced the patient to get up and walk around at night. RLS was severe, with a score of 22 on the International RLS rating scale. The patient had no past medical history. The patient underwent a complete set of blood analyses and the following abnormal values were found (normal limits within brackets): hemoglobin 9.9 g/dl (14-18), MCV 70 fL (80-94), ferritin 3.5 ng/mL (13-150). Brain and spine magnetic resonance imaging was normal. A gastroenterology consultation was obtained, and gastrointestinal system endoscopy was performed to investigate the etiology of the iron deficiency anemia. After the gastric biopsy, the results allowed us to reach the diagnosis of a neuroendocrine tumor, and the patient was referred to the oncology clinic. Discussion: The first important consideration from this case report is that the patient was referred to our clinic because of his severe RLS symptoms dramatically reducing his quality of life. However, our clinical study clearly demonstrated that RLS was not the primary disease. Considering the information available for this patient, we believe that the most likely possibility is that RLS was secondary to iron deficiency, a very well-known and established cause of RLS in the literature (3,4). Neuroendocrine tumors (NETs) are rare epithelial neoplasms with neuroendocrine differentiation that most commonly originate in the lungs and gastrointestinal tract (5). NETs vary widely in their clinical presentation; symptoms are often nonspecific and can be mistaken for those of other more common conditions (6). 50% of patients with reported disease stage have either regional or distant metastases at diagnosis (7). Accurate and earlier NET diagnosis is the first step in shortening the time to optimal care and improved outcomes for patients (8). The most important message from this case report is that RLS symptoms can sometimes be the sign of a life-threatening condition. Conclusion: Careful and complete collection of clinical and laboratory data should be carried out in RLS patients. In particular, if RLS onset coincides with weight loss and iron deficiency anemia, gastric endoscopy should be performed. Malignancy is known to be a rare etiology in RLS patients and, to our knowledge, this is the first case of a neuroendocrine tumor presenting with RLS.
Keywords: neurology, neuroendocrine tumor, restless legs syndrome, sleep
Procedia PDF Downloads 285
158 The Antioxidant Activity of Grape Chkhaveri and Its Wine Cultivated in West Georgia (Adjaria)
Authors: Maia Kharadze, Indira Djaparidze, Maia Vanidze, Aleko Kalandia
Abstract:
Modern scientific world studies chemical components and antioxidant activity of different kinds of vines according to their breed purity and location. To our knowledge, this kind of research has not been conducted in Georgia yet. The object of our research was to study Chkhaveri vine, which is included in the oldest varieties of the Black Sea basin vine. We have studied different-altitude Chkaveri grapes, juice, and wine (half dry rose-colored produced with European technologies) and their technical markers, qualitative and quantitive composition of their biologically active compounds and their antioxidant activity. We were determining the amount of phenols using Folin-Ciocalteu reagent, Flavonoids, Catechins and Anthocyanins using Spectral method and antioxidant activity using DPPH method. Several compounds were identified using –HPLC-UV-Vis, UPLC-MS methods. Six samples of Chkhaveri species– 5, 300, 360, 380, 400, 780 meter altitudes were taken and analyzed. The sample taken from 360 m altitude is distinguished by its cluster mass (383.6 grams) and high amount of sugar (20.1%). The sample taken from the five-meter altitude is distinguished by having high acidity (0.95%). Unlike other grapes varieties, such concentration of sugar and relatively low levels of citric acid ultimately leads to Chkhaveri wine individuality. Biologically active compounds of Chkhaveri were researched in 2014, 2015, 2016. The amount of total phenols in samples of 2016 fruit varies from 976.7 to 1767.0 mg/kg. Amount of Anthocians is 721.2-1630.2 mg/kg, and the amount of Flavanoids varies from 300.6 to 825.5 mg/kg. Relatively high amount of anthocyanins was found in the Chkhaveri at 780-meter altitude - 1630.2 mg/kg. Accordingly, the amount of Phenols and Flavanoids is high- 1767.9 mg/kg and 825.5 mg/kg. These characteristics are low in samples gathered from 5 meters above sea level, Anthocyanins-721.2 mg/ kg, total Phenols-976.7 mg/ kg, and Flavanoids-300.6 mg/kg. The highest amount of bioactive compounds can be found in the Chkhaveri samples of high altitudes because with rising height environment becomes harsh, the plant has to develop a better immune system using Phenolic compounds. The technology that is used for the production of wine also plays a huge role in the composition of the final product. Optimal techniques of maceration and ageing were worked out. While squeezing Chkhaveri, there are no anthocyanins in the juice. However, the amount of Anthocyanins rises during maceration. After the fermentation of dregs, the amount of anthocyanins is 55%, 521.3 gm/l, total Phenols 80% 1057.7 mg/l and Flavanoids 23.5 mg/l. Antioxidant activity of samples was also determined using the effect of 50% inhibition of the samples. All samples have high antioxidant activity. For instance, in samples at 780 meters above the sea-level antioxidant activity was 53.5%. It is relatively high compared to the sample at 5 m above sea-level with the antioxidant activity of 30.5%. Thus, there is a correlation between the amount Anthocyanins and antioxidant activity. The designated project has been fulfilled by financial support of the Georgia National Science Foundation (Grant AP/96/13, Grant 216816), Any idea in this publication is possessed by the author and may not represent the opinion of the Georgia National Science Foundation.Keywords: antioxidants, bioactive content, wine, chkhaveri
Procedia PDF Downloads 229
157 Cement Matrix Obtained with Recycled Aggregates and Micro/Nanosilica Admixtures
Authors: C. Mazilu, D. P. Georgescu, A. Apostu, R. Deju
Abstract:
Cement mortars and concretes are some of the most used construction materials in the world, global cement production being expected to grow to approx. 5 billion tons, until 2030. But, cement is an energy intensive material, the cement industry being responsible for cca. 7% of the world's CO2 emissions. Also, natural aggregates represent non-renewable resources, exhaustible, which must be used efficiently. A way to reduce the negative impact on the environment is the use of additional hydraulically active materials, as a partial substitute for cement in mortars and concretes and/or the use of recycled concrete aggregates (RCA) for the recovery of construction waste, according to EU Directive 2018/851. One of the most effective active hydraulic admixtures is microsilica and more recently, with the technological development on a nanometric scale, nanosilica. Studies carried out in recent years have shown that the introduction of SiO2 nanoparticles into cement matrix improves the properties, even compared to microsilica. This is due to the very small size of the nanosilica particles (<100nm) and the very large specific surface, which helps to accelerate cement hydration and acts as a nucleating agent to generate even more calcium hydrosilicate which densifies and compacts the structure. The cementitious compositions containing recycled concrete aggregates (RCA) present, in generally, inferior properties compared to those obtained with natural aggregates. Depending on the degree of replacement of natural aggregate, decreases the workability of mortars and concretes with RAC, decrease mechanical resistances and increase drying shrinkage; all being determined, in particular, by the presence to the old mortar attached to the original aggregate from the RAC, which makes its porosity high and the mixture of components to require more water for preparation. The present study aims to use micro and nanosilica for increase the performance of some mortars and concretes obtained with RCA. The research focused on two types of cementitious systems: a special mortar composition used for encapsulating Low Level radioactive Waste (LLW); a composition of structural concrete, class C30/37, with the combination of exposure classes XC4+XF1 and settlement class S4. The mortar was made with 100% recycled aggregate, 0-5 mm sort and in the case of concrete, 30% recycled aggregate was used for 4-8 and 8-16 sorts, according to EN 206, Annex E. The recycled aggregate was obtained from a specially made concrete for this study, which after 28 days was crushed with the help of a Retsch jaw crusher and further separated by sieving on granulometric sorters. The partial replacement of cement was done progressively, in the case of the mortar composition, with microsilica (3, 6, 9, 12, 15% wt.), nanosilica (0.75, 1.5, 2.25% wt.), respectively mixtures of micro and nanosilica. The optimal combination of silica, from the point of view of mechanical resistance, was later used also in the case of the concrete composition. For the chosen cementitious compositions, the influence of micro and/or nanosilica on the properties in the fresh state (workability, rheological characteristics) and hardened state (mechanical resistance, water absorption, freeze-thaw resistance, etc.) is highlighted.Keywords: cement, recycled concrete aggregates, micro/nanosilica, durability
Procedia PDF Downloads 68
156 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon due to the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score, HSIC (Hilbert-Schmidt Independence Criterion), etc., to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we proposed a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability was formed by a generalized RBF kernel with different bandwidths with respect to different features. Moreover, it considered the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths such that the smallest within-class separability and the largest between-class separability are obtained simultaneously. This indicates that the corresponding feature space is more suitable for classification. In addition, the corresponding nonlinear classification boundary can separate classes very well. These optimal bandwidths also show the importance of bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as weights of bands. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting the appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset. All non-background samples were used to form the testing dataset. The support vector machine was applied to classify these testing samples based on the selected feature subsets. According to the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracies increase dramatically when using only the first few features. The classification accuracies with respect to feature subsets of 10 features, 20 features, 50 features, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, we obtained similar results. These results illustrate that our proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine the suitable feature subset first, according to specific purposes. Then researchers can use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
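As a rough illustration of the separability measure and band weighting described above, the sketch below implements a per-feature-bandwidth ("generalized RBF") kernel and ranks features by the reciprocals of their bandwidths; the bandwidths are fixed here rather than tuned with a genetic algorithm, and the data and values are synthetic placeholders.

```python
# Sketch of the generalized RBF kernel with per-feature bandwidths and the
# within/between-class separability it induces. In the paper the bandwidths
# are tuned with a genetic algorithm; here they are fixed for brevity, and
# features are ranked by the reciprocals of the bandwidths (larger reciprocal
# = more important band), as described above.
import numpy as np

def generalized_rbf(X1, X2, bandwidths):
    # k(x, z) = exp(-sum_d (x_d - z_d)^2 / (2 * sigma_d^2))
    diff = X1[:, None, :] - X2[None, :, :]
    return np.exp(-np.sum(diff**2 / (2.0 * bandwidths**2), axis=2))

def separability(X, y, bandwidths):
    """Mean within-class minus mean between-class kernel similarity."""
    K = generalized_rbf(X, X, bandwidths)
    same = (y[:, None] == y[None, :])
    return K[same].mean() - K[~same].mean()   # larger = better separated

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.repeat([0, 1], 30)
X[y == 1, 0] += 2.0            # only feature 0 carries class information

sigmas = np.array([0.5, 5.0, 5.0, 5.0, 5.0])   # stand-in for GA-tuned bandwidths
print("separability:", round(separability(X, y, sigmas), 3))
print("feature ranking (most to least important):",
      np.argsort(1.0 / sigmas)[::-1])
```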
Procedia PDF Downloads 263
155 Autologous Blood for Conjunctival Autograft Fixation in Primary Pterygium Surgery: a Systematic Review and Meta-Analysis
Authors: Mohamed Abdelmongy
Abstract:
Autologous Blood for Conjunctival Autograft Fixation in Primary Pterygium Surgery: A Systematic Review and Meta-analysis. Hossam Zein, Ammar Ismail, Mohamed Abdelmongy, Sherif Elsherif, Ahmad Hassanen, Basma Muhammad, Fathy Assaf, Ahmed Elsehili, Ahmed Negida, Shin Yamane, Mohamed M. Abdel-Daim and Kazuaki Kadonosono. https://www.ncbi.nlm.nih.gov/pubmed/30277146 BACKGROUND: Pterygium is a benign ocular lesion characterized by triangular fibrovascular growth of conjunctival tissue over the cornea. Patients complain of the bad cosmetic appearance, ocular surface irritation, and decreased visual acuity if the pterygium is large enough to cause astigmatism or encroach on the pupil. The definitive treatment of pterygium is surgical removal. However, outcomes are compromised by recurrence. The aim of the current study is to systematically review the current literature to explore the efficacy and safety of fibrin glue, sutures, and autologous blood coagulum for conjunctival autograft fixation in primary pterygium surgery. OBJECTIVES: To assess the effectiveness of fibrin glue compared to sutures and autologous blood coagulum in conjunctival autografting for the surgical treatment of pterygium. METHODS: In preparing this manuscript, we followed the steps described in the Cochrane Handbook for Systematic Reviews of Interventions version 5.3 and reported it according to the preferred reporting of systematic reviews and meta-analyses (PRISMA) statement guidelines. We searched PubMed, Ovid (both through Medline), ISI Web of Science, and the Cochrane Central Register of Controlled Trials (CENTRAL) through January 2017, using the following keywords: "Pterygium AND (blood OR glue OR suture)". SELECTION CRITERIA: We included all randomized controlled trials (RCTs) that met the following criteria: 1) comparing autologous blood vs fibrin glue for conjunctival autograft fixation in primary pterygium surgery; 2) comparing autologous blood vs sutures for conjunctival autograft fixation in primary pterygium surgery. DATA COLLECTION AND ANALYSIS: Two review authors independently screened the search results, assessed trial quality, and extracted data using standard methodological procedures expected by Cochrane. The extracted data included A) study design, sample size, and main findings; B) baseline characteristics of patients included in this review, including their age, sex, pterygium site and grade, and graft size; C) study outcomes comprising 1) primary outcome: recurrence rate; 2) secondary outcomes: graft stability outcomes (graft retraction, graft displacement), operation time (min), and postoperative symptoms (pain, discomfort, foreign body sensation, tearing). MAIN RESULTS: We included 7 RCTs, and the review included 662 eyes (Blood: 293; Glue: 198; Suture: 171). We assessed the 1) primary outcome: recurrence rate; 2) secondary outcomes: graft stability outcomes (graft retraction, graft displacement), operation time (min), and postoperative symptoms (pain, discomfort, foreign body sensation, tearing). CONCLUSIONS: Autologous blood for conjunctival autograft fixation in pterygium surgery is associated with lower graft stability than fibrin glue or sutures. It was not inferior to fibrin glue or sutures regarding recurrence rate. The overall quality of evidence is low. Further well-designed RCTs are needed to fully explore the efficacy of this new technique.
Keywords: pterygium, autograft, ophthalmology, cornea
Procedia PDF Downloads 161
154 Direct Contact Ultrasound Assisted Drying of Mango Slices
Authors: E. K. Mendez, N. A. Salazar, C. E. Orrego
Abstract:
There is undoubted proof that increasing the intake of fruit lessens the risk of hypertension, coronary heart disease, and stroke, and probable evidence that it lowers the risk of cancer. Proper fruit drying is an excellent way to extend shelf-life, ease commercialization, and obtain ready-to-eat healthy products or ingredients. The conventional way of drying is by forced hot-air convection. However, this process step often requires a very long residence time; furthermore, it is highly energy-consuming and detrimental to product quality. Nowadays, the power ultrasound (US) technique is considered an emerging and promising technology for industrial food processing. Most published works dealing with US-assisted food drying have studied the effect of ultrasonic pre-treatment prior to air-drying and the airborne US conditions during dehydration. In this work, a new approach was tested, taking into account the drying time and two quality parameters of mango slices dehydrated by convection assisted by 20 kHz power US applied directly, using a holed plate as product support and sound-transmitting surface. During the drying of mango (Mangifera indica L.) slices (ca. 6.5 g, 0.006 m height and 0.040 m diameter), their weight was recorded every hour until the final moisture content (10.0±1.0 % wet basis) was reached. After preliminary tests, optimization of three drying parameters - US application time (2, 5 and 8 minutes each half-hour), air temperature (50, 55 and 60 °C) and US power (45, 70 and 95 W) - was attempted by using a Box-Behnken design under the response surface methodology for the optimal drying time, color parameters and rehydration rate of the dried samples. The assays involved 17 experiments, including a quintuplicate of the central point. Dried samples with and without US application were packed in individual high-barrier plastic bags under vacuum and then stored in the dark at 8 °C until their analysis. All drying assays and sample analyses were performed in triplicate. The US drying experimental data were fitted with nine models, among which the Verma model gave the best fit, with R² > 0.9999 and reduced χ² ≤ 0.000001. Significant reductions in drying time were observed for the assays that used a shorter US application time and high US power. At 55 °C, 95 W and 2 min/30 min of sonication, 10% moisture content was reached in 211 min, compared with 320 min for the same test without the use of US (blank). The rehydration rate (RR), defined as the ratio of rehydrated sample weight to dry sample weight, was also larger than that of the blanks and, in general, the higher the US power, the greater the RR. The direct-contact, intermittent US treatment of mango slices used in this work improves drying rates and dried fruit rehydration ability. This technique can thus be used to reduce energy processing costs and the greenhouse gas emissions of fruit dehydration. Keywords: ultrasonic assisted drying, fruit drying, mango slices, contact ultrasonic drying
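A minimal sketch of the model-fitting step mentioned above, assuming the two-term Verma-type thin-layer drying form MR(t) = a·exp(-k·t) + (1 - a)·exp(-g·t); the time grid and moisture-ratio values below are invented for illustration and are not the study's data. Swapping in the other candidate models amounts to replacing the model function and comparing R² and reduced χ².
```python
# Hypothetical sketch: fitting a two-term (Verma-type) thin-layer drying model
# to moisture-ratio data with SciPy. Data values are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def verma_model(t, a, k, g):
    """Moisture ratio as a function of drying time t (min)."""
    return a * np.exp(-k * t) + (1.0 - a) * np.exp(-g * t)

t_data = np.array([0, 30, 60, 90, 120, 150, 180, 210])            # min, hypothetical
mr_data = np.array([1.00, 0.78, 0.60, 0.46, 0.35, 0.26, 0.19, 0.14])

params, _ = curve_fit(verma_model, t_data, mr_data, p0=[0.5, 0.01, 0.005], maxfev=10000)
mr_fit = verma_model(t_data, *params)

ss_res = np.sum((mr_data - mr_fit) ** 2)
ss_tot = np.sum((mr_data - mr_data.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
reduced_chi2 = ss_res / (len(t_data) - len(params))

print(f"a={params[0]:.3f}, k={params[1]:.4f}, g={params[2]:.4f}")
print(f"R^2={r_squared:.5f}, reduced chi^2={reduced_chi2:.2e}")
```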
Procedia PDF Downloads 345
153 Storage of Organic Carbon in Chemical Fractions in Acid Soil as Influenced by Different Liming
Authors: Ieva Jokubauskaite, Alvyra Slepetiene, Danute Karcauskiene, Inga Liaudanskiene, Kristina Amaleviciute
Abstract:
Soil organic carbon (SOC) is a key indicator of soil quality and ecological stability; therefore, carbon accumulation in stable forms not only supports and increases the organic matter content in the soil but also has a positive effect on the quality of the soil and the whole ecosystem. Soil liming is one of the most common ways to improve carbon sequestration in the soil. Determining the optimum intensity and combination of liming in order to ensure optimal carbon quantitative and qualitative parameters is one of the most important tasks of this work. The field experiments were carried out at the Vezaiciai Branch of the Lithuanian Research Centre for Agriculture and Forestry (LRCAF) during the 2011-2013 period. The effect of liming at different intensities (at a 0.5 rate every 7 years and a 2.0 rate every 3-4 years) was investigated in the topsoil of an acid moraine loam Bathygleyic Dystric Glossic Retisol. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LRCAF. Soil samples for chemical analyses were taken from the topsoil after harvesting. SOC was determined by the Tyurin method modified by Nikitin, measuring with a Cary 50 (VARIAN) spectrometer at 590 nm wavelength using glucose standards. SOC fractional composition was determined by the Ponomareva and Plotnikova version of the classical Tyurin method. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph in water extract at a soil-water ratio of 1:5. Spectral properties (E4/E6 ratio) of humic acids were determined by measuring the absorbance of humic and fulvic acid solutions at 465 and 665 nm. Our study showed a statistically significant negative effect of periodical liming (at the 0.5 and 2.0 liming rates) on the SOC content in the soil. The SOC content was 1.45% in the unlimed treatment, while in the treatment periodically limed at the 2.0 rate every 3-4 years it was approximately 0.18 percentage points lower. It was revealed that liming significantly decreased the DOC concentration in the soil. The lowest DOC concentration (0.156 g kg⁻¹) was established in the most intensively limed treatment (2.0 liming rate every 3-4 years). Soil liming increased the content of all humic acid fractions and of the fulvic acid fraction bound with calcium in the topsoil. Soil liming resulted in the accumulation of valuable humic acids. Due to the applied liming, the HR/FR ratio, which indicates the quality of humus, increased to 1.08, compared with 0.81 in the unlimed soil. Intensive soil liming promoted the formation of humic acids in which groups of carboxylic and phenolic compounds predominate. These humic acids are characterized by a higher degree of condensation of aromatic compounds and in this way determine the intensive organic matter humification processes in the soil. The results of this research provide clear information on the characteristics of SOC change, which could be very useful to guide climate policy and sustainable soil management. Keywords: acid soil, carbon sequestration, long-term liming, soil organic carbon
Procedia PDF Downloads 229
152 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks
Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci
Abstract:
The need to reduce fuel consumption and noise during taxi operations at airports, in a scenario of constantly increasing air traffic, has resulted in an effort by the aerospace industry to move towards electric taxiing. In fact, this is one of the problems currently being addressed by SESAR JU, and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice versa). The presence of the tow trucks results in an increase in vehicle traffic inside the airport. Therefore, it is important to design the system in such a way that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. There are two main optimization strategies proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, the decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections. This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness. Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization
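As a rough illustration of the route-optimization component only, the sketch below models a taxiway layout as a weighted graph and computes a shortest gate-to-runway path with networkx; the node names and edge lengths are hypothetical, and the actual tool also handles tow-truck allocation and conflict detection, which are not shown here.
```python
# Hypothetical sketch of the routing step: the taxiway layout as a weighted graph,
# with the shortest gate-to-runway path returned by Dijkstra-style search.
import networkx as nx

taxiways = nx.Graph()
edges = [
    ("gate_A1", "T1", 250), ("T1", "T2", 400), ("T2", "runway_31", 300),
    ("T1", "T3", 350), ("T3", "runway_31", 500),
]
for u, v, length_m in edges:
    taxiways.add_edge(u, v, weight=length_m)   # edge weight = taxiway segment length in metres

route = nx.shortest_path(taxiways, "gate_A1", "runway_31", weight="weight")
length = nx.shortest_path_length(taxiways, "gate_A1", "runway_31", weight="weight")
print(route, length)   # e.g. ['gate_A1', 'T1', 'T2', 'runway_31'] 950
```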
Procedia PDF Downloads 129
151 Kinetic Evaluation of Sterically Hindered Amines under Partial Oxy-Combustion Conditions
Authors: Sara Camino, Fernando Vega, Mercedes Cano, Benito Navarrete, José A. Camino
Abstract:
Carbon capture and storage (CCS) technologies should play a relevant role towards low-carbon systems in the European Union by 2030. Partial oxy-combustion emerges as a promising CCS approach to mitigate anthropogenic CO₂ emissions. Its advantages with respect to other CCS technologies rely on the production of a flue gas with a higher CO₂ concentration than that provided by conventional air-firing processes. The presence of more CO₂ in the flue gas increases the driving force in the separation process, and hence it might lead to further reductions in the energy requirements of the overall CO₂ capture process. A more CO₂-concentrated flue gas should enhance CO₂ capture by chemical absorption in terms of solvent kinetics and CO₂ cyclic capacity. Both have an impact on the performance of the overall CO₂ absorption process by reducing the solvent flow-rate required for a specific CO₂ removal efficiency. Lower solvent flow-rates decrease the reboiler duty during the regeneration stage and also reduce the equipment size and pumping costs. Moreover, R&D activities in this field are focused on novel solvents and blends that provide lower CO₂ absorption enthalpies and therefore lower energy penalties associated with solvent regeneration. In this respect, sterically hindered amines are considered potential solvents for CO₂ capture. They provide a low energy requirement during the regeneration process due to their molecular structure. However, their absorption kinetics are slow and must be promoted by blending with faster solvents such as monoethanolamine (MEA) and piperazine (PZ). In this work, the kinetic behavior of two sterically hindered amines was studied under partial oxy-combustion conditions and compared with MEA. A lab-scale semi-batch reactor was used. The CO₂ composition of the synthetic flue gas varied from 15%v/v (conventional coal combustion) to 60%v/v (the maximum CO₂ concentration allowable for an optimal partial oxy-combustion operation). The first solvent, 2-amino-2-methyl-1-propanol (AMP), showed a hybrid behavior with fast kinetics and a low enthalpy of CO₂ absorption. The second solvent was isophorondiamine (IF), which has a steric hindrance in one of its amino groups. Its free amino group increases its cyclic capacity. In general, the presence of a higher CO₂ concentration in the flue gas accelerated the CO₂ absorption phenomena, producing higher CO₂ absorption rates. In addition, the evolution of the CO₂ loading also exhibited higher values in the experiments using the more CO₂-concentrated flue gas. The steric hindrance causes a hybrid behavior in this solvent, between fast and slow kinetic solvents. The kinetic rates observed in all the experiments carried out using AMP were higher than those of MEA but lower than those of IF. The kinetic enhancement experienced by AMP at high CO₂ concentration is slightly over 60%, compared with 70%-80% for IF. AMP also improved its CO₂ absorption capacity by 24.7% from 15%v/v to 60%v/v, almost double the improvement achieved by MEA. In the IF experiments, the CO₂ loading increased by around 10% from 15%v/v to 60%v/v CO₂, changing from 1.10 to 1.34 mole CO₂ per mole solvent, an increase of more than 20%. This hybrid kinetic behavior makes AMP and IF promising solvents for partial oxy-combustion applications. Keywords: absorption, carbon capture, partial oxy-combustion, solvent
Procedia PDF Downloads 190
150 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Then, chosen colonies are grown on media containing antibiotic(s), using micro-diffusion discs (the second culturing time is also 24 h), in order to determine bacterial susceptibility. Other approaches, such as genotyping methods, the E-test, and automated methods, were also developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. New modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy becomes a powerful technique, which enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics in a time span of a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for the identification of antibiotic susceptibility. Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility, UTI
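A hedged sketch of the kind of multivariate pipeline the abstract alludes to (not the authors' exact method): spectra are compressed with PCA and classified with an SVM under cross-validation. The spectra and susceptibility labels below are random placeholders, so the reported accuracy is meaningless; with real FTIR absorbance data the same pipeline would give the sensitive/resistant classification rate.
```python
# Minimal sketch: PCA + SVM classification of FTIR spectra into
# antibiotic-sensitive vs. resistant classes, evaluated by 5-fold CV.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 900          # e.g. 900 absorbance points per spectrum
X = rng.normal(size=(n_samples, n_wavenumbers))   # placeholder spectra
y = rng.integers(0, 2, size=n_samples)            # 0 = sensitive, 1 = resistant (from disc diffusion)

model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```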
Procedia PDF Downloads 171
149 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment
Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen
Abstract:
Following the advent of the Industrial Revolution, there has been a significant increase in the extraction and utilization of hydrocarbon and fossil fuel resources. However, a new era has emerged, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells that have been developed within petroleum reservoirs, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there exist some scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons via fissures and fractures within the cement plug. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly in the case of well sites that have the potential for future gas storage or CO₂ injection. The Danish oil and gas industry has promising potential as a prospective candidate for future carbon dioxide (CO₂) injection, hence contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component is chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant. The hydrogel is designed to be injected through a low-permeability reservoir and afterwards undergo a transformation into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, the synthesis of polyacrylamide was carried out using radical polymerization within the reaction flask. Subsequently, through the Hofmann rearrangement, the polymer chain undergoes partial amination, facilitating its subsequent reaction with the crosslinker and enabling the formation of a hydrogel in the subsequent stage. The organic crosslinker glutaraldehyde was employed to facilitate the formation of a gel. This gel formation occurred when the polymeric solution was subjected to heat within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time is contingent upon the starting concentration and ranges from 4 to 20 hours, hence allowing it to be adjusted to accommodate diverse injection strategies. Moreover, the findings indicate that the gel can be formed in environments characterized by acidity and high salinity. This property ensures the suitability of this material for application in challenging reservoir conditions. The rheological investigation indicates that the polymeric solution exhibits the characteristics of a Herschel-Bulkley fluid with a somewhat elevated yield stress prior to solidification. Keywords: polyacrylamide, Hofmann rearrangement, rheology, gel time
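To illustrate the Herschel-Bulkley characterization mentioned above, the sketch below fits tau = tau0 + K·(shear rate)^n to a flow curve with SciPy; the shear-rate and stress values are invented for illustration, not measurements from this work.
```python
# Illustrative sketch: fitting the Herschel-Bulkley model to a flow curve,
# recovering the yield stress tau0, consistency K and flow index n.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress (Pa) vs. shear rate (1/s) with yield stress tau0."""
    return tau0 + K * gamma_dot ** n

gamma_dot = np.array([0.1, 0.5, 1, 5, 10, 50, 100])        # 1/s, hypothetical
tau = np.array([3.1, 3.6, 4.0, 5.8, 7.0, 12.5, 17.0])      # Pa, hypothetical

(tau0, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[2.0, 1.0, 0.5])
print(f"yield stress tau0 = {tau0:.2f} Pa, consistency K = {K:.2f}, flow index n = {n:.2f}")
```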
Procedia PDF Downloads 77
148 Description of Decision Inconsistency in Intertemporal Choices and Representation of Impatience as a Reflection of Irrationality: Consequences in the Field of Personalized Behavioral Finance
Authors: Roberta Martino, Viviana Ventre
Abstract:
Empirical evidence has, over time, confirmed that the behavior of individuals is inconsistent with the descriptions provided by the Discounted Utility Model, the essential reference for calculating the utility of intertemporal prospects. The model assumes that individuals calculate the utility of intertemporal prospects by adding up the values of all outcomes, each obtained by multiplying the cardinal utility of the outcome by the discount function evaluated at the time the outcome is received. The shape of the discount function is crucial for the preferences of the decision maker because it represents the perception of the future, and its trend leads to temporally consistent or temporally inconsistent preferences. In particular, because different formulations of the discount function lead to different conclusions in predicting choice, the descriptive ability of models with a hyperbolic trend is greater than that of linear or exponential models. Choices that are suboptimal from any point in time are the consequence of this mechanism, whose psychological factors are encapsulated in the trend of the discount rate. In addition, analyzing the decision-making process from a psychological perspective, there is an equivalence between the selection of dominated prospects and a degree of impatience that decreases over time. The first part of the paper describes and investigates the anomalies of the discounted utility model by relating the cognitive distortions of the decision maker to the emotional factors that are generated during the evaluation and selection of alternatives. Specifically, by studying the degree to which impatience decreases, it is possible to quantify how the psychological and emotional mechanisms of the decision maker result in a lack of decision persistence. In addition, this description presents inconsistency as the consequence of an inconsistent attitude towards time-delayed choices. The second part of the paper presents an experimental phase in which we show the relationship between inconsistency and impatience in different contexts. Analysis of the degree to which impatience decreases confirms the influence of the decision maker's emotional impulses for each anomaly of the discounted utility model discussed in the first part of the paper. This work provides an application in the field of personalized behavioral finance. Indeed, the numerous behavioral differences, evident even in the degrees of decrease in impatience observed in the experimental phase, support the idea that optimal strategies may not satisfy individuals in the same way. With the aim of homogenizing the categories of investors and providing a personalized approach to advice, the results proven in the experimental phase are used in a complementary way with information from the field of behavioral finance to implement the Analytic Hierarchy Process model in intertemporal choices, useful for strategic personalization. In the construction of the Analytic Hierarchy Process, the degree of decrease in impatience is understood as reflecting irrationality in decision-making and is therefore used for the construction of weights between anomalies and behavioral traits. Keywords: analytic hierarchy process, behavioral finance, financial anomalies, impatience, time inconsistency
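A small sketch of the mechanism described above: with an exponential discount function the ranking of a smaller-sooner versus a larger-later reward does not change as time passes, whereas a hyperbolic discount function reverses it as the sooner payoff approaches. The amounts, delays, and parameter values are illustrative only.
```python
# Time-inconsistent preference reversal under hyperbolic discounting (illustrative numbers).
def choice(discount, delay_to_sooner):
    v_soon = 100 * discount(delay_to_sooner)          # smaller-sooner reward
    v_late = 120 * discount(delay_to_sooner + 2)      # larger-later reward, 2 periods later
    return "larger-later" if v_late > v_soon else "smaller-sooner"

exponential = lambda t: 0.95 ** t                     # constant discount rate, time-consistent
hyperbolic = lambda t: 1.0 / (1.0 + 0.25 * t)         # decreasing impatience

for name, d in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    print(name, "| far in advance:", choice(d, 8), "| near the sooner payoff:", choice(d, 1))
# The exponential ranking is the same at both horizons; the hyperbolic ranking
# switches from larger-later to smaller-sooner, i.e. the preference reverses.
```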
Procedia PDF Downloads 68
147 The Role of a Specialized Diet for Management of Fibromyalgia Symptoms: A Systematic Review
Authors: Siddhant Yadav, Rylea Ranum, Hannah Alberts, Abdul Kalaiger, Brent Bauer, Ryan Hurt, Ann Vincent, Loren Toussaint, Sanjeev Nanda
Abstract:
Background and significance: Fibromyalgia (FM) is a chronic pain disorder also characterized by chronic fatigue, morning stiffness, sleep and cognitive symptoms, and psychological disturbances (anxiety, depression), and it is comorbid with multiple medical and psychiatric conditions. It has an incidence of 2-4% in the general population and is reported more commonly in women. Oxidative stress and inflammation are thought to contribute to pain in patients with FM, and the adoption of an antioxidant/anti-inflammatory diet has been suggested as a modality to alleviate symptoms. The aim of this systematic review was to evaluate the efficacy of specialized diets (ketogenic, gluten-free, Mediterranean, and low-carbohydrate) in improving FM symptoms. Methodology: A comprehensive search of the following databases from inception to July 15th, 2021, was conducted: Ovid MEDLINE and Epub Ahead of Print, In-Process and Other Non-Indexed Citations, and Daily; Ovid Embase; Ovid EBM Reviews; Cochrane Central Register of Controlled Trials; EBSCOhost CINAHL with Full Text; Elsevier Scopus; Web of Science Citation Index and Emerging Sources Citation Index; and ClinicalTrials.gov. We included randomized controlled trials, non-randomized experimental studies, cross-sectional studies, cohort studies, case series, and case reports in adults with fibromyalgia. The risk of bias was assessed with the specific criteria recommended by the Agency for Healthcare Research and Quality (AHRQ). Results: Thirteen studies were eligible for inclusion, with a total of 761 participants. Twelve of the 13 studies reported improvement in widespread body pain, joint stiffness, sleeping pattern, mood, and gastrointestinal symptoms, and one study reported no change in symptomatology in patients with FM on specialized diets. None of the studies showed worsening of symptoms associated with a specific diet. Most of the patient population was female, with a mean age at fibromyalgia diagnosis of 48.12 years. Improvement in symptoms was reported by patients adhering to a gluten-free diet, a raw vegan diet, a tryptophan- and magnesium-enriched Mediterranean diet, an aspartame- and MSG-elimination diet, and, specifically, a Khorasan wheat diet. The risk of bias assessment noted that 6 studies had a low risk of bias (5 clinical trials and 1 case series), 4 studies had a moderate risk of bias, and 3 had a high risk of bias. In many of the studies, the allocation of treatment (diets) was not adequately concealed, and the researchers did not rule out any potential impact from a concurrent intervention or an unintended exposure that might have biased the results. On the other hand, there was a low risk of attrition bias in all the trials; all were conducted with an intention-to-treat approach, and the inclusion/exclusion criteria, exposures/interventions, and primary outcomes were valid, reliable, and implemented consistently across all study participants. Concluding statement: Patients with fibromyalgia who followed specialized diets experienced a variable degree of improvement in their widespread body pain. Improvement was also seen in stiffness, fatigue, mood, sleeping patterns, and gastrointestinal symptoms. Additionally, the majority of the patients also reported improvement in overall quality of life. Keywords: fibromyalgia, specialized diet, vegan, gluten free, Mediterranean, systematic review
Procedia PDF Downloads 73
146 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of the training set of signal waveforms by using the Principal Component Analysis (PCA) decomposition. Besides the advantage of including the additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes' theorem the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for the calculation of the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built out of a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from four voltage levels to the recovery of the signal waveform, the spatial resolution is improved to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal; the spatial resolution calculated under these conditions is equal to 0.93 cm. This is very important since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized. Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization
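A hedged sketch of the recovery idea, not the J-PET implementation: a PCA basis is learned from training waveforms, and a full waveform is recovered from only a few samples by solving a Tikhonov-regularized least-squares problem in that basis. The waveform shapes, dimensions, and the simple point-sampling measurement operator below are stand-ins for the real threshold-crossing data.
```python
# Tikhonov-regularized recovery in a PCA basis from a handful of samples.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # points per full waveform
t = np.linspace(0, 1, n)

# Training set: pulses with varying amplitude/width (stand-in for recorded signals)
train = np.array([a * np.exp(-((t - 0.3) ** 2) / (2 * w ** 2))
                  for a, w in zip(rng.uniform(0.5, 1.5, 300), rng.uniform(0.03, 0.08, 300))])
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)   # PCA via SVD
U = Vt[:8].T                              # keep 8 principal components

# Measurement operator A: pick 8 sample positions (mimicking the few recorded samples)
idx = np.linspace(10, 150, 8).astype(int)
A = np.zeros((len(idx), n)); A[np.arange(len(idx)), idx] = 1.0

x_true = 1.1 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
y = A @ x_true                            # the measured samples

lam = 1e-3                                # Tikhonov regularization weight
M = A @ U
c = np.linalg.solve(M.T @ M + lam * np.eye(U.shape[1]), M.T @ (y - A @ mean))
x_rec = mean + U @ c                      # explicit closed-form regularized solution
print("relative recovery error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```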
Procedia PDF Downloads 445
145 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security
Authors: D. Pugazhenthi, B. Sree Vidya
Abstract:
Cloud computing is one of the emerging technologies that enables end users to use cloud services on a 'pay per usage' strategy. This technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage. In this service, security is a vital factor both for authenticating legitimate users and for protecting information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor and multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness give user-behaviour-based biometrics greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here do not include a single trait but multiple traits, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an ensemble Support Vector Machine (SVM); the weights of the base SVMs are assembled for the SVM ensemble by the Artificial Fish Swarm Algorithm (AFSA) after each individual SVM of the ensemble is trained. This helps to generate a user-specific secure cryptographic key from the multimodal biometric template through a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the cipher text from being broken back into the original text. The proposed double cryptographic key scheme thus provides better user authentication and better security, distinguishing between genuine and fake users. There are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The extraction of the feature and texture properties from the respective fingerprint and iris images is done initially. Finally, with the help of a fuzzy neural network and a symmetric cryptography algorithm, a double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been intercepted. The results prove that the authentication process is optimal and the stored information is secured. Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification
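A minimal sketch of the ensemble-SVM fusion stage, under two loud assumptions: random vectors stand in for the fused iris/fingerprint features, and a simple validation-accuracy weighting stands in for the AFSA optimization of the base-SVM weights described in the paper.
```python
# Weighted soft-vote ensemble of SVMs for genuine/impostor decisions (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))            # placeholder fused biometric feature vectors
y = rng.integers(0, 2, size=400)          # 1 = genuine user, 0 = impostor

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

base_svms = [SVC(kernel="rbf", C=c, probability=True).fit(X_tr, y_tr) for c in (0.5, 1.0, 2.0)]
weights = np.array([svm.score(X_val, y_val) for svm in base_svms])   # AFSA would tune these weights
weights /= weights.sum()

def ensemble_predict(samples):
    """Weighted soft vote over the base SVMs."""
    probs = sum(w * svm.predict_proba(samples)[:, 1] for w, svm in zip(weights, base_svms))
    return (probs >= 0.5).astype(int)

print(ensemble_predict(X_val[:5]), y_val[:5])
```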
Procedia PDF Downloads 259
144 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress placed on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, by acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected close to the loads. Yet, low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution and the competitive advantage of fossil-fuel-based technologies with regard to regulations. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The scope of the present study excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as this is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to a flatter and arbitraged profile at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge recovery algorithm that can ensure enough energy is present in the battery in case it is needed, without generating new peaks by charging the unit. Three categories of PS algorithms are introduced in detail: first, algorithms using a constant threshold or power rate for charge recovery; then, algorithms using the State Of Charge (SOC) as a decision variable; and finally, algorithms using a load forecast (the impact of whose accuracy is discussed) to generate PS. A set of performance metrics was defined in order to quantitatively evaluate their operation regarding peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that a constant charging threshold or power is far from optimal: a single fixed value is not likely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance. However, these depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available. Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
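A minimal sketch of one possible SOC-based peak-shaving rule, offered as an assumption rather than the paper's exact algorithm: discharge to clip demand above a threshold, and recover charge below the threshold at a rate that grows as the state of charge drops, so that recharging itself does not create a new peak.
```python
# SOC-based peak shaving on a 1-minute load profile (all parameter values are illustrative).
def peak_shave_step(load_w, soc_wh, threshold_w=2000.0,
                    capacity_wh=5000.0, max_power_w=3000.0, dt_h=1.0 / 60.0):
    """Return (grid_power_w, new_soc_wh) after one 1-minute step."""
    if load_w > threshold_w:                          # shave the peak
        discharge = min(load_w - threshold_w, max_power_w, soc_wh / dt_h)
        return load_w - discharge, soc_wh - discharge * dt_h
    # Below the threshold: charge, more aggressively the emptier the battery
    headroom = threshold_w - load_w
    charge = min(max_power_w, headroom) * (1.0 - soc_wh / capacity_wh)
    charge = min(charge, (capacity_wh - soc_wh) / dt_h)
    return load_w + charge, soc_wh + charge * dt_h

# Toy 1-minute profile (W): quiet, a peak, then quiet again
profile = [800] * 5 + [3500] * 3 + [900] * 5
soc = 2500.0
for load in profile:
    grid, soc = peak_shave_step(load, soc)
    print(f"load={load:5.0f} W  grid={grid:7.1f} W  SOC={soc:6.1f} Wh")
```
The design choice is that the grid import never exceeds the threshold during the peak, while the SOC-scaled recharge keeps the post-peak grid power well below the threshold instead of jumping back up to it.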
Procedia PDF Downloads 117
143 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU
Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais
Abstract:
Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness 'sins' must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exacerbated. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when black-box models are used. However, for this intended purpose, human analysts 'on-the-loop' might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA's Art. 2. Consequently, engineering the law of consumers' CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross-Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. From the analysis, one can see that a vital component of this software is the XAI layer. It appears as a transparent curtain covering the AI's decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as the share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules. Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking
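A hedged sketch of the SHAP agent's role, assuming a placeholder logistic-regression scorer and invented feature names (including share of wallet); it only illustrates how per-feature contributions to a credit decision could be surfaced for oversight, not the framework proposed in the paper.
```python
# SHAP attributions for one applicant's approval probability (placeholder model and data).
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "payment_history", "share_of_wallet"]
X = rng.normal(size=(500, len(features)))                          # synthetic applicants
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

background = X[:100]                                               # reference sample for the explainer
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
applicant = X[:1]
shap_values = explainer.shap_values(applicant, nsamples=200)

for name, value in zip(features, np.ravel(shap_values)):
    print(f"{name:>16}: {value:+.3f}")                             # contribution to approval probability
```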
Procedia PDF Downloads 34
142 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity
Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido
Abstract:
Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of a medicinal plant is Ruta graveolens L. For a long time, Ruta g. herb has been famous for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta g. includes flavonoids (e.g., rutin, quercetin), coumarins (e.g., bergapten, umbelliferone), phenolic acids (e.g., rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the presence of the produced substances is highly dependent on environmental factors like temperature, humidity, or soil acidity; therefore, standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g., coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start or finish line. Therefore, Ruta graveolens is a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of the new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in the reversed-phase system in conventional horizontal chambers can be disrupted by problems associated with an excessive flux of the mobile phase onto the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of the subsequent fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts the flow of the mobile phase. The described effect produces unreliable and unrepeatable results, causing blurring and deformation of the substance zones. In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer with controlled velocity (by a moving pipette driven by a 3D machine). The delivery of the solvent to the adsorbent layer is equal to or lower than that of conventional development. Therefore, chromatograms can be developed at the optimal linear mobile phase velocity. Furthermore, under such conditions, there is no excess of eluent solution on the surface of the adsorbent layer, so higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also enables convenient continuous gradient elution to be performed practically without the so-called gradient delay. In this study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise gradient reversed-phase thin-layer chromatography. The fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed. Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens
Procedia PDF Downloads 288