Search results for: high/middle income trap
932 Renewable Energy and Hydrogen On-Site Generation for Drip Irrigation and Agricultural Machinery
Authors: Javier Carroquino, Nieves García-Casarejos, Pilar Gargallo, F. Javier García-Ramos
Abstract:
The energy used in agriculture is a source of global greenhouse gas emissions. The two main types of this energy are electricity for pumping and diesel for agricultural machinery. In order to reduce these emissions, the European project LIFE REWIND addresses the supply of this demand from renewable sources. First, comprehensive data on energy demand and available renewable resources were obtained in several case studies. Second, a set of simulations and optimizations was performed in search of the best configuration and sizing, from both an economic and an emission-reduction point of view. For this purpose, software based on genetic algorithms was used. Third, a prototype has been designed and installed, which is being used for validation in a real case. Finally, throughout a year of operation, various technical and economic parameters are being measured for further analysis. The prototype is not connected to the utility grid, avoiding the cost and environmental impact of a grid extension. The system includes three kinds of photovoltaic fields: one located on a fixed structure on the terrain, another floating on an irrigation raft, and the last mounted on a two-axis solar tracker. Each has its own solar inverter, and the total nominal power is 44 kW. A lead-acid battery with 120 kWh of capacity provides energy storage. Three stand-alone inverters support a three-phase, 400 V, 50 Hz micro-grid with the same characteristics as the utility grid. An advanced control subsystem has been built using free hardware and software. The electricity produced feeds a set of seven pumps used for purification, elevation, and pressurization of water in a drip irrigation system located in a vineyard. Because the irrigation season does not span the whole year, and because the generator is slightly oversized, there is an amount of surplus energy. With this surplus, an electrolyser produces hydrogen on site by electrolysis of water. An off-road vehicle with a fuel cell runs on that hydrogen and carries people around the vineyard. The only emission of the process is high-purity water. On the one hand, the results show the technical and economic feasibility of stand-alone renewable energy systems to feed seasonal pumping. In this way, the economic costs, environmental impacts, and landscape impacts of grid extensions are avoided, as is the use of diesel gensets and their associated emissions. On the other hand, it is shown that it is possible to replace diesel in agricultural machinery with electricity or hydrogen of 100% renewable origin, produced on the farm itself without any external energy input. In addition, positive effects on the rural economy and employment are expected, which will be quantified through interviews.
Keywords: drip irrigation, greenhouse gases, hydrogen, renewable energy, vineyard
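A back-of-the-envelope sketch of the surplus-to-hydrogen arithmetic described above; only the 44 kW array size comes from the abstract, while the daily surplus and the electrolyser's specific consumption are illustrative assumptions, not project figures:

```python
# Hedged sketch: converting surplus PV energy into on-site hydrogen.
# Only PV_NOMINAL_KW comes from the abstract; the other values are assumptions.

PV_NOMINAL_KW = 44.0            # total nominal PV power (from the abstract)
SURPLUS_KWH_PER_DAY = 60.0      # assumed daily surplus outside the irrigation season
ELECTROLYSER_KWH_PER_KG = 55.0  # assumed specific consumption, including losses

h2_kg_per_day = SURPLUS_KWH_PER_DAY / ELECTROLYSER_KWH_PER_KG
print(f"Hydrogen produced: {h2_kg_per_day:.2f} kg/day")
# ~1.1 kg/day under these assumptions, enough for light fuel-cell
# vehicle duty of the kind described in the abstract.
```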
Procedia PDF Downloads 340
931 The Importance of Dialogue, Self-Respect, and Cultural Etiquette in Multicultural Society: An Islamic and Secular Perspective
Authors: Julia A. Ermakova
Abstract:
In today's multicultural societies, dialogue, self-respect, and cultural etiquette play a vital role in fostering mutual respect and understanding. Whether viewed from an Islamic or secular perspective, the importance of these values cannot be overstated. Firstly, dialogue is essential in multicultural societies, as it allows individuals from different cultural backgrounds to exchange ideas, opinions, and experiences. To engage in dialogue, one must be open and willing to listen, understand, and respect the views of others. This requires a level of self-awareness, where individuals must know themselves and their interlocutors to create a productive and respectful conversation. Secondly, self-respect is crucial for individuals living in multicultural societies (McLarney). One must have adequately high self-esteem and self-confidence to interact with others positively. By valuing oneself, individuals can create healthy relationships and foster mutual respect, which is essential in diverse communities. Thirdly, cultural etiquette is a way of demonstrating the beauty of one's culture by exhibiting good temperament (Al-Ghazali). Adab, a concept that encompasses good manners, praiseworthy words and deeds, and the pursuit of what is considered good, is highly valued in Islamic teachings. By adhering to Adab, individuals can guard against making mistakes and demonstrate respect for others. Islamic teachings provide etiquette for every situation in life, making up the way of life for Muslims. In the Islamic view, an elegant Muslim woman has several essential qualities, including cultural speech and erudition, speaking style, awareness of how to greet, the ability to receive compliments, lack of desire to argue, polite behavior, avoiding personal insults, and having good intentions (Al-Ghazali). The Quran highlights the inclination of people towards arguing, bickering, and disputes (Qur'an, 4:114). Therefore, it is imperative to avoid useless arguments and disputes, for they poison our lives. The Prophet Muhammad, peace and blessings be upon him, warned that the most hateful person to Allah is an irreconcilable disputant (Al-Ghazali). By refraining from such behavior, individuals can foster respect and understanding in multicultural societies. From a secular perspective, respecting the views of others is crucial to engaging in productive dialogue. The rule of argument emphasizes the importance of showing respect for the other person's views, allowing for the possibility of error on one's part, and avoiding telling someone they are wrong (Atamali). By exhibiting polite behavior and having respect for everyone, individuals can create a welcoming environment and avoid conflict. In conclusion, the importance of dialogue, self-respect, and cultural etiquette in multicultural societies cannot be overstated. By engaging in dialogue, respecting oneself and others, and adhering to cultural etiquette, individuals can foster mutual respect and understanding in diverse communities. Whether viewed from an Islamic or secular perspective, these values are essential for creating harmonious societies.
Keywords: multiculturalism, self-respect, cultural etiquette, adab, ethics, secular perspective
Procedia PDF Downloads 87
930 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation
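To illustrate two of the encoding schemes compared in this abstract, the sketch below one-hot encodes a DNA sequence and builds a k-mer count vector; the sequence and k value are arbitrary examples, not data or code from the study:

```python
import numpy as np
from itertools import product

def one_hot(seq):
    """Encode each base as a 4-element indicator vector (A, C, G, T)."""
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        out[i, table[base]] = 1.0
    return out

def kmer_counts(seq, k=3):
    """Count occurrences of every possible k-mer in the sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        counts[index[seq[i:i + k]]] += 1
    return counts

seq = "ACGTGGTCCA"                  # toy sequence, not a real 16S fragment
print(one_hot(seq).shape)           # (10, 4)
print(int(kmer_counts(seq).sum()))  # 8 overlapping 3-mers
```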
Procedia PDF Downloads 22
929 Multicomponent Positive Psychology Intervention for Health Promotion of Retirees: A Feasibility Study
Authors: Helen Durgante, Mariana F. Sparremberger, Flavia C. Bernardes, Debora D. DellAglio
Abstract:
Health promotion programmes for retirees, based on Positive Psychology perspectives for the development of strengths and virtues, demand broader empirical investigation in Brazil. In evidence-based applied research, it is suggested that feasibility studies be conducted prior to efficacy trials of an intervention, in order to identify and rectify possible faults in its design and implementation. The aim of this study was to evaluate the feasibility of a multicomponent Positive Psychology programme for health promotion of retirees, based on Cognitive Behavioural Therapy and Positive Psychology perspectives. The programme structure included six weekly group sessions (two hours each) encompassing strengths such as Values and self-care, Optimism, Empathy, Gratitude, Forgiveness, and Meaning of life and work. The feasibility criteria evaluated were: Demand, Acceptability, Satisfaction with the programme and with the moderator, Comprehension/Generalization of contents, Evaluation of the moderator (Social Skills and Integrity/Fidelity), Adherence, and programme implementation. Overall, 11 retirees (F=11), age range 54-75, from the metropolitan region of Porto Alegre, RS, Brazil took part in the study. The instruments used were: a Qualitative Admission Questionnaire; the Moderator Field Diary; the Programme Evaluation Form to assess participants' satisfaction with the programme and with the moderator (a six-item 4-point Likert scale) and Comprehension/Generalization of contents (a three-item 4-point Likert scale); and the Observers' Evaluation Form to assess the moderator's Social Skills (a five-item 4-point Likert scale), Integrity/Fidelity (a 10-item 4-point Likert scale), and Adherence (a nine-item 5-point Likert scale). Qualitative data were analyzed using content analysis. Descriptive statistics and intraclass correlation coefficients were used for quantitative data and inter-rater reliability analysis. The results revealed high demand (N = 55 interested people) and acceptability (n = 10 concluded the programme, with an overall 88.3% attendance rate), satisfaction with the programme and with the moderator (X = 3.76, SD = .34), and participants' self-reported Comprehension/Generalization of contents provided in the programme (X = 2.82, SD = .51). In terms of the moderator's Social Skills (X = 3.93; SD = .40; ICC = .752 [CI = .429-.919]), Integrity/Fidelity (X = 3.93; SD = .31; ICC = .936 [CI = .854-.981]), and participants' Adherence (X = 4.90; SD = .29; ICC = .906 [CI = .783-.969]), evaluated by two independent observers present in each session of the programme, descriptive and intraclass correlation results were considered adequate. Structural changes were introduced in the intervention design and implementation methods, and items were removed from questionnaires and evaluation forms. The results obtained were satisfactory, allowing changes to be made for further efficacy trials of the programme. Results are discussed taking cultural and contextual demands in Brazil into account.
Keywords: feasibility study, health promotion, positive psychology intervention, programme evaluation, retirees
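Since the abstract reports ICC values for two independent observers, a minimal sketch of one common form, ICC(2,1) (two-way random effects, absolute agreement, single rater), is given below; the abstract does not specify which ICC model was used, and the scores are hypothetical, not study data:

```python
import numpy as np

def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    sse = ((x - grand) ** 2).sum() - ssr - ssc        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical 4-point ratings by two observers over six sessions
scores = [[4, 4], [3, 4], [4, 4], [4, 3], [4, 4], [3, 3]]
print(round(icc2_1(scores), 3))
```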
Procedia PDF Downloads 193
928 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen
Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin
Abstract:
Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to the manufacturer package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86-99.99%) on 6047 blood donors and 99.71% (95% CI, 99.15-99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay; none were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19-100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03-0.87%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity performance equivalent to the Abbott ARCHITECT HBsAg Qualitative II assay, with an average bleed difference since first reactive bleed of 0.13. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay
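The reported confidence intervals are consistent with exact (Clopper-Pearson) binomial intervals; a sketch of that computation is shown below, where the 3 false positives among the 6047 donors are inferred from the reported 99.95% rate rather than stated in the abstract:

```python
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    """Exact two-sided confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

# 6047 donors with an assumed 3 false positives -> 6044 true negatives
lo, hi = clopper_pearson(6044, 6047)
print(f"specificity {6044/6047:.2%} (95% CI {lo:.2%} - {hi:.2%})")
```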
Procedia PDF Downloads 88
927 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India, being vast in area and population and with great scope for international business, is expected to see major growth in its roadway and railway networks. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong likelihood of repairing existing bridges or constructing new ones. Analysis and design of such bridges follow conventional procedures and end up with heavy and uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, have a greater chance of failing through instability, because the members are rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting the methods of analysis and the tools for numerical and analytical modeling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now extensively used for framed steel and concrete buildings to study their lateral behaviour, findings obtained by pushover analysis of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour, and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next member and so on under further loading. This kind of progressive collapse of a truss bridge structure depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. The ultimate collapse is in any case governed by the buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge, or a bridge with complicated dynamic behaviour, nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current level of analysis and design of bridges has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because performance levels for buildings deal mainly with life safety and collapse prevention, whereas for bridges they deal mostly with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. The paper compiles the wide spectrum of work, from modeling to analysis, on steel-concrete composite truss bridges in general.
Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
Procedia PDF Downloads 183
926 Human Dental Pulp Stem Cells Attenuate Streptozotocin-Induced Parotid Gland Injury in Rats
Authors: Gehan ElAkabawy
Abstract:
Background: Diabetes mellitus causes severe deterioration of almost all the organs and systems of the body, as well as significant damage to the oral cavity. The oral changes are mainly related to salivary gland dysfunction, characterized by hyposalivation and xerostomia, which significantly reduce diabetic patients' quality of life. Human dental pulp stem cells represent a promising source for cell-based therapies, owing to their easy, minimally invasive surgical access and high proliferative capacity. It has been reported that the trophic support mediated by dental pulp stem cells can rescue the functional and structural alterations of damaged salivary glands. However, the potential differentiation and paracrine effects of human dental pulp stem cells in diabetes-induced parotid gland damage have not been previously investigated. Our study aimed to investigate the therapeutic effects of intravenous transplantation of human dental pulp stem cells (hDPSCs) on parotid gland injury in a rat model of streptozotocin (STZ)-induced type 1 diabetes. Methods: Thirty Sprague-Dawley male rats were randomly categorised into three groups: control, diabetic (STZ), and transplanted (STZ+hDPSCs). hDPSCs or vehicle were injected into the tail vein 7 days after STZ injection. Fasting blood glucose levels were monitored weekly. A glucose tolerance test was performed, and the parotid gland weight, salivary flow rate, oxidative stress indices, parotid gland histology, and caspase-3, vascular endothelial growth factor (VEGF), and proliferating cell nuclear antigen (PCNA) expression in parotid tissues were assessed 28 days post-transplantation. Results: Transplantation of hDPSCs downregulated blood glucose, improved the salivary flow rate, and reduced oxidative stress. The cells migrated to, survived in, and differentiated into acinar, ductal, and myoepithelial cells in the STZ-injured parotid gland. Moreover, they downregulated the expression of caspase-3 and upregulated the expression of VEGF and PCNA, likely exerting pro-angiogenic and antiapoptotic effects and promoting endogenous regeneration. In addition, the transplanted cells enhanced the parotid nitric oxide (NO)-tetrahydrobiopterin (BH4) pathway. Conclusions: Our results show that hDPSCs can migrate to and survive within the STZ-injured parotid gland, where they prevent its functional and morphological damage by restoring normal glucose levels, differentiating into parotid cell populations, and stimulating paracrine-mediated regeneration. Thus, hDPSCs may have therapeutic potential in the treatment of diabetes-induced parotid gland injury.
Keywords: dental pulp stem cells, diabetes, streptozotocin, parotid gland
Procedia PDF Downloads 194
925 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends
Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman
Abstract:
Co-firing of coal and biomass blends is an effective method to reduce the carbon dioxide emissions released by burning coals, thanks to the carbon-neutral nature of biomass. Besides, the use of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have negative aspects, such as low calorific value and high moisture and volatile matter contents compared to coal. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment. That is, this technique improves the calorific value of biomass along with substantial reductions in the moisture and volatile matter contents. In this context, several woody biomass materials, including Rhododendron, hybrid poplar, and ash-tree, were subjected to a torrefaction process in a horizontal tube furnace at 200°C under nitrogen flow. In this way, the solid residue of torrefaction, also called 'biochar', was collected and analyzed to monitor the variations taking place in biomass properties. On the other hand, some Turkish lignites from the Elbistan, Adıyaman-Gölbaşı, and Çorum-Dodurga deposits were chosen as coal samples, since these lignites are of great importance for lignite-fired power stations in Turkey. These lignites were blended with the obtained biochars, for which the blending ratio of biochars was kept at 10 wt%, so that the lignites were the dominant constituents in the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900°C with a heating rate of 40°C/min under a dry air atmosphere. Based on these burning tests, properties relevant to burning characteristics, such as burning reactivity and burnout yields, could be compared to assess the effects of torrefaction and blending. Besides, characterization techniques including X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and scanning electron microscopy (SEM) were also applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites, and their blends to examine the co-combustion characteristics in detail. Results of this study revealed that blending lignite with 10 wt% biochar created synergistic behaviors during co-combustion in comparison to the individual burning of the ingredient fuels in the blends. Burnout and ignition performances of each blend were compared taking the lignite and biomass structures and characteristics into account, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites decreased due to the addition of biomass, the co-combustion process is a reasonable and sustainable solution owing to its environmentally friendly benefits, such as reductions in net carbon dioxide (CO2), SOx, and hazardous organic chemicals derived from volatiles.
Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment
Procedia PDF Downloads 338
924 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate dense point clouds. Classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. Support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. Often, the data are rarely linearly separable; SVMs are able to map the data into a higher-dimensional space where they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. A radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored. It consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The LiDAR point cloud used is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The class ground and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrated that parameter selection can orient the search within a restricted interval of (C and γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data over the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
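A minimal sketch of the grid search with 5-fold cross-validation described above, using scikit-learn's SVC with an RBF kernel; the feature matrix here is random placeholder data standing in for the per-cluster descriptors extracted from the ALS point cloud:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder features: in the study these would be per-cluster descriptors
# from the connected-component step; random data keeps the sketch runnable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))     # 200 clusters, 6 features each
y = rng.integers(0, 4, size=200)  # 4 classes: ground + 3 roof classes

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```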
Procedia PDF Downloads 146
923 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and play an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal component analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key to correct signaling of non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassifications of non-conforming batches in the conching phase may lead to significant financial losses. In such a context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process. As a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input into the KNN classification algorithm. Kassidas, MacGregor and Taylor's method (named KMT) was deemed the best DTW method for aligning and synchronizing the milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
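For readers unfamiliar with DTW, the sketch below implements the classic dynamic programming recursion on two toy trajectories of unequal length; it is generic DTW, not the KMT variant the authors ultimately recommend:

```python
import numpy as np

def dtw_cost(ref, query):
    """Classic dynamic time warping between two 1-D trajectories.
    Returns the accumulated-cost matrix; the warping path that aligns
    the query to the reference can be backtracked from the last cell."""
    n, m = len(ref), len(query)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(ref[i - 1] - query[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[1:, 1:]

# Toy trajectories standing in for, e.g., conche motor current over time
ref = np.sin(np.linspace(0, 3, 100))
query = np.sin(np.linspace(0, 3, 130))  # same shape, different batch duration
print(dtw_cost(ref, query)[-1, -1])     # small value -> trajectories align well
```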
Procedia PDF Downloads 165
922 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes in RDF graphs, named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the SHACL framework supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL has two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of data reconciliation mechanisms, as linking different datasets is an ongoing process. The conventional validation methods are the semantic reasoner and SPARQL queries: the former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both the source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running a Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data are mapped to the correct data type, for instance, whether a birthdate is assigned to xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for these various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
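A minimal sketch of the kind of data-type shape described in category (a), runnable with rdflib and pySHACL; the namespace IRIs and the example triple are illustrative assumptions, not taken from the authors' dataset:

```python
from rdflib import Graph
from pyshacl import validate

# Illustrative shape: values of crm:P82a_begin_of_the_begin must be xsd:dateTime.
shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:BirthDateShape a sh:NodeShape ;
    sh:targetSubjectsOf crm:P82a_begin_of_the_begin ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
    ] .
"""

# Illustrative data: typed as xsd:date, so it should fail the shape.
data_ttl = """
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

ex:person1 crm:P82a_begin_of_the_begin "1850-03-01"^^xsd:date .
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")
conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: the xsd:date literal violates the dateTime constraint
```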
Procedia PDF Downloads 251
921 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City
Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja
Abstract:
This study was undertaken to measure the overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers in Mumbai, India. The Hearing Test developed by the American Academy of Otolaryngology, translated from English to Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items, each scored on a scale of 0, 1, 2, or 3. A score of 6 or above indicated some or definite difficulty in hearing in daily activities, while a lower score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above, or who had tinnitus, underwent hearing evaluation by pure-tone audiometry. Further, the environmental noise levels were measured from morning to evening at the roadside at different locations/hawking zones in Mumbai using an SLM9 Agronic 8928B & K type digital sound level meter, in dB(A). The maximum noise level of 100.0 dB(A) was recorded during evening hours from Chhatrapati Shivaji Terminal to Colaba, with an overall noise level of 79.0 dB(A); the minimum noise level in this area was 72.6 dB(A) at any given point of time. Further, 54.6 dB(A) was recorded as the minimum noise level during 8-9 am at Sion Circle. The commissioning of flyovers with two-tier traffic, skywalks, increasing vehicular traffic on the roads, high-rise buildings, and other commercial and urbanization activities in Mumbai have most probably increased the overall environmental noise levels, while trees, which acted as noise absorbers, have been cut owing to rapid construction. The study involved 100 participants in the age range of 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). The 46 participants who had tinnitus or obtained a score of 6 underwent pure-tone audiometry, and it was found that the prevalence rate of hearing loss in hawkers and shopkeepers is 19% (10% hawkers and 9% shopkeepers). The results indicate that 29 (42.6%) of the 64 hawkers and 17 (47.2%) of the 36 shopkeepers underwent PTA, with no significant difference in the percentage of noise-induced hearing loss between the two groups. The study results also reveal that 19 (41.30%) of the 46 participants who exhibited tinnitus had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure-tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal. Seven hawkers and 8 shopkeepers had a mild notch, while 3 hawkers and 1 shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss and that a 4/6 kHz notch is a strong marker for road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups, and early intervention, together with sustainable development juxtaposed with social and urban forestry, can help in this regard.
Keywords: NIHL, noise, sound level meter, tinnitus
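A trivial sketch of the screening rule described above (14 items scored 0-3, referral at a total of 6 or more); the item responses are hypothetical, and the questionnaire wording is not reproduced:

```python
def hearing_screen(item_scores):
    """Total the 14 item scores (each 0-3); a total of 6 or more flags
    possible hearing difficulty and triggers pure-tone audiometry."""
    assert len(item_scores) == 14 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    return total, total >= 6

# Hypothetical respondent reporting mild difficulty on a few items
total, refer = hearing_screen([1, 0, 0, 2, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0])
print(total, refer)  # 6 True -> refer for audiometry
```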
Procedia PDF Downloads 198
920 The Expression of the Social Experience in Film Narration: Cinematic ‘Free Indirect Discourse’ in the Dancing Hawk (1977) by Grzegorz Krolikiewicz
Authors: Robert Birkholc
Abstract:
One of the basic issues related to the creation of characters in media such as literature and film is the representation of the characters' thoughts, emotions, and perceptions. This paper is devoted to the social perspective (or focalization) expressed in film narration. The aim of the paper is to show how the social point of view of the hero, conditioned by his origin and the environment from which he comes, can be created by using non-verbal, purely audiovisual means of expression. The issue is considered through the example of the little-known Polish movie The Dancing Hawk (1977) by Grzegorz Królikiewicz, based on the novel by Julian Kawalec. The thesis of the paper is that the Polish director uses a narrative figure somewhat analogous to the literary form of free indirect discourse. In literature, free indirect discourse is formally 'spoken' by the external narrator, but the narration is clearly filtered through the language and thoughts of the character. According to some scholars (such as Roy Pascal), the narrator in this form of speech does not cite the character's words but uses his way of thinking and imitates his perspective, sometimes with deep irony. Free indirect discourse is frequently used in Julian Kawalec's novel. Through linguistic stylization, the author tries to convey the socially determined perspective of a peasant who migrates to the big city after the Second World War. Grzegorz Królikiewicz expresses the same social experience through pure cinematic form in his adaptation of the book. Both Kawalec and Królikiewicz show the consequences of so-called 'social advancement' in Poland after 1945, when the communist party took over political power. Through the fate of the main character, Michał Toporny, the director presents the experience of peasants who left their villages and had to adapt to a new, urban space. However, the paper is not focused on the historical topic itself, but on the audiovisual form of the movie. Although Królikiewicz does not frequently use POV shots, the narration of The Dancing Hawk is filtered through the sensations of the main character, who feels uprooted and alienated in the new social space. The director captures the hero's feelings through very complex audiovisual procedures: high or low points of view (representing the 'social position'), a grotesque soundtrack, expressionist scenery, and associative editing. In this way, he manages to create the world from the perspective of a socially maladjusted and internally split subject. The Dancing Hawk is a successful attempt to adapt the subjective narration of the book to the 'language' of the cinema. Mieke Bal's notion of focalization helps to describe 'free indirect discourse' as a transmedial figure for representing characters' perceptions. However, the polysemiotic medium of film also significantly transforms this figure of representation. The paper shows both the similarities and the differences between literary and cinematic 'free indirect discourse.'
Keywords: film and literature, free indirect discourse, social experience, subjective narration
Procedia PDF Downloads 130
919 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment
Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov
Abstract:
Filtration with nanoporous ultrafiltration membranes is an attractive option to remove ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are commonly used for surface modification, but a novel technique based on electrospray is proposed here. Electrospray deposition is a technique that has not been used for membrane modification up to now. It consists of spraying small drops of polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is stuck. The advantage of this process lies in the small quantities of polymer that can be coated on the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer unit), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the polymer deposition was then investigated by estimating ion rejection, permeation flux, zeta-potential, and contact angle before and after deposition. Firstly, contact angle (θ) measurements show that surface hydrophilicity is notably improved by coating either PEI or PSS. Moreover, the contact angle decreases monotonically with the amount of sprayed solution. Additionally, the hydrophilicity enhancement proved to be better with PSS (from 62 to 35°) than with PEI (from 62 to 53°). Values of zeta-potential (ζ) were estimated by measuring the streaming current generated by a pressure difference on both sides of a channel made by clamping two membranes. The ζ-values demonstrate that deposits of PSS (negative at pH=5.5) increase the negative membrane charge, whereas deposits of PEI (positive) lead to a positive surface charge. Zeta-potential measurements also emphasize that the sprayed quantity has little impact on the membrane charge, except for very low quantities (2 μL/cm²). Cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition strongly enhances ion rejection. For instance, rejection of a salt containing a divalent cation can be increased from 1 to 20% and even to 35% by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is consistent with the reversal of the membrane charge induced by PEI deposition. Similarly, the increase in negative charge induced by PSS deposition leads to an increase in NaCl rejection from 5 to 45% due to electrostatic repulsion of the Cl- ion by the negative surface charge. Finally, a notable fall in permeation flux due to the polymer layer coated on the surface was observed, and the best polymer concentration in the sprayed solution remains to be determined to optimize performance.
Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity
Procedia PDF Downloads 186
918 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications
Authors: Marina Shargorodsky
Abstract:
Aim: Gestational weight gain (GWG) has been related to altered future weight-gain curves and increased risks of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid intima-media thickness (IMT), is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using IMT, as well as on placental vascular circulation, inflammatory lesions, and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent a placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental histology subdivided placental findings into lesions consistent with maternal vascular and fetal vascular malperfusion, as well as inflammatory responses of maternal and fetal origin, according to the criteria of the Society for Pediatric Pathology. Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7+/-0.1 vs 0.6+/-0.1, p=0.028). Multiple regression analysis of IMT included variables based on their associations in univariate analyses, with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol, and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly more frequent in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on the early stages of subclinical atherosclerosis, placental vascular circulation, and neonatal complications. The precise mechanism for these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation, and pregnancy outcomes, deserves further investigation.
Keywords: obesity, pregnancy, complications, weight gain
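The reported odds ratio and classification rate correspond to a logistic model; a hedged sketch of how such ORs are typically obtained is given below, with simulated placeholder predictors standing in for pre-gestational BMI, HDL cholesterol, and fasting glucose, not patient data:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: 59 subjects, 3 predictors, binary high-IMT outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(59, 3))
y = (X[:, 0] + rng.normal(size=59) > 0).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(model.params)                # ORs = exp(coefficients)
accuracy = ((model.predict() > 0.5) == y).mean()  # "correctly classified"
print(odds_ratios.round(3), f"{accuracy:.1%}")
```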
Procedia PDF Downloads 51
917 Variability Studies of Seyfert Galaxies Using Sloan Digital Sky Survey and Wide-Field Infrared Survey Explorer Observations
Authors: Ayesha Anjum, Arbaz Basha
Abstract:
Active galactic nuclei (AGN) are the actively accreting centers of galaxies that host supermassive black holes. AGN emit radiation at all wavelengths and also show variability across all wavelength bands. The analysis of flux variability tells us about the morphology of the site of emission. Some of the major classes of AGN are (a) blazars, with featureless spectra, subclassified as BL Lacertae objects, flat spectrum radio quasars (FSRQs), and others; (b) Seyferts, with prominent emission line features, classified into broad-line and narrow-line Seyferts of Type 1 and Type 2; and (c) quasars and other types. The Sloan Digital Sky Survey (SDSS) is an optical telescope based in New Mexico, USA, that has observed and classified billions of objects using automated photometric and spectroscopic methods. A sample of blazars was obtained from the third Fermi catalog. For variability analysis, we searched for light curves for these objects in the Wide-Field Infrared Survey Explorer (WISE) and Near-Earth Object WISE (NEOWISE) in two bands, W1 (3.4 microns) and W2 (4.6 microns), reducing the final sample to 256 objects. These objects are classified into 155 BL Lacs, 99 FSRQs, and 2 narrow-line Seyferts, namely PMN J0948+0022 and PKS 1502+036. Mid-infrared variability studies of these objects would be a contribution to the literature. With this as motivation, the present work is focused on studying the final sample of 256 objects in general and the Seyferts in particular. Because the classification is automated, SDSS has misclassified some of these objects as quasars, galaxies, and stars; the reasons for the misclassification are explained in this work. The variability analysis of these objects is done using the methods of flux amplitude variability and excess variance. The sample consists of observations in both W1 and W2 bands. PMN J0948+0022 was observed between MJD 57154.79 and 58810.57. PKS 1502+036 was observed between MJD 57232.42 and 58517.11, which amounts to a period of about three and a half years. The data are divided into epochs spanning not more than 1.2 days. In all the epochs, the sources are found to be variable in both W1 and W2 bands. This confirms that the objects are variable in mid-infrared wavebands on both long and short timescales. Also, the sources were examined for color variability: objects show either a bluer-when-brighter (BWB) trend or a redder-when-brighter (RWB) trend. A possible explanation for the present objects being BWB is that the longer-wavelength radiation emitted by the source can be suppressed by the high-energy radiation from the central source. Another result is that the smallest radius of the emission source is one light-day, since the epoch span used in this work is one day. The masses of the black holes at the centers of these sources are found to be less than or equal to 10⁸ solar masses.
Keywords: active galaxies, variability, Seyfert galaxies, SDSS, WISE
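A sketch of the normalized excess variance statistic mentioned above, in the common form used for AGN light curves (e.g., Vaughan et al. 2003); the fluxes below are toy values, not NEOWISE measurements:

```python
import numpy as np

def normalized_excess_variance(flux, flux_err):
    """Variance in excess of measurement noise, scaled by the squared mean:
    sigma2_NXS = (S^2 - <err^2>) / <x>^2."""
    flux = np.asarray(flux, dtype=float)
    s2 = flux.var(ddof=1)                       # total sample variance
    noise = np.mean(np.asarray(flux_err) ** 2)  # mean squared measurement error
    return (s2 - noise) / flux.mean() ** 2

# Toy W1-band light curve with uniform errors
flux = np.array([10.2, 10.5, 9.9, 10.8, 10.1, 10.4])
err = np.full(6, 0.05)
print(normalized_excess_variance(flux, err))  # > 0 suggests intrinsic variability
```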
Procedia PDF Downloads 128
916 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology
Authors: Mahdi Farajzadeh Ajirlou
Abstract:
Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early termination of pregnancy or close follow-up should a cardiac anomaly be proved. Fetal heart assessment plays an important part in the evaluation of the fetus, as it reflects fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to quantify valve regurgitation and ventricular function and thereby determine the degree of cardiovascular compromise in fetal conditions at risk for hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS), four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. Doppler ultrasound (DUS) imaging is a kind of real-time imaging that depicts blood vessels and soft tissues well. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data concomitant with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set corresponding to numerous 2D frames one behind the other. The volume acquisition can be done as static 3D, as online 4D (direct volume scan, live 3D ultrasound, the so-called 4D or 3D/4D), or as spatiotemporal image correlation (STIC), an off-line 4D cyclic volume acquisition. Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development. Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data by maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), a set of minimally invasive catheter interventions, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disease. It is entirely possible that future generations will 'repair' congenital heart disease in utero using nanotechnologies or remote computer-guided micro-robots that work at the cellular level.
Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter
Procedia PDF Downloads 37
915 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research will provide new insights into the risks of digitally embedded infrastructure. Through this research, we analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks involving hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following are the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the weight of the data collected will affect data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware: anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via blockchain keys.
Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
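As a toy illustration of the tamper-evidence underlying the proposed blockchain validation (without the peer-to-peer consensus layer itself), the sketch below hash-chains sensor readings and shows that altering a recorded value invalidates the chain:

```python
import hashlib, json, time

def make_block(payload, prev_hash):
    """Tamper-evident ledger entry: each block commits to its predecessor."""
    block = {"time": time.time(), "payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("time", "payload", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != digest:
            return False                       # block content was altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                       # link to predecessor broken
    return True

chain = [make_block({"sensor": "t1", "value": 21.4}, prev_hash="genesis")]
chain.append(make_block({"sensor": "t1", "value": 21.6}, chain[-1]["hash"]))
chain[0]["payload"]["value"] = 99.9   # tamper with a recorded reading
print(chain_is_valid(chain))          # False: peers would reject this history
```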
Procedia PDF Downloads 106914 The Effect of Degraded Shock Absorbers on the Safety-Critical Tipping and Rolling Behaviour of Passenger Cars
Authors: Tobias Schramm, Günther Prokop
Abstract:
In Germany, the number of road fatalities has been falling since 2010 at a more moderate rate than before. At the same time, the average age of all registered passenger cars in Germany is rising continuously. Studies show that there is a correlation between the age and mileage of passenger cars and the degradation of their chassis components. Various studies show that degraded shock absorbers increase the braking distance of passenger cars and have a negative impact on driving stability. The exact effect of degraded vehicle shock absorbers on road safety is still the subject of research. A shock absorber examination as part of the periodic technical inspection is mandatory in only very few countries. In Germany, there is as yet no requirement for such a shock absorber examination. More comprehensive findings on the effect of degraded shock absorbers on the safety-critical driving dynamics of passenger cars can provide further arguments for the introduction of mandatory shock absorber testing as part of the periodic technical inspection. The specific effect chains of untripped rollover accidents are also still the subject of research. However, current research results show that the high proportion of sport utility vehicles in the vehicle fleet significantly increases the probability of untripped rollover accidents. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical tipping and rolling behaviour of passenger cars, which can lead to untripped rollover accidents. A characteristic curve-based five-mass full vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized and validated. The shock absorber model is able to reproduce the damping characteristics of vehicle twin-tube shock absorbers with oil and gas loss for various excitations. The full vehicle model was validated with steering-wheel angle sine-sweep driving maneuvers. The model was then used to simulate steering-wheel angle sine and fishhook maneuvers, which investigate the safety-critical tipping and rolling behavior of passenger cars. The simulations were carried out in a realistic parameter space in order to demonstrate the effect of various vehicle characteristics on the effect of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the tipping and rolling behavior of all passenger cars. Shock absorber degradation leads to a significant increase in the observed roll angles, particularly in the range of the roll natural frequency. This roll superelevation has a negative effect on the wheel load distribution during the driving maneuvers investigated. In particular, the height of the vehicle's center of gravity and the stabilizer stiffness have a major influence on the effect of degraded shock absorbers on the tipping and rolling behaviour of passenger cars.Keywords: numerical simulation, safety-critical driving dynamics, suspension degradation, tipping and rolling behavior of passenger cars, vehicle shock absorber
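The abstract does not reproduce the five-mass model, but the reported roll-angle superelevation near the roll natural frequency can be illustrated with a minimal single-degree-of-freedom roll model. The sketch below is a toy model under stated assumptions: all parameters (inertia, stiffness, damping, CG height) are hypothetical round numbers, not values from the study.

```python
import math

# Minimal 1-DOF roll model (illustrative only; the paper uses a
# characteristic-curve-based five-mass model). Parameters are hypothetical.
I_ROLL = 600.0     # roll moment of inertia, kg*m^2
K_ROLL = 80000.0   # effective roll stiffness (springs + stabilizer), N*m/rad
M = 1600.0         # sprung mass, kg
H = 0.55           # height of CG above roll axis, m

def peak_roll_angle(c_roll: float, freq_hz: float, a_y: float = 4.0) -> float:
    """Steady-state roll amplitude (rad) under sinusoidal lateral acceleration."""
    w = 2.0 * math.pi * freq_hz
    f0 = M * H * a_y  # roll moment amplitude, N*m
    # amplitude of a damped forced oscillator: F0 / sqrt((k - I*w^2)^2 + (c*w)^2)
    return f0 / math.sqrt((K_ROLL - I_ROLL * w**2) ** 2 + (c_roll * w) ** 2)

f_n = math.sqrt(K_ROLL / I_ROLL) / (2.0 * math.pi)  # roll natural frequency, ~1.8 Hz
for label, c in [("intact damper", 6000.0), ("degraded damper", 2400.0)]:
    phi = math.degrees(peak_roll_angle(c, f_n))
    print(f"{label}: peak roll angle near f_n = {phi:.1f} deg")
```

At the roll natural frequency the stiffness and inertia terms cancel, so the peak roll angle scales inversely with the damping coefficient; a 60% loss of damping raises the roll amplitude 2.5-fold in this toy model, qualitatively mirroring the superelevation the study reports.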
Procedia PDF Downloads 4913 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function
Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio
Abstract:
Pancreatic endocrine diseases include pathologies like insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of them are highly prevalent in the U.S. (40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2), as reported by the National Center for Biotechnology Information (NCBI). Building upon this imperative, the objective of the present study was to develop a non-invasive test for the assessment of the patient's pancreatic endocrine function and to evaluate its accuracy in detecting various pancreatic endocrine diseases, such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers. It is built from a combination of several independent, publicly available algorithms, such as the Adult Treatment Panel III (ATP-III), the triglycerides and glucose (TyG) index, homeostasis model assessment-insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, estimated glomerular filtration rate (eGFR), urine albumin-to-creatinine ratio (ACR), and urinalysis, which are helpful to achieve a full picture of the patient's pancreatic endocrine disease. To evaluate the estimated accuracy of this test, an iterative process was performed by a machine learning (ML) algorithm, with a training set of 9,391 patients. The sensitivity achieved was 97.98% and the specificity was 99.13%. The corresponding area under the receiver operating characteristic (AUROC) curve, positive predictive value (PPV), and negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated with a randomized controlled trial (RCT) with a target sample size (n) of 314 patients. However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms or signs, so the n dropped to 264 patients. Then, 110 patients were excluded because they did not show up at the clinical facility for any of the follow-up visits, leaving an n of 154 patients; this is a critical point to improve for the upcoming RCT, since the cost of each patient is very high and almost a third of the patients already tested were lost. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were erroneous. Thus, a final n of 152 patients was achieved. In this validation set, the results obtained were: 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise in providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these types of diseases, these findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).Keywords: algorithm, diabetes, laboratory medicine, non-invasive
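Several of the public algorithms named above have standard published closed forms. As a sketch only, not the authors' combined scoring logic, the textbook formulas for HOMA-IR, the TyG index, and QUICKI can be computed as follows; the patient values are hypothetical.

```python
import math

def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (mass units)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def tyg_index(triglycerides_mg_dl: float, glucose_mg_dl: float) -> float:
    """Triglyceride-glucose index: ln(TG * glucose / 2)."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """Quantitative insulin-sensitivity check index."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Hypothetical fasting labs for one patient (not study data)
glucose, insulin, tg = 105.0, 14.0, 180.0
print(f"HOMA-IR: {homa_ir(glucose, insulin):.2f}")  # values above ~2.5 are often read as IR
print(f"TyG:     {tyg_index(tg, glucose):.2f}")     # higher = more IR; cut-offs vary by population
print(f"QUICKI:  {quicki(glucose, insulin):.3f}")   # values below ~0.33 commonly suggest IR
```

How these indices are weighted against ATP-III criteria, eGFR, ACR, and urinalysis to produce the final test output is the authors' contribution and is not described in the abstract.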
Procedia PDF Downloads 32912 The Hidden Mechanism beyond Ginger (Zingiber officinale Rosc.) Potent in vivo and in vitro Anti-Inflammatory Activity
Authors: Shahira M. Ezzat, Marwa I. Ezzat, Mona M. Okba, Esther T. Menze, Ashraf B. Abdel-Naim, Shahnas O. Mohamed
Abstract:
Background: In order to decrease the burden of the high cost of synthetic drugs, it is important to focus on phytopharmaceuticals. The aim of our study was to search for the mechanism of ginger (Zingiber officinale Roscoe) anti-inflammatory potential and to correlate it to its biophytochemicals. Methods: Various extracts, viz. water, 50%, 70%, 80%, and 90% ethanol, were prepared from ginger rhizomes. Fractionation of the aqueous extract (AE) was accomplished using Diaion HP-20. In vitro anti-inflammatory activity of the different extracts and isolated compounds was evaluated by protein denaturation inhibition, membrane stabilization, protease inhibition, and anti-lipoxygenase assays. In vivo anti-inflammatory activity of AE was estimated by assessment of rat paw oedema after carrageenan injection. Prostaglandin E2 (PGE2) levels, certain inflammation markers (TNF-α, IL-6, IL-1α, IL-1β, IFN-γ, MCP-1, MIP, RANTES, and NOₓ), and MPO activity in the paw oedema exudates were measured. Total antioxidant capacity (TAC) was also determined. Histopathological alterations of paw tissues were scored. Results: All the tested extracts showed significant (p < 0.1) anti-inflammatory activities. The highest percentage of inhibition of heat-induced albumin denaturation (66%) was exhibited by the 50% ethanol extract (250 μg/ml). The 70 and 90% ethanol extracts (500 μg/ml) were more potent as membrane stabilizers (34.5 and 37%, respectively) than diclofenac (33%). The 80 and 90% ethanol extracts (500 μg/ml) showed maximum protease inhibition (56%). The strongest anti-lipoxygenase activity was observed for the AE, which showed more significant lipoxygenase inhibition than diclofenac (58% and 52%, respectively) at the same concentration (125 μg/ml). Fractionation of AE yielded four main fractions (Fr I-IV), which showed significant in vitro anti-inflammatory activity. Purification of Fr-III and IV led to the isolation of 6-paradol (G1), 6-shogaol (G2), methyl 6-gingerol (G3), 5-gingerol (G4), 6-gingerol (G5), 8-gingerol (G6), 10-gingerol (G7), and 1-dehydro-6-gingerol (G8). G2 (62.5 μg/ml), G1 (250 μg/ml), and G8 (250 μg/ml) exhibited potent anti-inflammatory activity in all studied assays, while G4 and G5 exhibited moderate activity. In vivo administration of AE ameliorated rat paw oedema in a dose-dependent manner. AE (at 200 mg/kg) showed a significant reduction (60%) of PGE2 production. The AE at different doses (25-200 mg/kg) showed significant reductions in inflammatory markers except for IL-1α. AE (at 25 mg/kg) was superior to indomethacin in reduction of IL-1β. Treatment of animals with the AE (100, 200 mg/kg) or indomethacin (10 mg/kg) showed significant reductions in TNF-α, IL-6, MCP-1, and RANTES levels and in MPO activity, by about 31, 57, and 32%; 65, 60, and 57%; 27, 41, and 28%; 23, 32, and 23%; and 66, 67, and 67%, respectively. AE at 100 and 200 mg/kg was equipotent to indomethacin in reducing the NOₓ level and in increasing the TAC. Histopathological examination revealed very little inflammatory cell infiltration and oedema after administration of AE (200 mg/kg) prior to carrageenan. Conclusion: Ginger anti-inflammatory activity is mediated by inhibiting macrophage and neutrophil activation as well as negatively affecting monocyte and leukocyte migration. Moreover, it produced a dose-dependent decrease in pro-inflammatory cytokines and chemokines and replenished the total antioxidant capacity. 
We strongly recommend future investigation of ginger's effects on the underlying signal transduction pathways.Keywords: anti-lipoxygenase activity, inflammatory markers, 1-dehydro-6-gingerol, 6-shogaol
Procedia PDF Downloads 250911 Effect of the Polymer Modification on the Cytocompatibility of Human and Rat Cells
Authors: N. Slepickova Kasalkova, P. Slepicka, L. Bacakova, V. Svorcik
Abstract:
Tissue engineering includes the combination of materials and techniques used for the improvement, repair or replacement of tissue. Scaffolds, permanent or temporary materials, are used as supports for the creation of "new cell structures". For this important component (the scaffold), a variety of materials can be used. The advantage of some polymeric materials is their cytocompatibility and possibility of biodegradation. Poly(L-lactic acid) (PLLA) is a biodegradable, semi-crystalline thermoplastic polymer. PLLA can be fully degraded into H2O and CO2. In this experiment, the effect of the surface modification of the biodegradable polymer (performed by plasma treatment) on various cell types was studied. The surface parameters and changes in the physicochemical properties of modified PLLA substrates were studied by different methods. Surface wettability was determined by goniometry, surface morphology and roughness studies were performed with atomic force microscopy, and chemical composition was determined using photoelectron spectroscopy. The physicochemical properties were studied in relation to the cytocompatibility of human osteoblasts (MG 63 cells), rat vascular smooth muscle cells (VSMC), and human adipose-derived stem cells (ASC) in vitro. Fluorescence microscopy was chosen to study and compare cell-material interactions. Important cytocompatibility parameters such as adhesion, proliferation, viability, shape, and spreading of the cells were evaluated. It was found that the modification leads to changes in surface wettability depending on the modification time. Short exposure times (10-120 s) can reduce the wettability of the aged samples, while exposure longer than 150 s increases the contact angle of the aged PLLA. The surface morphology is also significantly influenced by the duration of modification. The plasma treatment induces the formation of crystallites, whose number increases with increasing modification time. On the basis of the physicochemical evaluation, the cells were cultivated on the selected samples. Cell-material interactions are strongly affected by the material's chemical structure and surface morphology. It was proved that the plasma treatment of PLLA has a positive effect on the adhesion, spreading, homogeneity of distribution and viability of all cultivated cells. This effect was even more apparent for the VSMCs and ASCs, which homogeneously covered almost the whole surface of the substrate after 7 days of cultivation. The viability of these cells was high (more than 98% for VSMCs, 89-96% for ASCs). This experiment is one part of basic research which aims to easily create scaffolds for tissue engineering, with subsequent use of stem cells and their "reorientation" towards bone cells or smooth muscle cells.Keywords: poly(L-lactic acid), plasma treatment, surface characterization, cytocompatibility, human osteoblast, rat vascular smooth muscle cells, human stem cells
Procedia PDF Downloads 227910 Spatial Pattern of Farm Mechanization: A Micro Level Study of Western Trans-Ghaghara Plain, India
Authors: Zafar Tabrez, Nizamuddin Khan
Abstract:
Agriculture in India in the pre-green revolution period was mostly controlled by terrain, climate and edaphic factors. But after the introduction of innovative factors and technological inputs, the green revolution occurred and the agricultural scene witnessed great change. In the development of India's agriculture, the speedy and extensive introduction of technological change is one of the crucial factors. The technological change consists of the adoption of farming techniques such as the use of fertilisers, pesticides and fungicides, improved seed varieties, modern agricultural implements, improved irrigation facilities, and contour bunding for the conservation of moisture and soil, which are developed through research and calculated to bring about diversification, increased production and greater economic return to the farmers. The green revolution in India took place during the late 1960s, equipped with technological inputs like high-yielding variety seeds, assured irrigation, and modern machines and implements. Initially, the revolution started in Punjab, Haryana and western Uttar Pradesh. With the efforts of the government, agricultural planners and policy makers, the modern technocratic agricultural development scheme was later implemented in backward and marginal regions of the country as well. The agriculture sector occupies the centre stage of India's social security and overall economic welfare. The country has attained self-sufficiency in food grain production and also has a sufficient buffer stock. India's first Prime Minister, Jawaharlal Nehru, said, 'everything else can wait but not agriculture'. There is still a continuous change in technological inputs and cropping patterns. Keeping these points in view, the author attempts to investigate extensively the mechanization of agriculture and its change, selecting the western Trans-Ghaghara plain as a case study with the block as the unit of study. It includes the districts of Gonda, Balrampur, Bahraich and Shravasti, which incorporate 44 blocks. It is based on secondary sources of data by blocks for the years 1997 and 2007. It may be observed that there is a wide range of variation and change in farm mechanization, i.e., in agricultural machinery such as wooden and iron ploughs, advanced harrows and cultivators, advanced threshing machines, sprayers, advanced sowing implements, and tractors. It may be further noted that due to the continuous decline in the size of land holdings and the outflow of people seeking the same kind of work or employment in non-agricultural sectors, the magnitude and direction of agricultural systems are affected in the study area, which is one of the marginalized regions of Uttar Pradesh, India.Keywords: agriculture, technological inputs, farm mechanization, food production, cropping pattern
Procedia PDF Downloads 311909 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process
Authors: Stehr Melanie
Abstract:
The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g. sales visits, trade shows) have been unavailable, and usage of digitally-driven information channels (e.g. videoconferencing, platforms) has skyrocketed. This paper explores in which areas the pandemic-induced shift in the use of information channels could be sustainable and in which it is a temporary phenomenon. While information and buying behavior in B2C purchases have been regularly studied in the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as a theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. An industry with traditional communication structures undergoing a shift in information behavior induced by an exogenous shock is considered a ripe research setting. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory-grid method. Thus, 120 typical buying situations in the woodworking industry and the information and channels relevant to them are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process. In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally-driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. In particular, the focus on the dynamic aspects of information behavior after an exogenous shock might contribute new impulses to theoretical debates related to the economics of information theory. For practitioners - especially suppliers' marketing managers and intermediaries such as publishers or trade show organizers from the woodworking industry - the study shows wide-ranging starting points for a future-oriented segmentation of their marketing program by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of their information channels.Keywords: B2B buying process, crisis, economics of information theory, information channel
Procedia PDF Downloads 183908 Exploring the Energy Saving Benefits of Solar Power and Hot Water Systems: A Case Study of a Hospital in Central Taiwan
Authors: Ming-Chan Chung, Wen-Ming Huang, Yi-Chu Liu, Li-Hui Yang, Ming-Jyh Chen
Abstract:
Introduction: Hospital buildings require considerable energy for air conditioning, lighting, elevators, heating, and medical equipment. Energy consumption in hospitals is expected to increase significantly due to innovative equipment and continuous development plans. Consequently, the environment and climate will be adversely affected. Hospitals should therefore consider transforming from their traditional role of saving lives to being at the forefront of global efforts to reduce carbon dioxide emissions. As healthcare providers, it is our responsibility to provide a high-quality environment while using as little energy as possible. Purpose / Methods: To compare the energy-saving benefits of solar photovoltaic systems and solar hot water systems, and to determine the proportion of electricity consumption effectively reduced after the installation of a solar photovoltaic system. To comprehensively assess the potential benefits of utilizing solar energy for both photovoltaic (PV) and solar thermal applications in hospitals, a solar PV system covering a total area of 28.95 square meters was installed in 2021. Approval was obtained from the Taiwan Power Company to integrate the system into the hospital's electrical infrastructure for self-use. To measure the performance of the system, a dedicated meter was installed to track monthly power generation, which was then converted into output per unit area using an electric energy conversion factor. This research aims to compare the energy efficiency of solar PV systems and solar thermal systems. Results: Using the conversion formula between electrical and thermal energy, we can compare the energy output of solar heating systems and solar photovoltaic systems. The comparative study draws upon data from February 2021 to February 2023, wherein the solar heating system generated an average of 2.54 kWh of energy per panel per day, while the solar photovoltaic system produced 1.17 kWh of energy per panel per day, a difference of approximately 2.17 times between the two systems. Conclusions: After conducting statistical analysis and comparisons, it was found that solar thermal heating systems offer higher energy output and greater benefits than solar photovoltaic systems. Furthermore, an examination of literature data and simulations of the energy and economic benefits of solar thermal water systems and solar-assisted heat pump systems revealed that solar thermal water systems have higher energy density values, shorter cost-recovery periods, and lower power consumption than solar-assisted heat pump systems. Through monitoring and empirical research in this study, it has been concluded that a heat pump-assisted solar thermal water system represents a relatively superior energy-saving and carbon-reducing solution for medical institutions. Not only can this system help reduce overall electricity consumption and the use of fossil fuels, but it can also provide more effective heating solutions.Keywords: sustainable development, energy conservation, carbon reduction, renewable energy, heat pump system
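The headline per-panel comparison can be checked directly from the quoted averages. The short sketch below reproduces the ~2.17x ratio; the panel count and annualisation are added purely as hypothetical planning arithmetic, not study data.

```python
# Worked check of the per-panel comparison quoted above; the daily averages
# come from the abstract, everything else is straightforward arithmetic.
solar_thermal_kwh_per_panel_day = 2.54
solar_pv_kwh_per_panel_day = 1.17

ratio = solar_thermal_kwh_per_panel_day / solar_pv_kwh_per_panel_day
print(f"thermal/PV energy ratio per panel: {ratio:.2f}x")  # ~2.17x, as reported

# Hypothetical annualisation for planning purposes (panel count assumed):
days, panels = 365, 40
annual_gap_kwh = (solar_thermal_kwh_per_panel_day
                  - solar_pv_kwh_per_panel_day) * days * panels
print(f"extra energy from {panels} thermal panels per year: {annual_gap_kwh:,.0f} kWh")
```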
Procedia PDF Downloads 80907 Role of Empirical Evidence in Law-Making: Case Study from India
Authors: Kaushiki Sanyal, Rajesh Chakrabarti
Abstract:
In India, on average, about 60 Bills are passed every year in both Houses of Parliament – Lok Sabha and Rajya Sabha (calculated from information on the websites of both Houses). These are debated in both Lok Sabha (House of the People) and Rajya Sabha (Council of States) before they are passed. However, lawmakers rarely use empirical evidence to make a case for a law. Most of the time, they support a law on the basis of anecdote, intuition, and common sense. While these do play a role in law-making, without the necessary empirical evidence, laws often fail to achieve their desired results. The quality of legislative debates is an indicator of the efficacy of the legislative process through which a Bill is enacted. However, the study of legislative debates has not received much attention either in India or worldwide due to the difficulty of objectively measuring the quality of a debate. Broadly, three approaches have emerged in the study of legislative debates. The rational-choice or formal approach shows that speeches vary based on different institutional arrangements, intra-party politics, and the political culture of a country. The discourse approach focuses on the underlying rules and conventions and how they impact the content of the debates. The deliberative approach posits that legislative speech can be reasoned, respectful, and informed. This paper aims to (a) develop a framework to judge the quality of debates by using the deliberative approach; (b) examine the legislative debates of three Bills passed in different periods as a demonstration of the framework; and (c) examine the broader structural issues that disincentivise MPs from scrutinizing Bills. The framework would include qualitative and quantitative indicators to judge a debate. The idea is that the framework would provide useful insights into the legislators' knowledge of the subject, the depth of their scrutiny of Bills, and their inclination toward evidence-based research. The three Bills that the paper plans to examine are as follows: 1. The Narcotics Drugs and Psychotropic Substances Act, 1985: This act was passed to curb drug trafficking and abuse. However, it mostly failed to fulfill its purpose. Consequently, it was amended thrice but without much impact on the ground. 2. The Criminal Laws (Amendment) Act, 2013: This act amended the Indian Penal Code to add a section on human trafficking. The purpose was to curb trafficking and penalise traffickers, pimps, and middlemen. However, the crime rate remains high while the conviction rate is low. 3. The Surrogacy (Regulation) Act, 2021: This act bans commercial surrogacy, allowing only relatives to act as surrogates as long as there is no monetary payment. Experts fear that instead of preventing commercial surrogacy, it would drive the activity underground. The consequences would be borne by the surrogate, who would not be protected by law. The purpose of the paper is to objectively analyse the quality of parliamentary debates, gain insights into how MPs understand evidence, and deliberate on steps to incentivise them to use empirical evidence.Keywords: legislature, debates, empirical, India
Procedia PDF Downloads 85906 Digital Technology Relevance in Archival and Digitising Practices in the Republic of South Africa
Authors: Tashinga Matindike
Abstract:
By means of definition, digital artworks encompass an array of artistic productions that are expressed in a technological form as an essential part of a creative process. Examples include illustrations, photos, videos, sculptures, and installations. Within the context of the visual arts, the process of repatriation involves the return of once-appropriated goods. Archiving denotes the preservation of a commodity for storage purposes in order to nurture its continuity. The aforementioned definitions form the foundation of the academic framework and premise of the argument outlined in this paper. This paper aims to define, discuss and decipher the complexities involved in digitising artworks, whilst explaining the benefits of the process, particularly within the South African context, which is rich in tangible and intangible traditional cultural material, objects, and performances. With the internet having been introduced to the African continent in the early 1990s, this new form of technology initiated a high degree of efficiency, which also resulted in the progressive transformation of computer-generated visual output. Subsequently, this had a revolutionary influence on the manner in which technological software was developed and utilised in art-making. Digital technology and the digitisation of creative processes then opened up new avenues of collating and recording information. One of the first visual artists to make use of digital technology software in his creative productions was United States-based artist John Whitney. His inventive work contributed greatly to the onset and development of digital animation. Comparable in technique and originality, South African contemporary visual artists who make digital artworks, both locally and internationally, include David Goldblatt, Katherine Bull, Fritha Langerman, David Masoga, Zinhle Sethebe, Alicia Mcfadzean, Ivan Van Der Walt, Siobhan Twomey, and Fhatuwani Mukheli. In conclusion, the main objective of this paper is to address the following questions: In which ways has the South African community of visual artists made use of and benefited from technology, in its digital form, as a means to further advance creativity? What are the positive changes that have resulted in art production in South Africa since the onset and use of digital technological software? How has digitisation changed the manner in which we record, interpret, and archive both written and visual information? What is the role of South African art institutions in the development of digital technology and its use in the field of visual art? What role does digitisation play in the process of the repatriation of artworks and artefacts? The methodology of this paper takes on a multifaceted form, inclusive of analysis of data attained by means of qualitative and quantitative approaches.Keywords: digital art, digitisation, technology, archiving, transformation and repatriation
Procedia PDF Downloads 50905 The Role of Intraluminal Endoscopy in the Diagnosis and Treatment of Fluid Collections in Patients With Acute Pancreatitis
Authors: A. Askerov, Y. Teterin, P. Yartcev, S. Novikov
Abstract:
Introduction: Acute pancreatitis (AP) is a socially significant problem for public health and continues to be one of the most common causes of hospitalization of patients with pathology of the gastrointestinal tract. It is characterized by high mortality rates, which reach 62-65% in infected pancreatic necrosis. Aims & Methods: The study group included 63 patients who underwent transluminal drainage (TLD) of fluid collections (FC). All patients underwent transabdominal ultrasound, computed tomography of the abdominal cavity and retroperitoneal organs, and endoscopic ultrasound (EUS) of the pancreatobiliary zone. EUS was used as the final diagnostic method to determine the characteristics of the FC. The indications for TLD were: a distance between the wall of the hollow organ and the FC of not more than 1 cm, the absence of large vessels (more than 3 mm) along the puncture trajectory, and a size of the formation of more than 5 cm. When a homogeneous cavity with clear, even contours was detected, a plastic stent with rounded ends ("double pigtail") was installed. The indication for the installation of a fully covered self-expanding stent was the detection of a nonhomogeneous anechoic FC with hyperechoic inclusions and cloudy purulent contents. In patients with necrotic forms, after drainage of the purulent cavity, a cystonasal drain with a diameter of 7 Fr was installed in its lumen under X-ray control to irrigate the cavity with a 0.05% aqueous solution of chlorhexidine. Endoscopic necrectomy was performed every 24-48 hours. The plastic stent was removed 6 months, and the fully covered self-expanding stent 1 month, after the patient was discharged from the hospital. Results: Endoscopic TLD was performed in 63 patients. An FC corresponding to interstitial edematous pancreatitis was detected in 39 (62%) patients, who underwent TLD with the installation of a plastic stent with rounded ends. In 24 (38%) patients with necrotic forms of FC, a fully covered self-expanding stent was placed. Communication with the ductal system of the pancreas was found in 5 (7.9%) patients; they underwent pancreaticoduodenal stenting. A complicated postoperative period was noted in 4 (6.3%) cases, manifested by bleeding from the zone of pancreatogenic destruction. In 2 (3.1%) cases, this required angiography and endovascular embolization of the gastroduodenal artery (a. gastroduodenalis); in 1 (1.6%) case, endoscopic hemostasis was performed by filling the cavity with 4 ml of Hemoblock hemostatic solution. The combination of both methods was used in 1 (1.6%) patient. There was no evidence of recurrent bleeding in these patients. A lethal outcome occurred in 4 patients (6.3%). In 3 (4.7%) patients, the cause of death was multiple organ failure; in 1 (1.6%), severe nosocomial pneumonia that developed on the 32nd day after drainage. Conclusions: 1. EUS is not only the most important method for diagnosing FC in AP, but also guides further tactics for their intraluminal drainage. 2. Endoscopic intraluminal drainage of fluid collections is, in 45.8% of cases, the definitive minimally invasive method of surgical treatment of large-focal pancreatic necrosis. Disclosure: Nothing to disclose.Keywords: acute pancreatitis, fluid collection, endoscopy surgery, necrectomy, transluminal drainage
Procedia PDF Downloads 109904 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy
Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay
Abstract:
Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, associated with a three-fold risk of poor outcome, and is more amenable to corrective interventions subsequent to early identification and management. Multiple definitions for stratification of patients' risk for early acute coagulopathy have been proposed, with considerable variations in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition for acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected, and the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5 and the Trauma-Induced Coagulopathy Clinical Score was 3(0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4(3-8) vs. 3(0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality, in comparison to the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be more suited to predicting mortality than early coagulopathy. 
In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests will give highly specific results.Keywords: trauma, coagulopathy, prediction, model
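The ROC-based cut-off derivation described in the methodology is conventionally implemented by maximising Youden's J across candidate thresholds. The sketch below assumes that approach on synthetic INR values; the study's actual software, data, and threshold-selection criterion are not stated in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)

# Synthetic INR values, higher in coagulopathic patients (illustrative only;
# not the study's data, which came from 490 retrospective trauma records).
inr_no_coag = rng.normal(1.05, 0.08, 300)
inr_coag = rng.normal(1.35, 0.15, 190)
y = np.concatenate([np.zeros(300), np.ones(190)])
x = np.concatenate([inr_no_coag, inr_coag])

fpr, tpr, thresholds = roc_curve(y, x)
youden_j = tpr - fpr                  # J = sensitivity + specificity - 1
best = np.argmax(youden_j)
print(f"AUROC: {roc_auc_score(y, x):.3f}")
print(f"optimal INR cut-off (max Youden's J): {thresholds[best]:.2f}")
print(f"sensitivity {tpr[best]:.2%}, specificity {1 - fpr[best]:.2%}")
```

The same procedure would be repeated for prothrombin time and activated partial thromboplastin time to obtain the three cut-offs that jointly define the coagulopathy criterion above.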
Procedia PDF Downloads 174903 Concussion: Clinical and Vocational Outcomes from Sport Related Mild Traumatic Brain Injury
Authors: Jack Nash, Chris Simpson, Holly Hurn, Ronel Terblanche, Alan Mistlin
Abstract:
There is an increasing incidence of mild traumatic brain injury (mTBI) cases throughout sport and, with this, a growing interest from governing bodies in ensuring these are managed appropriately and player welfare is prioritised. The Berlin consensus statement on concussion in sport recommends a multidisciplinary approach when managing those patients who do not have full resolution of mTBI symptoms. There is as yet no standardised guideline to follow in the treatment of complex cases of mTBI in athletes. The aim of this project was to analyse the outcomes, both clinical and vocational, of all patients admitted to the mild traumatic brain injury (mTBI) service at the UK's Defence Medical Rehabilitation Centre, Headley Court, between 1st June 2008 and 1st February 2017 as a result of a sport-induced injury, and to evaluate potential predictive indicators of outcome. Patients were identified from a database maintained by the mTBI service. Clinical and occupational outcomes were ascertained from medical and occupational employment records, recorded prospectively, at the time of discharge from the mTBI service. Outcomes were graded based on the vocational independence scale (VIS) and clinical documentation at discharge. Predictive indicators, including referral time, age at time of injury, previous mental health diagnosis, and a financial claim in place at time of entry to the service, were assessed using logistic regression. Forty-five patients were treated for sport-related mTBI during this time frame. Clinically, 96% of patients had full resolution of their mTBI symptoms after input from the mTBI service. 51% of patients returned to work at their previous vocational level, 4% had ongoing mTBI symptoms, 22% had ongoing physical rehabilitation needs, 11% required mental health input, and 11% required further vestibular rehabilitation. Neither age, time to referral, pre-existing mental health condition nor compensation seeking had a significant impact on either vocational or clinical outcome in this population. The vast majority of patients reviewed in the mTBI clinic had persistent symptoms which could not be managed in primary care. A consultant-led, multidisciplinary approach to the diagnosis and management of mTBI has resulted in excellent clinical outcomes in these complex cases. High levels of symptom resolution suggest that this referral and treatment pathway is successful and is a model which could be replicated in other organisations with consultant-led input. Further understanding of both predictive and individual factors would allow clinicians to focus treatments on those who are most likely to develop long-term complications following mTBI. A consultant-led, multidisciplinary service ensures a large number of patients will have complete resolution of mTBI symptoms after sport-related mTBI. Further research is now required to ascertain the key predictive indicators of outcome following sport-related mTBI.Keywords: brain injury, concussion, neurology, rehabilitation, sports injury
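The logistic-regression assessment of predictors mentioned above can be sketched as follows. The cohort size matches the study, but all values, column names, and outputs are synthetic placeholders rather than clinic records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 45  # matches the cohort size; everything below is synthetic

# Hypothetical predictors: referral delay (days), age at injury (years),
# prior mental-health diagnosis (0/1), financial claim at entry (0/1)
X = np.column_stack([
    rng.integers(7, 400, n),
    rng.integers(18, 45, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
y = rng.integers(0, 2, n)  # 1 = returned to previous vocational level

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(["referral_days", "age", "mh_history", "claim"],
                      model.coef_[0]):
    # odds ratio per unit increase; with random y none should be meaningful,
    # loosely mirroring the study's null finding for these predictors
    print(f"{name}: odds ratio {np.exp(coef):.2f}")
```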
Procedia PDF Downloads 156