Search results for: infinite feature selection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3932

302 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both for modelling its climate effect and for interpreting remote sensing measurement data. Residential or domestic combustion of coal is one of the dominant LAC constituents. According to some related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is very limited in the literature. There are many reasons for this, starting from the difficulties associated with controlled burning conditions of the fuel, through the lack of the detailed supplementary proximate and ultimate chemical analysis needed for interpreting the measured optical data, and ending with many analytical and methodological difficulties regarding the in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on burning coal in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes the investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based on transmission measurements of filter-accumulated aerosol or are deduced indirectly from parallel measurements of the scattering and extinction coefficients using free-floating sampling. In the former, accuracy, and in the latter, sensitivity, limit the applicability of these approaches. Although the scientific community agrees that aerosol-phase PhotoAcoustic Spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the investigation of the inherent spectral features of laser-generated and chemically characterized residential coal aerosols is demonstrated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced here. The optical absorption and scattering coefficients, as well as their wavelength dependency, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength scattering sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, some correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
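
For illustration only (not part of the authors' work), a minimal sketch of how a wavelength dependency such as the AAE or SAE could be quantified from multi-wavelength absorption or scattering coefficients, assuming a simple power-law relation (coefficient ∝ λ^(-exponent)); the wavelengths and values below are hypothetical:

```python
import numpy as np

def angstrom_exponent(wavelengths_nm, coefficients):
    """Fit coefficient ~ wavelength**(-alpha) via a log-log linear regression
    and return alpha (AAE for absorption, SAE for scattering)."""
    slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(coefficients), 1)
    return -slope

# Hypothetical multi-wavelength data (values are made up for illustration)
wl_abs = np.array([266.0, 355.0, 532.0, 1064.0])   # nm, e.g. a 4-lambda PAS
b_abs  = np.array([95.0, 60.0, 33.0, 14.0])        # absorption, Mm^-1
wl_sca = np.array([450.0, 525.0, 635.0])           # nm
b_sca  = np.array([120.0, 85.0, 55.0])             # scattering, Mm^-1

print("AAE ≈", round(angstrom_exponent(wl_abs, b_abs), 2))
print("SAE ≈", round(angstrom_exponent(wl_sca, b_sca), 2))
```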

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 361
301 Mathematical Toolbox for editing Equations and Geometrical Diagrams and Graphs

Authors: Ayola D. N. Jayamaha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Currently, there are a lot of educational tools designed for mathematics. Open-source software such as GeoGebra and Octave is bulky in its architectural structure. In addition, there is MATLAB, which provides much more than what is needed here. Many computer-aided online grading and assessment tools require editors to be integrated into their software. However, no existing editor caters to all their needs for editing equations, geometrical diagrams and graphs. Existing software for editing equations includes Alfred's Equation Editor, Codecogs, DragMath, Maple, MathDox, MathJax, MathMagic, MathFlow, Math-o-mir, Microsoft Equation Editor, MiraiMath, OpenOffice, WIRIS Editor and MyScript. Some of these are commercial, some are open source, and they variously support handwriting recognition, mobile apps, MathML/LaTeX rendering, Flash/Web-based deployment and JavaScript display engines. Diagram editors include GeoKone.NET, Tabulae, Cinderella 1.4, MyScript, Dia, Draw2D touch, Gliffy, GeoGebra, Flowchart, Jgraph, JointJS, J painter Online diagram editor and 2D sketcher. All of these are open source except MyScript and can be used for editing mathematical diagrams. However, they do not fully cater to the needs of a typical computer-aided assessment tool or educational platform for mathematics. This solution provides a Web-based, lightweight, easy-to-implement-and-integrate HTML5 canvas editor that renders on all modern web browsers. The scope of the project is an editor that covers the equations and the mathematical diagrams and drawings found on the O/L Mathematics exam papers in Sri Lanka. Using the tool, students can enter any equation into the system, which can be part of an online remote learning platform. Users can also create and edit geometrical drawings and graphs and perform geometrical constructions that require only a compass and ruler from the editing interface provided by the software. The special feature of this software is the geometrical constructions. It allows users to create constructions such as angle bisectors, perpendicular lines, angles of 60°, and perpendicular bisectors. The tool correctly imitates the functioning of rulers and compasses to create the required geometrical construction. Therefore, users are able to do geometrical drawings on the computer successfully, and a digital format of the drawing is available for further processing. Secondly, users can create and edit Venn diagrams, color them and label them. In addition, students can draw probability tree diagrams and compound probability outcome grids, and label and mark regions within the grids. Thirdly, students can draw graphs (first order and second order). They can mark points on a graph paper and the system connects the dots to draw the graph. Further, students are able to draw standard shapes such as circles and rectangles by selecting points on a grid or entering the parametric values.
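
Purely as an illustration of the geometry behind such a compass-and-ruler construction (not the authors' HTML5 canvas implementation), a minimal Python sketch of a perpendicular bisector: the two crossings of equal-radius arcs drawn from the segment endpoints define the bisector line.

```python
import math

def perpendicular_bisector(ax, ay, bx, by, radius=None):
    """Return the two intersection points of equal-radius arcs centred on A and B;
    the line through them is the perpendicular bisector of segment AB."""
    d = math.hypot(bx - ax, by - ay)
    r = radius if radius is not None else 0.75 * d   # compass opening must exceed d/2
    if 2 * r <= d:
        raise ValueError("compass radius too small, arcs do not intersect")
    mx, my = (ax + bx) / 2, (ay + by) / 2            # midpoint of AB
    h = math.sqrt(r * r - (d / 2) ** 2)              # distance from midpoint to arc crossings
    ux, uy = (by - ay) / d, -(bx - ax) / d           # unit vector perpendicular to AB
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

p1, p2 = perpendicular_bisector(0, 0, 4, 0)
print(p1, p2)   # both points have x == 2, i.e. they lie on the bisector of AB
```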

Keywords: geometrical drawings, html5 canvas, mathematical equations, toolbox

Procedia PDF Downloads 377
300 Exploring the Use of Schoolgrounds for the Integration of Environmental and Sustainability Education in Natural and Social Sciences Pedagogy: A Case Study

Authors: Headman Hebe, Arnold Taringa

Abstract:

Background of the study: The benefits derived from Environmental and Sustainability Education (ESE) go beyond obtaining knowledge about the environment and the impact of human beings on it. Hence, it is sensible to expose learners to various resources that enable meaningful environment-inclined pedagogy. Schoolgrounds, where they are utilised to promote ESE, benefit holistic learner development. However, empirical evidence globally suggests that young children's contact with nature is declining due to urbanization, safety concerns of parents and guardians, and greater dependency on technology. Modern children spend much time on video games and social media and very little time in the natural environment. Furthermore, national education departments in numerous countries have made tangible efforts to embed environmental and place-based learning in their school curricula. South Africa is one of those countries whose national school education curriculum advocates for ESE in pedagogy. Nevertheless, there is a paucity of research conducted in South Africa on schoolgrounds as potential enablers of ESE and as tools to foster a connection between youngsters and the natural environment. Accordingly, this study was essential as it sought to determine the extent to which environmental learning is accommodated in pedagogy. Significantly, it investigated efforts made to use schoolgrounds for pedagogical purposes to connect children with the natural environment. The study was therefore conducted to investigate the accessibility and use of schoolgrounds for environment-inclined pedagogy in Natural and Social Sciences in two schools located in the Mpumalanga Province of South Africa. It tries to answer the question: To what extent are schoolgrounds used to promote environmental and sustainability education in the selected schools? The sub-questions are: How do teachers and learners perceive the use of schoolgrounds for environmental and sustainability education activities? How does the organization of schoolgrounds offer opportunities for environmental education activities and accessibility for learners? Research method: This qualitative-interpretive case study used purposive and convenient sampling for participant selection. Forty-six respondents participated in this study: 40 learners (twenty grade 7 learners per school), 2 school principals and 4 grade 7 teachers. Data collection tools were observations, interviews, audio-visual recordings and questionnaires, while data analysis was done thematically. Major findings: The findings of the study point to a lack of teacher training, a lack of infrastructure in the schoolgrounds, and no administrative support; unclear curriculum guidelines on the use of schoolgrounds for ESE; the availability of various elements in the schoolgrounds that could aid ESE activities; learners being denied access to certain parts of the schoolgrounds; and lack of time and curriculum demands constraining teachers from using schoolgrounds.

Keywords: affordances, environment and sustainability education, experiential learning, schoolgrounds

Procedia PDF Downloads 64
299 Semiotics of the New Commercial Music Paradigm

Authors: Mladen Milicevic

Abstract:

This presentation addresses how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it deals with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e. music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the "older" music is still being digitized. Once the music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file to which any kind of NCapture-style analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from their audio features. Thus, it has been established that a successful pop song must: have 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create an expectation and fulfill that expectation in the title. For a country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and an expectation that is created and fulfilled within 60 seconds. This approach to commercial popular music minimizes the human influence when it comes to which "artist" a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analysis programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste through the narrow bottleneck of this music selection by the record labels has created some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of music artists. What this semiological shift means is the main focus of this research and paper presentation.
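
Purely as an illustration of the kind of rule-based screening described above (not Polyphonic HMI's actual system), a minimal sketch that checks a song's metadata against the listed pop criteria; the feature names and example values are hypothetical:

```python
def pop_hit_checks(song):
    """Check a song dict against the pop-hit heuristics listed in the abstract."""
    return {
        "tempo >= 100 bpm": song["bpm"] >= 100,
        "intro <= 8 s": song["intro_seconds"] <= 8,
        "'you' within first 20 s": song["first_you_second"] is not None
                                   and song["first_you_second"] <= 20,
        "bridge between 2:00 and 2:30": 120 <= song["bridge_second"] <= 150,
        "about 7 title repetitions": 6 <= song["title_repetitions"] <= 8,
    }

# Hypothetical song description
song = {"bpm": 104, "intro_seconds": 7, "first_you_second": 12,
        "bridge_second": 132, "title_repetitions": 7}
for rule, passed in pop_hit_checks(song).items():
    print(f"{rule}: {'yes' if passed else 'no'}")
```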

Keywords: music, semiology, commercial, taste

Procedia PDF Downloads 393
298 A Preliminary Analysis of The Effect After Cochlear Implantation in the Unilateral Hearing Loss

Authors: Haiqiao Du, Qian Wang, Shuwei Wang, Jianan Li

Abstract:

Purpose: The aim is to evaluate the effect of cochlear implantation (CI) in patients with unilateral hearing loss, with a view to providing data support for the selection of therapeutic interventions for patients with single-sided deafness (SSD) or asymmetric hearing loss (AHL) and for broadening the indications for CI. Methods: The study subjects were patients with unilateral hearing loss who underwent cochlear implantation surgery in our hospital in August 2022 and who were willing to cooperate with testing; they were divided into two groups, an SSD group and an AHL group. The enrolled patients were followed up for hearing level, tinnitus changes, speech recognition ability, sound source localization ability, and quality of life at five time points: preoperatively and at 1, 3, 6, and 12 months after postoperative start-up. Results: As of June 30, 2024, a total of nine patients had completed follow-up, including four in the SSD group and five in the AHL group. The mean postoperative aided hearing thresholds on the CI side were 31.56 dB HL and 34.75 dB HL in the two groups, respectively. Of the four patients with preoperative tinnitus symptoms (three in the SSD group and one in the AHL group), all but one showed some reduction in Tinnitus Handicap Inventory (THI) scores; the remaining patient showed no change. In the SSD and AHL groups, the sound source localization results (expressed as RMS error values, with smaller values indicating better ability) were 66.87° and 77.41° preoperatively and 29.34° and 54.60° at 12 months after postoperative start-up, respectively, which showed that the ability to localize the sound source improved significantly with longer implantation time. The level of speech recognition was assessed by three test methods: the speech recognition rate for monosyllabic words in a quiet environment and the speech recognition rates for sound sources at 0° and at 90° (implantation side) in a noisy environment. The results of the three tests were 99.0%, 72.0%, and 36.0% preoperatively in the SSD group and 96.0%, 83.6%, and 73.8% in the AHL group, respectively; they fluctuated in the postoperative period up to 3 months after start-up and stabilized at 12 months after start-up at 99.0%, 100.0%, and 100.0% in the SSD group and 99.5%, 96.0%, and 99.0% in the AHL group. Quality of life was subjectively evaluated by three instruments: the Speech Spatial Quality of Sound Auditory Scale (SSQ-12), the Quality-of-Life Bilateral Listening Questionnaire (QLBHE), and the Nijmegen Cochlear Implant Questionnaire (NCIQ). The SSQ-12 results (scored out of 10) showed that the preoperative and 12-month post-start-up scores were 6.35 and 6.46 in the SSD group, and 5.61 and 9.83 in the AHL group. The QLBHE scores (out of 100) were 61.0 and 76.0 in the SSD group and 53.4 and 63.7 in the AHL group for the preoperative versus the 12-month post-start-up assessments. Conclusion: Patients with unilateral hearing loss can benefit from cochlear implantation: CI effectively compensates for hearing on the affected side and reduces accompanying tinnitus symptoms; there is a significant improvement in sound source localization and in speech recognition in the presence of noise; and quality of life is improved.
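
For illustration only (not the authors' analysis code), a minimal sketch of the RMS-error metric used above for sound source localization, computed from hypothetical target and response azimuths:

```python
import numpy as np

def rms_localization_error(target_deg, response_deg):
    """Root-mean-square error between presented and perceived azimuths (degrees);
    smaller values indicate better localization ability."""
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return float(np.sqrt(np.mean((response - target) ** 2)))

# Hypothetical trial data: loudspeaker azimuths and the listener's responses
targets   = [-60, -30, 0, 30, 60]
responses = [-45, -40, 10, 50, 30]
print(f"RMS error = {rms_localization_error(targets, responses):.2f} degrees")
```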

Keywords: single-sided deafness, asymmetric hearing loss, cochlear implant, unilateral hearing loss

Procedia PDF Downloads 14
297 Evolutionary Advantages of Loneliness with an Agent-Based Model

Authors: David Gottlieb, Jason Yoder

Abstract:

The feeling of loneliness is not uncommon in modern society, and yet, there is a fundamental lack of understanding in its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance, resulting in social withdrawal, which may appear maladaptive to modern society. So far, no computational model of loneliness’ effect during evolution yet exists; however, agent-based models (ABM) can be used to investigate social behavior, and applying evolution to agents’ behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors, and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, which is inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward its social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resource collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments, and when disease is present. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness, but also reflects the diversity of response to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness was present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to the other, who was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
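
As a rough, hypothetical sketch of the kind of agent update rule described above (not the authors' implementation), combining boid-style cohesion, separation and alignment with a personal-best goal term, a social-connection term, and a loneliness switch based on expected versus perceived social involvement:

```python
import numpy as np

def agent_step(pos, vel, neighbors_pos, neighbors_vel, personal_best,
               connections_pos, perceived_social, expected_social, weights):
    """One velocity update for a single 2-D agent; array inputs are numpy arrays,
    the two social-involvement values are scalars, weights is a 5-tuple."""
    w_coh, w_sep, w_ali, w_goal, w_soc = weights
    cohesion   = neighbors_pos.mean(axis=0) - pos if len(neighbors_pos) else 0.0
    separation = (pos - neighbors_pos).sum(axis=0) if len(neighbors_pos) else 0.0
    alignment  = neighbors_vel.mean(axis=0) - vel if len(neighbors_vel) else 0.0
    goal       = personal_best - pos                      # PSO-like personal-best pull
    social     = connections_pos.mean(axis=0) - pos if len(connections_pos) else 0.0

    if perceived_social < expected_social:                # loneliness engages
        w_soc *= 2.0   # hypothetical response: seek existing connections more strongly
    new_vel = vel + (w_coh * cohesion + w_sep * separation +
                     w_ali * alignment + w_goal * goal + w_soc * social)
    return pos + new_vel, new_vel
```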

Keywords: agent-based, behavior, evolution, loneliness, social

Procedia PDF Downloads 96
296 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the operating range of each residential appliance based on its power demand and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator software (LPG), which had not previously been considered for NILM purposes in the literature. LPG is numerical software that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
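
For illustration only (not the authors' code), a minimal sketch of the Dynamic Time Warping distance on which such unsupervised matching of a measured power segment against an appliance signature could rely; the power values are hypothetical:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D power sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical 1/60 Hz power segments (watts): a measured event vs. an appliance template
segment  = [2, 5, 120, 118, 121, 119, 6, 3]
template = [3, 122, 120, 119, 4]
print("DTW distance:", dtw_distance(segment, template))
```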

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 78
295 Global Experiences in Dealing with Biological Epidemics with an Emphasis on COVID-19 Disease: Approaches and Strategies

Authors: Marziye Hadian, Alireza Jabbari

Abstract:

Background: The World Health Organization has identified COVID-19 as a public health emergency and is urging governments to stop transmission of the virus by adopting appropriate policies. In this regard, authorities have taken different approaches to cutting the chain of transmission or controlling the spread of the disease. The questions we now face include: What are these approaches? What tools should be used to implement each preventive protocol? And what is the impact of each approach? Objective: The aim of this study was to determine the approaches to biological epidemics and the related prevention tools, with an emphasis on COVID-19. Data sources: Databases including ISI Web of Science, PubMed, Scopus, Science Direct, Ovid, and ProQuest were employed for data extraction. Furthermore, authentic sources such as the WHO website, the published reports of relevant countries, and the Worldometer website were evaluated for grey literature. The time frame of the study was from 1 December 2019 to 30 May 2020. Methods: The present study was a systematic review of publications related to prevention strategies for COVID-19. The study was carried out based on the PRISMA guidelines, using CASP for articles and AACODS for grey literature. Results: The study findings showed that, in order to confront the COVID-19 epidemic, there are in general three approaches ("mitigation", "active control" and "suppression") and four strategies ("quarantine", "isolation", "social distancing" and "lockdown"), in both individual and social dimensions, for dealing with epidemics. The selection and implementation of each approach requires specific strategies and has a different impact on controlling and inhibiting the disease. Key finding: One possible approach to controlling the disease is to change individual behavior and lifestyle. In addition to the prevention strategies, the use of masks, the observance of personal hygiene principles such as regular hand washing and keeping contaminated hands away from the face, and the observance of public health principles such as sneezing and coughing etiquette and the safe disposal of personal protective equipment must be strictly observed. Although these measures are not included in the category of prevention tools, they have a great impact on controlling the epidemic, especially the new coronavirus epidemic. Conclusion: Although the use of different approaches to control and inhibit biological epidemics depends on numerous variables, global experience suggests that some of these approaches are ineffective. Drawing on previous experience around the world, along with the current experiences of countries, can be very helpful in choosing the right approach for each country in accordance with its characteristics, and can reduce possible costs at the national and international levels.

Keywords: novel corona virus, COVID-19, approaches, prevention tools, prevention strategies

Procedia PDF Downloads 126
294 The Effect of Disseminating Basic Knowledge on Radiation in Emergency Distance Learning of COVID-19

Authors: Satoko Yamasaki, Hiromi Kawasaki, Kotomi Yamashita, Susumu Fukita, Kei Sounai

Abstract:

People are susceptible to rumors when the cause of their health problems is unknown or invisible. In order for individuals to be unaffected by rumors, they need basic knowledge and correct information. Community health nursing classes use cases in which basic knowledge of radiation can be applied on a regular basis, thereby teaching that basic knowledge is important in preventing anxiety caused by rumors. Nursing students need to learn that preventive activities are essential for public health nursing care. The same methodology can be used to reduce COVID-19 anxiety among individuals. This study verifies the learning effect, in emergency distance learning, of the basic knowledge of radiation necessary for case consultation. Sixty third-year nursing college students agreed to participate in this research. Knowledge tests conducted before and after the classes were compared using the chi-square test. There were five knowledge questions regarding the distance lessons, and the significance level was set at 5%. The students' reports, which describe the results of responding to a health consultation, were analyzed qualitatively and descriptively. In the case study used, a person living in an area not affected by radiation was anxious about drinking water and therefore consulted a student. The contents of the lecture were selected as the minimum knowledge needed to answer the consultation: specifically, hot spots, internal exposure risk, food safety, the characteristics of cesium-137, and precautions for counselors. Before taking the class, the question most often answered correctly by students concerned daily behavior at risk of internal exposure (52.2%). The question with the fewest correct answers concerned the selection of places that are likely to be hot spots (3.4%). Correct responses to all questions increased significantly after taking the class (p < 0.001). The answers written by the students for the consultation included 'Cesium is strongly bound to the soil, so it is difficult for it to transfer to water' and 'Water quality test results for tap water are posted on the city's website.' These were concrete answers obtained by using specialized knowledge. Even in emergency distance learning, the students gained basic knowledge regarding radiation and created documents that applied that knowledge while imagining the situation concretely. It appears that the flipped classroom method, even if conducted remotely, can maintain students' learning, and that setting out the specific knowledge and the scenes in which it will be used enhances the learning effect. By changing the case to one concerning anxiety caused by infectious diseases, students may be able to effectively gain the basic knowledge needed to decrease residents' anxiety about infectious diseases.
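
Purely as a hypothetical illustration of the pre/post comparison described above (the counts below are invented, not the study's data), a chi-square test on correct versus incorrect answers for one question could look like this:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts for one knowledge question among 60 students:
# rows = before / after class, columns = correct / incorrect
table = [[31, 29],   # before: 31 correct, 29 incorrect
         [58,  2]]   # after:  58 correct,  2 incorrect

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:           # 5% significance level, as in the study
    print("Correct answers increased significantly after the class.")
```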

Keywords: effect of class, emergency distance learning, nursing student, radiation

Procedia PDF Downloads 114
293 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol or design, or a combination of these, intended to identify the goods or services of one seller or a group of sellers and to differentiate them from those of competitors. Perception influences purchase decisions here, so building that perception is critical. The FMCG industry is a low-margin business; volumes hold the key to success in this industry. Therefore, the industry has a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle; companies know this, and that is why they relentlessly work towards brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It has been hypothesized that after liberalization Indian companies have taken up the challenge of globalization and some of them are giving stiff competition to MNCs, that MNCs have a stronger brand image than Indian companies, and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time series data are available from 1996 to 2014 for the eight selected companies. These companies were selected on the basis of their large market share, brand equity and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). The brand value of each selected FMCG company is estimated as the excess of its market capitalization over its net worth, and brand value indices are calculated. The correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. Major results indicate that although MNCs enjoy a stronger brand image, a few Indian companies, such as ITC, are outstanding leaders in terms of market capitalization and brand value. Dabur and Tata Global Beverages Ltd are competing equally well on these values. Advertisement expenditure is highest for HUL, followed by ITC, Colgate and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditure plays a role in the brand building process, many other factors also affect it. Also, brand values are decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to put consistent, proactive and relentless effort into their brand building process. Brands need focus and consistency. Brand longevity without innovation leads to brand respect but does not create brand value.
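
Purely for illustration (hypothetical numbers, not the Prowess data used in the study), a minimal sketch of the brand-value proxy and trend growth rate described above: brand value as market capitalization minus net worth, with the trend growth rate taken from a log-linear regression and a correlation against advertising expenditure:

```python
import numpy as np

years      = np.arange(1996, 2002)                                  # short hypothetical series
market_cap = np.array([120.0, 150.0, 190.0, 240.0, 300.0, 380.0])   # in billions (made up)
net_worth  = np.array([ 40.0,  45.0,  52.0,  60.0,  70.0,  82.0])
ad_spend   = np.array([  3.0,   3.6,   4.5,   5.5,   6.8,   8.3])

brand_value = market_cap - net_worth            # brand value proxy

# Trend growth rate: slope of ln(brand value) on time, expressed as % per year
slope, _ = np.polyfit(years - years[0], np.log(brand_value), 1)
growth_rate = (np.exp(slope) - 1) * 100

# Correlation between brand value and advertising expenditure
corr = np.corrcoef(brand_value, ad_spend)[0, 1]

print(f"Trend growth rate of brand value: {growth_rate:.1f}% per year")
print(f"Correlation (brand value vs. ad spend): {corr:.2f}")
```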

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 356
292 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems in terms of the resulting final energy demands in simulation models on a first stage, the results serve as input for a second stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers but necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
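
As a rough, hypothetical sketch of a single-stage MILP of the kind described (not the authors' model; PuLP, the cost and emission figures, and the objective weighting are all assumptions for illustration), choosing which buildings to modernize in which year under a yearly budget:

```python
import pulp

buildings = ["A", "B", "C"]
years = [2025, 2026, 2027]
cost = {"A": 80.0, "B": 120.0, "C": 60.0}           # modernization cost (k€), hypothetical
emission_saving = {"A": 12.0, "B": 20.0, "C": 7.0}  # tCO2/a saved after modernization
budget_per_year = 150.0

prob = pulp.LpProblem("modernization_pathway", pulp.LpMaximize)
# x[b, y] = 1 if building b is modernized in year y
x = pulp.LpVariable.dicts("x", [(b, y) for b in buildings for y in years], cat="Binary")

# Each building is modernized at most once over the planning horizon
for b in buildings:
    prob += pulp.lpSum(x[b, y] for y in years) <= 1

# Yearly budget constraint
for y in years:
    prob += pulp.lpSum(cost[b] * x[b, y] for b in buildings) <= budget_per_year

# Maximize cumulative emission savings (earlier modernization saves for more years)
prob += pulp.lpSum(emission_saving[b] * (years[-1] - y + 1) * x[b, y]
                   for b in buildings for y in years)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (b, y), var in x.items():
    if var.value() == 1:
        print(f"Modernize building {b} in {y}")
```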

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 34
291 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olives are a valuable product, especially in Mediterranean countries, and are usually consumed after a fermentation process. Defects that occur naturally or as a result of an impact while olives are still fresh may become more distinct after the processing period. Defective olives are not desired in either the table olive or the olive oil industry, as they affect final product quality and reduce market prices considerably. It is therefore critical to sort table olives before, or even after, processing according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness and inconsistency. The quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes and rotting. This study aimed to classify fresh table olives using NIR spectroscopy readings and different classifiers, and to compare the classifiers. For this purpose, green olives (Ayvalik variety) were classified according to their surface condition (defect-free, bruised, and with fly defect) using FT-NIR spectroscopy and classification algorithms, namely artificial neural networks, ident and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (an internal TE-InGaAs detector for reflectance and an external RT-InGaAs detector for transmittance) and a 20-watt high-intensity tungsten–halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780 and 2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. The resolution was 8 cm⁻¹ for both spectral measurement modes. Instrument control was done using OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks and the ident and cluster classification algorithms. For these applications, the Neural Network Toolbox in MATLAB and the ident and cluster modules in the OPUS software were used. Classifications were performed for different scenarios: two quality conditions at once (good vs. bruised, good vs. fly defect) and three quality conditions at once (good, bruised and fly defect). Two spectrometer readings were used in the classification applications: reflectance and transmittance. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defect, and from the group including both bruised and fly-defected olives had success rates ranging between 97 and 99%, 61 and 94%, and 58.67 and 92%, respectively. On the other hand, classification results obtained for discriminating good olives from bruised ones and from fly-defected olives using the ident method ranged between 75–97.5% and 32.5–57.5%, respectively, while results obtained for the same classification applications using the cluster method ranged between 52.5–97.5% and 22.5–57.5%.
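
Purely as an illustrative sketch (not the authors' MATLAB/OPUS workflow; the data shapes and values are hypothetical stand-ins for real spectra), classifying NIR spectra into the three quality classes with a backpropagation-style neural network in scikit-learn:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 300, 200        # hypothetical: 300 olives, 200 NIR channels
X = rng.normal(size=(n_samples, n_wavelengths))   # stand-in for measured spectra
y = rng.integers(0, 3, size=n_samples)            # 0 = good, 1 = bruised, 2 = fly defect

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("Success rate:", accuracy_score(y_test, model.predict(X_test)))
```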

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 246
290 The Role of Supply Chain Agility in Improving Manufacturing Resilience

Authors: Maryam Ziaee

Abstract:

This research proposes a new approach that provides an opportunity for manufacturing companies to produce large amounts of products that meet their prospective customers' tastes, needs, and expectations, while simultaneously enabling manufacturers to increase their profit. Mass customization is the production of products or services that meet each individual customer's desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of a good's design, assembly, sale, and delivery status, and is classified into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options could range from the design phase to the manufacturing phase, or even to the method of delivery. Mass customization values customers' tastes, but that is only one side of client satisfaction; the other side is the company's fast and responsive delivery. This brings in the concept of agility, which is the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customer satisfaction, companies need to respond quickly to their customers' demands, which highlights the significance of agility. This research offers a method that integrates mass customization and fast production in manufacturing industries. It is built upon the hypothesis that the key to being agile in mass customization is to forecast demand, cooperate with suppliers, and control inventory; the supply chain (SC) therefore becomes especially significant at this stage. Since SC behavior is dynamic and changes constantly, companies have to apply a forecasting technique to identify the changes associated with SC behavior so that they can respond properly to unwelcome events. System dynamics, utilized in this research, is a simulation approach that provides a mathematical model of the relationships among different variables in order to understand, control, and forecast SC behavior. The final stage is delayed differentiation, the production strategy considered in this research. In this approach, the main product platform is produced and stocked, and when the company receives an order from a customer, a specific customized feature is assigned to this platform and the customized product is created. The main research question is to what extent applying system dynamics to the prediction of SC behavior improves the agility of mass customization. This research is built upon a qualitative approach in order to produce richer, deeper, and more revealing results. The data are collected through interviews and analyzed using NVivo software. The proposed model offers numerous benefits, such as a reduction in product inventories and their storage costs, improved resilience of companies' responses to their clients' needs and tastes, increased profits, and optimized productivity with a minimum level of lost sales.

Keywords: agility, manufacturing, resilience, supply chain

Procedia PDF Downloads 90
289 Evaluation of Yield and Yield Components of Malaysian Palm Oil Board-Senegal Oil Palm Germplasm Using Multivariate Tools

Authors: Khin Aye Myint, Mohd Rafii Yusop, Mohd Yusoff Abd Samad, Shairul Izan Ramlee, Mohd Din Amiruddin, Zulkifli Yaakub

Abstract:

A narrow genetic base is the main obstacle to breeding and genetic improvement in the oil palm industry. In order to broaden the genetic base, the Malaysian Palm Oil Board has extensively collected wild germplasm from its area of origin in 11 African countries: Nigeria, Senegal, Gambia, Guinea, Sierra Leone, Ghana, Cameroon, Zaire, Angola, Madagascar, and Tanzania. The germplasm collections were established and maintained as a field gene bank at the Malaysian Palm Oil Board (MPOB) Research Station in Kluang, Johor, Malaysia, to conserve a wide range of oil palm genetic resources for the genetic improvement of the Malaysian oil palm industry. Therefore, assessing the performance and genetic diversity of the wild materials is very important for understanding the genetic structure of natural oil palm populations and for exploring genetic resources. Principal component analysis (PCA) and cluster analysis are very efficient multivariate tools for evaluating the genetic variation of germplasm and have been applied to many crops. In this study, eight populations of MPOB-Senegal oil palm germplasm were studied to explore the pattern of genetic variation using PCA and cluster analysis. A total of 20 yield and yield-component traits were analyzed with PCA and Ward's clustering using SAS version 9.4 software. The first four principal components, which have eigenvalues >1, accounted for 93% of the total variation, with values of 44%, 19%, 18% and 12%, respectively. PC1 showed the highest positive correlations with fresh fruit bunch (0.315), bunch number (0.321), oil yield (0.317), kernel yield (0.326), total economic product (0.324), and total oil (0.324), while PC2 had the largest positive associations with oil to wet mesocarp (0.397) and oil to fruit (0.458). The oil palm populations were grouped into four distinct clusters based on the 20 evaluated traits, implying that high genetic variation exists among the germplasm. Cluster 1 contains two populations, SEN 12 and SEN 10, while cluster 2 has only one population, SEN 3. Cluster 3 consists of three populations (SEN 4, SEN 6, and SEN 7), while SEN 2 and SEN 5 were grouped in cluster 4. Cluster 4 showed the highest mean values of fresh fruit bunch, bunch number, oil yield, kernel yield, total economic product, and total oil, while Cluster 1 was characterized by high oil to wet mesocarp and oil to fruit. The desired traits with the largest positive correlations on the extracted PCs could be utilized for the improvement of the oil palm breeding program, and populations from different clusters with the highest cluster means could be used for hybridization. The information from this study can be utilized for effective conservation and selection of the MPOB-Senegal oil palm germplasm for future breeding programs.
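
For illustration only (a hypothetical trait matrix rather than the MPOB data, and scikit-learn/SciPy instead of SAS 9.4), a minimal sketch of PCA followed by Ward clustering on a population-by-trait table:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
populations = [f"SEN {i}" for i in range(1, 9)]   # 8 hypothetical populations
traits = rng.normal(size=(8, 20))                 # 20 yield / yield-component traits

X = StandardScaler().fit_transform(traits)

pca = PCA()
scores = pca.fit_transform(X)
explained = pca.explained_variance_ratio_ * 100
print("Variation explained by the first four PCs (%):", np.round(explained[:4], 1))

# Ward's hierarchical clustering, cut into four clusters
Z = linkage(X, method="ward")
clusters = fcluster(Z, t=4, criterion="maxclust")
for pop, c in zip(populations, clusters):
    print(pop, "-> cluster", c)
```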

Keywords: cluster analysis, genetic variability, germplasm, oil palm, principal component analysis

Procedia PDF Downloads 164
288 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in forensic and archeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and the limitation of using feature design and manual extraction methods is loss of information since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in imaging learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the 1 to 7 costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. Of ResNeXt model training, MAEs were obtained with 3.95 in males and 3.65 in females. Based on the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42 respectively, for the manual method. Those results showed that the DL of the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage and the developed system may be a supportive tool for AAE.
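
As a rough sketch of the kind of ResNeXt-based regression described (not the authors' implementation; the data loading, hyperparameters and the choice of resnext50_32x4d are assumptions), trained with an L1 objective so that the reported training loss corresponds to MAE in years:

```python
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ResNeXt backbone with a single regression output (predicted age in years)
model = models.resnext50_32x4d(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model = model.to(device)

criterion = nn.L1Loss()                     # mean absolute error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader):
    """One pass over a DataLoader yielding (image_batch [N,3,224,224], age_batch [N])."""
    model.train()
    total, n = 0.0, 0
    for images, ages in loader:
        images = images.to(device)
        ages = ages.to(device).float().unsqueeze(1)
        optimizer.zero_grad()
        loss = criterion(model(images), ages)
        loss.backward()
        optimizer.step()
        total += loss.item() * images.size(0)
        n += images.size(0)
    return total / n      # epoch MAE in years
```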

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 73
287 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (such as retail, groceries, or online services), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, or online payment systems), since different payment methods carry different levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the majority of the dataset. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
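
For illustration only (synthetic data, not the institution's transactions), a minimal sketch of the isolation-forest half of such a detector, scored with the same metrics named above:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

rng = np.random.default_rng(42)
# Synthetic engineered features (e.g., normalized amount, hour of day, encoded category)
normal = rng.normal(0, 1, size=(5000, 3))
fraud = rng.normal(4, 1, size=(50, 3))                 # rare, shifted pattern
X = np.vstack([normal, fraud])
y = np.concatenate([np.zeros(len(normal)), np.ones(len(fraud))])   # 1 = fraud

clf = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
clf.fit(X)
pred = (clf.predict(X) == -1).astype(int)              # -1 means anomaly
score = -clf.score_samples(X)                          # higher = more anomalous

print("precision:", precision_score(y, pred))
print("recall:   ", recall_score(y, pred))
print("F1:       ", f1_score(y, pred))
print("ROC AUC:  ", roc_auc_score(y, score))
```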

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

Procedia PDF Downloads 57
286 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Differential Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue’s displacement in response to mechanical waves. The estimated metrics on tissue elasticity or stiffness have been shown to be valuable for monitoring physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches feature characteristics of training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by Finite Differential Time Domain (FDTD) simulation are used for network training, and U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aimed to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error or RMSE divided by averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the estimated shear modulus by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than 78.20%±1.11% by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method’s performance on phantom and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method showed much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
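
For illustration only (not the authors' code), a minimal sketch of the relative-error metric used above, i.e. the RMSE between the true and estimated shear-modulus maps divided by the averaged true shear modulus; the maps below are synthetic:

```python
import numpy as np

def relative_error(mu_true, mu_est):
    """RMSE between maps divided by the mean true shear modulus, as a percentage."""
    mu_true = np.asarray(mu_true, float)
    mu_est = np.asarray(mu_est, float)
    rmse = np.sqrt(np.mean((mu_est - mu_true) ** 2))
    return 100.0 * rmse / mu_true.mean()

# Hypothetical 64x64 shear-modulus maps in kPa (1-15 kPa, as in the simulated set)
rng = np.random.default_rng(0)
mu_true = rng.uniform(1.0, 15.0, size=(64, 64))
mu_est = mu_true + rng.normal(0.0, 0.3, size=(64, 64))   # simulated estimation noise
print(f"relative error = {relative_error(mu_true, mu_est):.2f}%")
```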

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 68
285 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is an energy management process aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is a load monitoring method used for disaggregation: it identifies individual appliances by analyzing whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work with low-sampling-rate data at a frequency of 1/60 Hz (one sample per minute). The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that generates residential energy consumption data by simulating the behaviour of the people inside the house. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). Performance is assessed with confusion-matrix-based metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously reported in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
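To illustrate the DTW matching step mentioned above, here is a minimal dynamic-programming DTW distance between a hypothetical appliance signature and a window of the aggregate 1/60 Hz power signal; the variable names and wattages are illustrative and do not reproduce the authors' pipeline.

import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a)*len(b)) dynamic-programming DTW distance between two
    # 1-D power traces (e.g., per-minute samples at 1/60 Hz).
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical appliance signature vs. an observed window of the aggregate signal (watts)
signature = np.array([0, 1500, 1500, 200, 200, 0], dtype=float)
window = np.array([0, 0, 1480, 1510, 210, 190, 0], dtype=float)
print("DTW distance:", dtw_distance(signature, window))

A low DTW distance between a candidate window and a stored signature would indicate a likely occurrence of that appliance's state transition.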

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 82
284 Field-Testing a Digital Music Notebook

Authors: Rena Upitis, Philip C. Abrami, Karen Boese

Abstract:

The success of one-on-one music study relies heavily on the ability of the teacher to provide sufficient direction to students during weekly lessons so that they can successfully practice from one lesson to the next. Traditionally, these instructions are given in a paper notebook, where the teacher makes notes for the students after describing a task or demonstrating a technique. The ability of students to make sense of these notes varies according to their understanding of the teacher’s directions, their motivation to practice, their memory of the lesson, and their abilities to self-regulate. At best, the notes enable the student to progress successfully. At worst, the student is left rudderless until the next lesson takes place. Digital notebooks have the potential to provide a more interactive and effective bridge between music lessons than traditional pen-and-paper notebooks. One such digital notebook, Cadenza, was designed to streamline and improve teachers’ instruction, to enhance student practicing, and to provide the means for teachers and students to communicate between lessons. For example, Cadenza contains a video annotator, where teachers can offer real-time guidance on uploaded student performances. Using the checklist feature, teachers and students negotiate the frequency and type of practice during the lesson, which the student can then access during subsequent practice sessions. Following the tenets of self-regulated learning, goal setting and reflection are also featured. Accordingly, the present paper addressed the following research questions: (1) How does the use of the Cadenza digital music notebook engage students and their teachers?, (2) Which features of Cadenza are most successful?, (3) Which features could be improved?, and (4) Are student learning and motivation enhanced with the use of the Cadenza digital music notebook? The paper describes the results of 10 months of field-testing of Cadenza, structured around the four research questions outlined above. Six teachers and 65 students took part in the study. Data were collected through video-recorded lesson observations, digital screen captures, surveys, and interviews. Standard qualitative protocols for coding data and identifying themes were employed in the analysis. The results consistently indicated that teachers and students embraced the digital platform offered by Cadenza. The practice log and timer, the real-time annotation tool, the checklists, the lesson summaries, and the commenting features were found to be the most valuable functions, by students and teachers alike. Teachers also reported that students progressed more quickly with Cadenza and achieved higher examination results than students who were not using Cadenza. Teachers identified modifications to Cadenza that would make it an even more powerful way to support student learning. These modifications, once implemented, will move the tool well past its traditional notebook uses to new ways of motivating students to practise between lessons and to communicate with teachers about their learning. Improvements to the tool called for by the teachers included the ability to duplicate archived lessons, allowing for split-screen viewing, and adding goal setting to the teacher window. In the concluding section, the proposed modifications and their implications for self-regulated learning are discussed.

Keywords: digital music technologies, electronic notebooks, self-regulated learning, studio music instruction

Procedia PDF Downloads 254
283 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach

Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta

Abstract:

Santiago de Compostela became a stable object of literary representation during the period between approximately 1840 and 1915. This study offers a partial cartographic look at this process, suggesting that Compostela’s emergence as an object of literary representation paralleled the first stages of its development as a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often, the novels constitute ways of presenting the city to the outside, marking it for the gaze of others, as guidebooks do. That leads us to examine the ways in which the local is constructed and rendered communicable in other contexts. In this regard, we should also acknowledge that a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student, or a professor. The guidebooks coincide in this with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms that these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used ArcGIS for the mapping, defining the databases on the basis of an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni’s team in Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective, using elements found in the texts of our corpus (novels and tourist guides). Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (the sketching of itineraries, the selection of zones, and indexicalization), such as a foreigner’s visit guided by someone who knows the city or the description of one’s first entrance into the city. Finally, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín’s 1915 novel La casa de la Troya) and the first attempt to create package tours with Galicia as a destination, a joint venture of Galician and British business owners in the years immediately preceding the Great War. Literary cartography becomes a crucial instrument for digging deeply into the methods of cultural production of places. Through maps, the interaction between discursive forms seemingly as far removed from each other as novels and tourist guides becomes obvious, and it suggests the need to go deeper into the complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.
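A minimal, purely illustrative sketch of the kind of GIS workflow described above, assuming a small table of georeferenced place references tagged by discourse type; the column names, the titles "Guide A" and "Novel B", and the coordinates are hypothetical, and the original study used ArcGIS rather than the open-source stack shown here.

import geopandas as gpd
import pandas as pd

# Hypothetical corpus table: one row per place reference extracted from a text
records = pd.DataFrame({
    "text_type": ["novel", "guidebook", "novel"],             # discourse type
    "title": ["La casa de la Troya", "Guide A", "Novel B"],   # placeholders except the first
    "place": ["Rúa do Vilar", "Cathedral", "Alameda"],
    "lon": [-8.5448, -8.5446, -8.5485],                       # illustrative coordinates
    "lat": [42.8795, 42.8806, 42.8770],
})
gdf = gpd.GeoDataFrame(
    records,
    geometry=gpd.points_from_xy(records.lon, records.lat),
    crs="EPSG:4326",
)

# Colour the points by discourse type so references from novels and
# guidebooks can be compared on a single map
ax = gdf.plot(column="text_type", legend=True, markersize=40)
ax.set_title("Place references in the corpus (illustrative)")
ax.get_figure().savefig("compostela_references.png")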

Keywords: Compostela, literary geography, literary cartography, tourism

Procedia PDF Downloads 392
282 Features of Fossil Fuels Generation from Bazhenov Formation Source Rocks by Hydropyrolysis

Authors: Anton G. Kalmykov, Andrew Yu. Bychkov, Georgy A. Kalmykov

Abstract:

Nowadays, most oil reserves in Russia and all over the world are hard to recover. That is why oil companies are searching for new sources for hydrocarbon production. One such source might be high-carbon formations with unconventional reservoirs. The Bazhenov formation is a huge source rock formation located in West Siberia, which contains unconventional reservoirs in some areas. These reservoirs are formed by secondary processes and are difficult to predict. Only about one in five wells is drilled through unconventional reservoirs; in the others, the kerogen has low thermal maturity and the rocks have low petroleum potential. Therefore, there is demand for tertiary methods for in-situ cracking of kerogen and production of oil. Laboratory hydrous pyrolysis experiments on Bazhenov formation rocks were used to investigate features of the oil generation process. Experiments on Bazhenov rocks with different mineral compositions (silica concentration from 15 to 90 wt.%, clays 5-50 wt.%, carbonates 0-30 wt.%, kerogen 1-25 wt.%) and thermal maturities (from immature to late oil window kerogen) were performed in a retort under reservoir conditions. Rock samples of 50 g were placed in the retort, covered with water, and heated to temperatures from 250 to 400°C, with experiment durations ranging from several hours to one week. After the experiments, the retort was cooled to room temperature; the generated hydrocarbons were extracted with hexane, then separated from the solvent and weighed. The molecular composition of this synthesized oil was then investigated via GC-MS chromatography, and the characteristics of the rock samples after heating were measured via the Rock-Eval method. It was found that the amount of synthesized oil and its composition depend on the experimental conditions and the composition of the rocks. The highest amount of oil was produced at a temperature of 350°C after 12 hours of heating and reached up to 12 wt.% of the initial organic matter content of the rocks. At higher temperatures and longer heating times, secondary cracking of the generated hydrocarbons occurs, the mass of produced oil decreases, and the composition contains more hydrocarbons that need to be recovered by catalytic processes. If the temperature is lower than 300°C, the amount of produced oil is too low for the process to be economically effective. It was also found that silica and clay minerals act as catalysts. Selection of the heating conditions allows synthesized oil with a specified composition to be produced. Investigations of the kerogen after heating have shown that its thermal maturity increases, but the yield is only up to 35% of the maximum amount of synthetic oil. This limited yield results from the formation of gaseous hydrocarbons due to secondary cracking, as well as aromatization and coking of the kerogen. Future investigations will aim to increase the yield of synthetic oil. The results are in good agreement with theoretical data on kerogen maturation during oil production. The evaluated trends could be used as a tool for in-situ oil generation from shale rocks by thermal action.
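As a simple illustration of the yield bookkeeping implied above (synthetic oil expressed as wt.% of the initial organic matter), the sketch below uses assumed input values chosen only to mirror the order of magnitude reported in the abstract.

# Illustrative yield bookkeeping for a hydrous-pyrolysis run; all numbers are
# hypothetical and only mirror the kind of quantities reported in the abstract.
sample_mass_g = 50.0        # rock sample placed in the retort
toc_fraction = 0.15         # assumed organic-matter (kerogen) content, wt. fraction
oil_mass_g = 0.9            # hexane-extracted, solvent-free synthetic oil

organic_matter_g = sample_mass_g * toc_fraction
oil_yield_pct = 100.0 * oil_mass_g / organic_matter_g
print(f"oil yield: {oil_yield_pct:.1f} wt.% of initial organic matter")
# With these assumed inputs the yield is 12 wt.%, comparable to the value the
# abstract reports for the optimal 350 degC / 12 h conditions.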

Keywords: Bazhenov formation, fossil fuels, hydropyrolysis, synthetic oil

Procedia PDF Downloads 114
281 A Comparative Study of the Tribological Behavior of Bilayer Coatings for Machine Protection

Authors: Cristina Diaz, Lucia Perez-Gandarillas, Gonzalo Garcia-Fuentes, Simone Visigalli, Roberto Canziani, Giuseppe Di Florio, Paolo Gronchi

Abstract:

During their lifetime, industrial machines are often subjected to extreme chemical, mechanical, and thermal conditions. In some cases, the loss of efficiency comes from the degradation of the surface as a result of its exposure to abrasive environments that can cause wear. This is a common problem in industries of diverse nature, such as the food, paper, or concrete industries, among others. For this reason, proper material selection is of great importance. In the machine design context, stainless steels such as AISI 304 and 316 are widely used. However, the severity of the external conditions can require additional protection for the steel, and coating solutions are sometimes demanded in order to extend the lifespan of these materials. Therefore, the development of effective coatings with high wear resistance is of utmost technological relevance. In this research, bilayer coatings made of Titanium-Tantalum, Titanium-Niobium, Titanium-Hafnium, and Titanium-Zirconium have been developed by magnetron sputtering using PVD (Physical Vapor Deposition) technology. Their tribological behavior has been measured and evaluated under different environmental conditions. Two kinds of steels were used as substrates: AISI 304 and AISI 316. For comparison with these materials, a titanium alloy substrate was also employed. Regarding the characterization, the wear rate and friction coefficient were evaluated with a tribo-tester, using a pin-on-ball configuration with different lubricants such as tomato sauce, wine, olive oil, wet compost, and a mix of sand and concrete with water and NaCl, to approximate the results to real extreme conditions. In addition, topographical images of the wear tracks were obtained in order to gain more insight into the wear behavior, and scanning electron microscope (SEM) images were taken to evaluate the adhesion and quality of the coating. The characterization was completed with measurements of nanoindentation hardness and elastic modulus. Concerning the results, coating thicknesses varied from 100 nm (Ti-Zr layer) to 1.4 µm (Ti-Hf layer), and SEM images confirmed that the addition of the Ti layer improved the adhesion of the coatings. Moreover, the results showed that these coatings increased the wear resistance in comparison with the original substrates under environments of different severity. Furthermore, the nanoindentation results showed an improvement of the elastic strain to failure and a high modulus of elasticity (approximately 200 GPa). In conclusion, Ti-Ta, Ti-Zr, Ti-Nb, and Ti-Hf are very promising and effective coatings in terms of tribological behavior, considerably improving the wear resistance and friction coefficient of typically used machine materials.
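The abstract does not state how the wear rate was defined; as an assumption, the sketch below uses the common Archard-type specific wear rate k = V / (F·s), with purely hypothetical numbers for a coated and an uncoated track.

def specific_wear_rate(volume_loss_mm3, normal_load_n, sliding_distance_m):
    # Archard-type specific wear rate k = V / (F * s), in mm^3 / (N * m).
    # This is a common convention in pin/ball tribometer testing; the abstract
    # does not state which definition was used, so treat this as an assumption.
    return volume_loss_mm3 / (normal_load_n * sliding_distance_m)

# Hypothetical numbers for a coated sample vs. a bare stainless-steel substrate
print(specific_wear_rate(volume_loss_mm3=0.012, normal_load_n=5.0,
                         sliding_distance_m=200.0))   # coated sample
print(specific_wear_rate(volume_loss_mm3=0.250, normal_load_n=5.0,
                         sliding_distance_m=200.0))   # bare AISI 304 substrate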

Keywords: coating, stainless steel, tribology, wear

Procedia PDF Downloads 150
280 The Use of Brachytherapy in the Treatment of Liver Metastases: A Systematic Review

Authors: Mateusz Bilski, Jakub Klas, Emilia Kowalczyk, Sylwia Koziej, Katarzyna Kulszo, Ludmiła Grzybowska-Szatkowska

Abstract:

Background: Liver metastases are a common complication of primary solid tumors and significantly reduce patient survival. In the era of increasing diagnosis of oligometastatic disease and oligoprogression, methods of local treatment of metastases, i.e., metastasis-directed therapy (MDT), are becoming more important. Implementation of such treatment can be considered for liver metastases. To date, the mainstay of treatment for oligometastatic disease has been surgical resection, but not all patients qualify for the procedure. As an alternative to surgical resection, radiotherapy techniques have become available, including stereotactic body radiation therapy (SBRT) and high-dose interstitial brachytherapy (iBT). iBT is an invasive method that delivers very high doses of radiation from the inside of the tumor outward. This technique provides better tumor coverage than SBRT while having little impact on the surrounding healthy tissue, and it eliminates some concerns involving respiratory motion. Methods: We conducted a systematic review of the scientific literature on the use of brachytherapy in the treatment of liver metastases from 2018 to 2023, using PubMed and ResearchGate and following the PRISMA rules. Results: From 111 articles, 18 publications containing information on 729 patients with liver metastases were selected. iBT has been shown to provide high rates of tumor control. Among 14 patients with 54 unresectable RCC liver metastases, local tumor control (LTC) after iBT was 92.6% during a median follow-up of 10.2 months, and PFS was 3.4 months. In an analysis of 167 patients treated with a single brachytherapy fraction of 15-25 Gy, at 6- and 12-month follow-up, LRFS rates of 88.4-88.7% and 70.7-71.5%, PFS of 78.1% and 53.8%, and OS of 92.3-96.7% and 76.3-79.6%, respectively, were achieved. No serious complications were observed in any of the patients. Distant intrahepatic progression occurred later in patients with unresectable liver metastases after brachytherapy (PFS: 19.80 months) than in HCC patients (PFS: 13.50 months). A significant difference in LRFS between CRC patients (84.1% vs. 50.6%) and other histologies (92.4% vs. 92.4%) was noted, suggesting that a higher treatment dose is necessary for CRC patients. The average target dose for metastatic colorectal cancer was 40-60 Gy (compared to 100-250 Gy for HCC). To better assess sensitivity to therapy and predict side effects, it has been suggested that humoral mediators be evaluated. It was also shown that baseline levels of TNF-α, MCP-1 and VEGF, as well as NGF and CX3CL, correlated with both tumor volume and radiation-induced liver damage, one of the most serious complications of iBT, indicating their potential role as biomarkers of therapy outcome. Conclusions: The use of brachytherapy in the treatment of liver metastases of various cancers appears to be an interesting and relatively safe therapeutic alternative to surgery. An important challenge remains the selection of an appropriate brachytherapy method and radiation dose for the corresponding initial tumor type from which the metastasis originated.

Keywords: liver metastases, brachytherapy, CT-HDRBT, iBT

Procedia PDF Downloads 114
279 Impact of Informal Institutions on Development: Analyzing the Socio-Legal Equilibrium of Relational Contracts in India

Authors: Shubhangi Roy

Abstract:

Relational contracts (informal understandings not enforceable by law) are a common feature of most economies. However, their dominance is higher in developing countries. Such informality of economic sectors is often correlated with lower economic growth. The aim of this paper is to investigate whether informal arrangements, i.e., relational contracts, are a cause or a symptom of lower levels of economic and/or institutional development. The methodology involves an initial survey of 150 subjects in Northern India. The subjects are all members of occupations in which they transact frequently, ensuring uniformity in transaction volume. However, the subjects are from varied socio-economic backgrounds to ensure sufficient variance in transaction values, allowing us to understand the relationship between the amount of money involved and the method of transaction used, if any. The questions asked are both quantitative and qualitative, with the aim of observing both behavior and the motivation behind such behavior. An overarching similarity observed across all subjects’ responses is that in an economy like India, with pervasive corruption and delayed litigation, economic participants have created alternative social sanctions to deal with non-performers. In a society that functions predominantly on caste, class, and gender classifications, these sanctions could, in fact, be more cumbersome for a potential rule-breaker than the legal ramifications. Relational contracting is therefore a symptom of weak formal regulatory enforcement and dispute settlement mechanisms. Additionally, the study bifurcates such informal arrangements into two separate categories: (a) those that exist in addition to and augment a legal framework, creating an efficient socio-legal equilibrium, and (b) those in conflict with the legal system in place. This categorization is an important step in regulating informal arrangements. Instead of considering the entire gamut of such arrangements as counter-developmental, it helps decision-makers understand when to dismantle informal systems (the latter) and when to pivot around them (the former). The paper hypothesizes that social arrangements that support formal legal frameworks allow for cheaper enforcement of regulations, with a lower enforcement-cost burden on the state. On the other hand, norms which contradict legal rules will undermine the formal framework. Law infringement, in the presence of these norms, will have no impact on the reputation of the business or individual outside of the punishment imposed under the law. This is especially exacerbated in the Indian legal system, where enforcement of penalties for non-performance of contracts is low. In such a situation, the social norm will be adhered to more strictly by individuals than the legal norms. This greatly undermines the role of regulations. The paper concludes with recommendations that allow policy-makers and legal systems to encourage the former category of informal arrangements while discouraging norms that undermine legitimate policy objectives. Through this investigation, we will be able to expand our understanding of the tools of market development beyond regulations. This will allow academics and policymakers to harness social norms for less disruptive and more lasting growth.

Keywords: distribution of income, emerging economies, relational contracts, sample survey, social norms

Procedia PDF Downloads 165
278 The Effect of Rheological Properties and Spun/Meltblown Fiber Characteristics on “Hotmelt Bleed through” Behavior in High Speed Textile Backsheet Lamination Process

Authors: Kinyas Aydin, Fatih Erguney, Tolga Ceper, Serap Ozay, Ipar N. Uzun, Sebnem Kemaloglu Dogan, Deniz Tunc

Abstract:

In order to meet high growth rates in the baby diaper industry worldwide, high-speed textile backsheet (TBS) lamination lines have recently been introduced to the market for nonwoven/film lamination applications. TBS lamination is a process in which two substrates are bonded to each other via a hotmelt adhesive (HMA). The nonwoven (NW) lamination system basically consists of four components: a polypropylene (PP) nonwoven, a polyethylene (PE) film, the HMA, and the applicator system. Each component has a substantial effect on the process efficiency of the continuous line and on the final product properties. However, to keep the subject focused, we will address only the main challenges and possible solutions in this paper. The NW is often produced by the spunbond method (SSS or SMS configuration) and has a basis weight of 10-12 gsm (g/m²). NW rolls can have a width and length of up to 2,060 mm and 30,000 linear meters, respectively. The PE film is the second component in TBS lamination, usually a 12-14 gsm blown or cast breathable film. The HMA is a thermoplastic glue (mostly rubber based) that can be applied over a large range of viscosities. The main HMA application technology in TBS lamination is slot-die application, in which the HMA is spread in melt form on top of the NW across the whole width at high temperature. The NW is then passed over chiller rolls, with a certain open time depending on the line speed. HMAs are applied at levels that provide proper delamination strength in the cross and machine directions for the entire structure. Current TBS lamination line speeds and widths can be as high as 800 m/min and 2,100 mm, respectively, and the lines feature automated web tension control systems for winders and unwinders. When running continuous, trouble-free mass-production campaigns on fast industrial TBS lines, the rheological properties of the HMA and the micro-properties of the NW can have adverse effects on line efficiency and continuity. NW fiber orientation and fineness, as well as the micro-level properties of the spunbond/meltblown fabric composition, are significant factors affecting the degree of “HMA bleed-through.” As a result of this problem, frequent line stops are needed to clean the glue that accumulates on the chiller rolls, which significantly reduces line efficiency. HMA rheology is also important: to eliminate the bleed-through problem, one should have a good understanding of potential rheology-driven complications. Therefore, the application viscosity/temperature should be optimized in accordance with the line speed, line width, NW characteristics, and the required open time for a given HMA formulation. In this study, we will show practical aspects of potential preventative actions to minimize the HMA bleed-through problem, which may stem from both HMA rheological properties and NW spunbond/meltblown fiber characteristics.
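Two generic relations underlie the optimization described above: the HMA add-on weight follows from a simple mass balance (pump output divided by the coated area per unit time), and the open time from web travel distance divided by line speed. The sketch below uses these generic relations with hypothetical machine settings; it does not reproduce the authors' process parameters.

# Simple kinematic/mass-balance relations for a slot-die HMA station; the
# formulas are generic and all inputs below are hypothetical.
line_speed_m_min = 800.0      # m/min
web_width_m = 2.1             # m
pump_output_g_min = 3360.0    # g/min of HMA delivered to the slot die
die_to_nip_distance_m = 4.0   # m of web travel before the lamination nip

add_on_gsm = pump_output_g_min / (line_speed_m_min * web_width_m)   # g/m^2
open_time_s = 60.0 * die_to_nip_distance_m / line_speed_m_min       # seconds

print(f"HMA add-on: {add_on_gsm:.2f} gsm, open time: {open_time_s:.2f} s")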

Keywords: breathable, hotmelt, nonwoven, textile backsheet lamination, spun/melt blown

Procedia PDF Downloads 359
277 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites

Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek

Abstract:

Magnetorheological elastomers (MREs) are a unique type of material consisting of two components: a magnetic filler and an elastomeric matrix. Their properties can be tailored by applying an external magnetic field. The change in viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former factor strongly depends on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles have poor stability against temperature and acidic environments, several methods to overcome these drawbacks have been developed. In most cases, the preparation of core-shell structures has been employed as a suitable method to protect the magnetic particles against thermal and chemical oxidation. However, if the shell material is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed: with the in situ polymerization technique it is very difficult to control the polymerization rate, and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effectivity is calculated as the elastic modulus upon magnetic field application divided by the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desired. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this case, short polymer chains of different lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved. Since relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected, due to the presence of the short polymer chains. From the application point of view, such MREs can be utilized for magneto-resistors, piezoresistors, or pressure sensors, especially when a conducting shell is created on the magnetic particles. Therefore, the selection of the pyrrole-based monomer is crucial, since it allows a controllably thin layer of conducting polymer to be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell and dispersed in an elastomeric matrix, can also find use in electromagnetic wave shielding applications.
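The MR effectivity defined above (the field-on modulus relative to the off-state modulus) can be written as a one-line helper; the moduli below are hypothetical, and both the plain ratio and the percentage increase are shown, since both conventions appear in the MRE literature.

def mr_effectivity(g_on_kpa, g_off_kpa):
    # MR effect as described in the abstract: the field-on storage modulus
    # relative to the field-off (off-state) modulus. Returns both the plain
    # ratio and the relative increase in percent.
    ratio = g_on_kpa / g_off_kpa
    relative_increase_pct = 100.0 * (g_on_kpa - g_off_kpa) / g_off_kpa
    return ratio, relative_increase_pct

# Hypothetical moduli for a soft matrix filled with grafted carbonyl iron particles
ratio, increase = mr_effectivity(g_on_kpa=180.0, g_off_kpa=60.0)
print(f"G'_on / G'_off = {ratio:.1f}, relative increase = {increase:.0f}%")

The formula makes the role of the off-state stiffness explicit: a lower field-off modulus (looser cross-linking) raises the effectivity for the same field-on modulus.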

Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic wave shielding

Procedia PDF Downloads 209
276 Interconnections of Circular Economy, Circularity, and Sustainability: A Systematic Review and Conceptual Framework

Authors: Anteneh Dagnachew Sewenet, Paola Pisano

Abstract:

The concepts of circular economy, circularity, and sustainability are interconnected and promote a more sustainable future. However, previous studies have mainly focused on each concept individually, neglecting the relationships and gaps in the existing literature. This study aims to integrate and link these concepts in order to expand the theoretical and practical methods of scholars and professionals in pursuit of sustainability. The aim of this systematic literature review is to comprehensively analyze and summarize the interconnections between circular economy, circularity, and sustainability, and to develop a conceptual framework that can guide practitioners and serve as a basis for future research. The review employed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. A total of 78 articles were analyzed, drawn from the Scopus and Web of Science databases; articles were selected on the basis of predefined inclusion and exclusion criteria, and the PRISMA protocol guided the systematic analysis of the selected articles, including summarizing and systematizing their content. The analysis focused on the conceptualizations of circularity and its relationship with the circular economy and long-term sustainability, addressing the question of how circularity is conceptualized and related to both. The review provided a comprehensive overview of the interconnections between circular economy, circularity, and sustainability, and identified key themes, theoretical frameworks, empirical findings, and conceptual gaps in the literature. Through a rigorous analysis of scholarly articles, the study highlighted the importance of integrating these concepts for a more sustainable future. This study contributes to the existing literature by integrating and linking the concepts of circular economy, circularity, and sustainability: it expands the theoretical understanding of how these concepts relate to each other and provides a conceptual framework that can guide practitioners in implementing circular economy strategies and serve as a basis for future research in this field. By integrating these concepts, scholars and professionals can enhance their theoretical and practical methods in pursuit of a more sustainable future. The findings emphasize the need for a holistic approach to achieving sustainability goals and highlight conceptual gaps that can be addressed in future studies.

Keywords: circularity, circular economy, sustainability, innovation

Procedia PDF Downloads 106
275 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization

Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo

Abstract:

Population growth and industrial development imply an increase in energy demand and in the problems caused by greenhouse gas emissions, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical Water Gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows the conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through treatment at conditions above the critical point of water (a temperature of 374°C and a pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The variety of gas uses and energy forms that can be produced, depending on the kind of material gasified and the type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as with power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for obtaining H₂-rich streams from oily wastes, which represent a major problem for both the environment and human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed, with particular reference to the different feedstocks that can be used as raw material, the different reactors, and the energy recovery systems. For this purpose, a review of the current status of SCWG technologies has been carried out, by means of different classifications based on key features such as the feed treated or the type of reactor and other apparatus. This analysis allows the technology’s efficiency to be improved through the study of model calculations and their comparison with experimental data, the establishment of kinetics for the chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of the products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and process cycle, which can be conveniently adopted according to the waste characteristics. Regarding the future of the technology, the design of SCWG plants is still to be optimized to include energy recovery systems, in order to reduce the equipment and operating costs derived from the high temperature and pressure conditions necessary to bring water to the supercritical state, as well as to find solutions to corrosion and clogging of reactor components.
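A trivial bookkeeping helper based on the critical point of water quoted above (374°C, 221 bar), useful only for flagging whether nominal reactor conditions are supercritical; it is not a process model, and the example conditions are hypothetical.

# Minimal check of whether reactor conditions exceed the critical point of
# water quoted in the abstract (374 degC, 221 bar).
T_CRIT_C = 374.0
P_CRIT_BAR = 221.0

def is_supercritical(temperature_c, pressure_bar):
    return temperature_c > T_CRIT_C and pressure_bar > P_CRIT_BAR

for t, p in [(600.0, 250.0), (350.0, 250.0), (600.0, 200.0)]:
    state = "supercritical" if is_supercritical(t, p) else "subcritical"
    print(f"{t:.0f} degC, {p:.0f} bar -> {state}")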

Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy

Procedia PDF Downloads 147
274 Creating Moments and Memories: An Evaluation of the Starlight 'Moments' Program for Palliative Children, Adolescents and Their Families

Authors: C. Treadgold, S. Sivaraman

Abstract:

The Starlight Children's Foundation (Starlight) is an Australian non-profit organisation that delivers programs, in partnership with health professionals, to support children, adolescents, and their families who are living with a serious illness. While supporting children and adolescents with life-limiting conditions has always been a feature of Starlight's work, providing a dedicated program, specifically targeting and meeting the needs of the paediatric palliative population, is a recent area of focus. Recognising the challenges in providing children’s palliative services, Starlight initiated a research and development project to better understand and meet the needs of this group. The aim was to create a program which enhances the wellbeing of children, adolescents, and their families receiving paediatric palliative care in their community through the provision of on-going, tailored, positive experiences or 'moments'. This paper will present the results of the formative evaluation of this unique program, highlighting the development processes and outcomes of the pilot. The pilot was designed using an innovation methodology, which included a number of research components. There was a strong belief that it needed to be delivered in partnership with a dedicated palliative care team, helping to ensure the best interests of the family were always represented. This resulted in Starlight collaborating with both the Victorian Paediatric Palliative Care Program (VPPCP) at the Royal Children's Hospital, Melbourne, and the Sydney Children's Hospital Network (SCHN) to pilot the 'Moments' program. As experts in 'positive disruption', with a long history of collaborating with health professionals, Starlight was well placed to deliver a program which helps children, adolescents, and their families to experience moments of joy and connection and achieve their own sense of accomplishment. Building on Starlight’s evidence-based approach and experience in creative service delivery, the program aims to use the power of 'positive disruption' to brighten the lives of this group and create important memories. The clinical and Starlight team members collaborate to ensure that the child and family are at the centre of the program. The design of each experience is specific to their needs and ensures the creation of positive memories and family connection, with each moment intended to enhance quality of life. The partnership with the VPPCP and SCHN has allowed the program to reach families across metropolitan and regional locations. In late 2019, a formative evaluation of the pilot was conducted, utilising both quantitative and qualitative methodologies to document the delivery and outcomes of the program. Central to the evaluation were the interviews conducted with both clinical teams and families in order to gain a comprehensive understanding of the impact of and satisfaction with the program. The findings, which will be shared in this presentation, provide practical insight into the delivery of the program, the key elements for its success with families, and areas which could benefit from additional research and focus. The presentation will use stories and case studies from the pilot to highlight the impact of the program and discuss what opportunities, challenges, and learnings emerged.

Keywords: children, families, memory making, pediatric palliative care, support

Procedia PDF Downloads 99
273 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make for a pharmacovigilance monitoring process that is often tedious and time-consuming. Objective: Develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor’s safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy’s law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of these criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours consumed by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department. Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). The algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include improving it into a fully automated application, thereby completely eliminating the manual screening process.
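For illustration only, the sketch below applies the Hy's law and R-value concepts named above using commonly cited thresholds (ALT ≥ 3×ULN together with total bilirubin ≥ 2×ULN; R = (ALT/ULN)/(ALP/ULN), hepatocellular if R ≥ 5, cholestatic if R ≤ 2, mixed otherwise). The laboratory values are hypothetical, and the sponsor's actual SQL/Spotfire selection criteria are not reproduced here.

def r_value(alt, alt_uln, alp, alp_uln):
    # Pattern-of-injury R-value: (ALT/ULN) / (ALP/ULN).
    return (alt / alt_uln) / (alp / alp_uln)

def injury_pattern(r):
    # Commonly used cut-offs; confirm against the project's own criteria.
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

def meets_hys_law(alt, alt_uln, tbili, tbili_uln, alp, alp_uln):
    # Simplified Hy's law screen: ALT >= 3x ULN together with total bilirubin
    # >= 2x ULN and a predominantly hepatocellular pattern of injury.
    return (alt >= 3 * alt_uln
            and tbili >= 2 * tbili_uln
            and injury_pattern(r_value(alt, alt_uln, alp, alp_uln)) == "hepatocellular")

# Hypothetical case: ALT 420 U/L (ULN 40), ALP 130 U/L (ULN 120), Tbili 3.1 mg/dL (ULN 1.2)
r = r_value(420, 40, 130, 120)
print(f"R = {r:.1f} ({injury_pattern(r)}), Hy's law flag:",
      meets_hys_law(420, 40, 3.1, 1.2, 130, 120))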

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 152