Search results for: function of language
5569 Clustering-Based Threshold Model for Condition Rating of Concrete Bridge Decks
Authors: M. Alsharqawi, T. Zayed, S. Abu Dabous
Abstract:
To ensure the safety and serviceability of bridge infrastructure, accurate condition assessment and rating methods are needed to provide a basis for bridge Maintenance, Repair and Replacement (MRR) decisions. In North America, the common practice for assessing the condition of bridges is visual inspection. This practice is limited to detecting surface defects and external flaws. Further, the thresholds that define the severity of bridge deterioration are selected arbitrarily. The current research discusses the main deteriorations and defects identified during visual inspection and Non-Destructive Evaluation (NDE). NDE techniques are becoming popular for augmenting visual examination during inspection to detect subsurface defects. Quality inspection data and accurate condition assessment and rating are the basis for determining appropriate MRR decisions. Thus, in this paper, a novel method for bridge condition assessment based on Quality Function Deployment (QFD) theory is presented. The QFD model is designed to provide an integrated condition by evaluating both the surface and subsurface defects of concrete bridges. Moreover, an integrated condition rating index with four thresholds is developed based on the QFD condition assessment model using the K-means clustering technique. Twenty case studies are analyzed by applying the QFD model and implementing the developed rating index. The results from the analyzed case studies show that the proposed threshold model produces robust MRR recommendations consistent with the decisions and recommendations made by bridge managers on these projects. The proposed method is expected to advance the state of the art of bridge condition assessment and rating.
Keywords: concrete bridge decks, condition assessment and rating, quality function deployment, k-means clustering technique
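The four-threshold rating index described above is derived with K-means clustering over condition scores. A minimal sketch of the idea, using hypothetical one-dimensional QFD condition scores (the scale and values are illustrative, not taken from the paper):

```python
import random

def kmeans_1d(values, k, iters=100, seed=0):
    """Simple 1-D k-means: returns sorted cluster centroids."""
    rng = random.Random(seed)
    centroids = sorted(rng.sample(values, k))
    for _ in range(iters):
        # Assign each value to its nearest centroid
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        # Recompute centroids; keep the old one if a cluster empties
        new = sorted(sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters))
        if new == centroids:
            break
        centroids = new
    return centroids

def thresholds(centroids):
    """Rating boundaries = midpoints between adjacent centroids."""
    return [(a + b) / 2 for a, b in zip(centroids, centroids[1:])]

# Hypothetical QFD condition scores (0-10 scale) for 20 bridge decks
scores = [1.2, 1.8, 2.1, 2.4, 3.9, 4.2, 4.5, 4.8, 5.1,
          6.0, 6.3, 6.7, 7.0, 7.2, 8.4, 8.8, 9.0, 9.2, 9.5, 9.8]
cents = kmeans_1d(scores, 4)
cuts = thresholds(cents)  # three cut points -> four rating bands
```

The three cut points partition the score range into four condition bands, which is the structure of the proposed rating index.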
Procedia PDF Downloads 226
5568 Preserving Urban Cultural Heritage with Deep Learning: Color Planning for Japanese Merchant Towns
Authors: Dongqi Li, Yunjia Huang, Tomo Inoue, Kohei Inoue
Abstract:
Urban cultural heritage is facing the impact and destruction of modernization and urbanization. Many historical areas are losing their historical information and regional cultural characteristics, so it is necessary to carry out systematic color planning for historical areas in conservation. Japan, an early adopter of urban color planning, has a systematic approach to it. Hence, this paper selects five merchant towns from the category of important traditional building preservation areas in Japan as the subject of this study to explore the color structure and emotion of this type of historic area. First, an image semantic segmentation method identifies the buildings, roads, and landscape environments. Their color data were extracted for color composition and emotion analysis to summarize their common features. Second, keywords were extracted from the collected Internet evaluations using natural language processing. The correlation analysis of the color structure and keywords provides a valuable reference for conservation decisions for this type of historic town area. This paper also combines the color structure and Internet evaluation results with generative adversarial networks to generate predicted images of color structure improvements and color improvement schemes. The methods and conclusions of this paper can provide new ideas for the digital management of environmental colors in historic districts and a valuable reference for the inheritance of local traditional culture.
Keywords: historic districts, color planning, semantic segmentation, natural language processing
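Once a segmentation model has labeled each pixel as building, road, or landscape, per-class color statistics can be summarized. A minimal sketch under the assumption that the segmentation output is available as per-pixel (class label, RGB) pairs; the class names and pixel values are illustrative, not from the paper's dataset:

```python
from collections import defaultdict

def mean_color_per_class(labels, pixels):
    """Average RGB per semantic class (e.g., building / road / landscape)."""
    sums = defaultdict(lambda: [0, 0, 0, 0])  # r, g, b, pixel count
    for lab, (r, g, b) in zip(labels, pixels):
        s = sums[lab]
        s[0] += r; s[1] += g; s[2] += b; s[3] += 1
    return {lab: (round(s[0] / s[3]), round(s[1] / s[3]), round(s[2] / s[3]))
            for lab, s in sums.items()}

# Illustrative segmentation output: one class label and RGB triple per pixel
labels = ["building", "building", "road"]
pixels = [(200, 100, 50), (100, 50, 150), (90, 90, 90)]
palette = mean_color_per_class(labels, pixels)
```

In practice the same aggregation would run over full segmentation masks, and richer statistics (hue histograms, dominant colors) would feed the emotion analysis.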
Procedia PDF Downloads 91
5567 No Histological and Biochemical Changes Following Administration of Tenofovir Nanoparticles: Animal Model Study
Authors: Aniekan Peter, ECS Naidu, Edidiong Akang, U. Offor, R. Kalhapure, A. A. Chuturgoon, T. Govender, O. O. Azu
Abstract:
Introduction: Nano-drugs are novel innovations in the management of the human immunodeficiency virus (HIV) pandemic, especially resistant strains of the virus in their sanctuary sites: the testis and the brain. There are safety concerns to be addressed to achieve the full potential of this new drug delivery system. Aim of study: Our study was designed to investigate the toxicity profile of Tenofovir Nanoparticles (TDF-N) synthesized by the University of KwaZulu-Natal (UKZN) Nano-team for the prevention and treatment of HIV infection. Methodology: Ten adult male Sprague-Dawley rats maintained at the Animal House of the Biomedical Resources Unit, UKZN, were used for the study. The animals were weighed and divided into two groups of 5 animals each. Control animals (A) were administered normal saline. A therapeutic dose (4.3 mg/kg) of TDF-N was administered to group B. At the end of four weeks, the animals were weighed and sacrificed. The liver and kidney were removed, fixed in formal saline, processed and stained using H/E, PAS and MT stains for light microscopy. Serum was obtained for renal function tests (RFT), liver function tests (LFT) and full blood count (FBC) using appropriate analysers. Cellular measurements were done using ImageJ and Leica software 2.0. Data were analysed using GraphPad 6; p values < 0.05 were considered significant. Results: We report no histological alterations in the liver, kidney, FBC, LFT and RFT between the TDF-N animals and the saline control. There were no significant differences in weight, organo-somatic index or histological measurements in the treatment group when compared with the saline control. Conclusion/recommendations: TDF-N was not toxic to the liver, kidney or blood cells in our study. More studies using human subjects are recommended.
Keywords: tenofovir nanoparticles, liver, kidney, blood cells
Procedia PDF Downloads 189
5566 CybeRisk Management in Banks: An Italian Case Study
Authors: E. Cenderelli, E. Bruno, G. Iacoviello, A. Lazzini
Abstract:
The financial sector is exposed to the risk of cyber-attacks like any other industrial sector. Furthermore, the topic of CybeRisk (cyber risk) has become particularly relevant given that Information Technology (IT) attacks have increased drastically in recent years and cannot be stopped by single organizations, requiring a response at the international and national level. IT risk is never a matter purely for the IT manager, although he clearly plays a key role. A bank's risk management function requires a thorough understanding of the evolving risks as well as the tools and practical techniques available to address them. In response to European and national legislation regarding CybeRisk in the financial system, banks are therefore called upon to strengthen the operational model for CybeRisk management. This will require an important change, with more intense collaboration with the structures that deal with information security, for the development of an ad hoc system for the evaluation and control of this type of risk. The aim of the work is to propose a framework for the management and control of CybeRisk that will bridge the gap in the literature regarding the understanding and consideration of CybeRisk as an integral part of business management. The IT function has a strong relevance in the management of CybeRisk, which is perceived mainly as operational risk, but with a positive tendency on the part of risk management toward the identification of CybeRisk assessment methods that are increasingly complete, quantitative and able to better describe the possible impacts on the business. The paper provides answers to the research questions: Is it possible to define a CybeRisk governance structure able to support the comparison between risk and security? How can the relationships between IT assets be integrated into a CybeRisk assessment framework to guarantee a system of protection and control of risks?
From a methodological point of view, this research uses a case study approach. The choice of “Monte dei Paschi di Siena” was determined by the specific features of one of Italy’s biggest lenders. An intensive research strategy was chosen: an in-depth study of reality. The case study methodology is an empirical approach to exploring a complex and current phenomenon that develops over time. The use of cases also has the advantage of allowing the deepening of aspects concerning the "how" and "why" of contemporary events, over which the scholar has little control. The research is based on quantitative data and qualitative information obtained through semi-structured interviews of an open-ended nature and questionnaires to directors, members of the audit committee, risk, IT and compliance managers, and those responsible for the internal audit function and anti-money laundering. The added value of the paper can be seen in the development of a framework based on a mapping of IT assets from which it is possible to identify their relationships for the purposes of more effective management and control of cyber risk.
Keywords: bank, CybeRisk, information technology, risk management
Procedia PDF Downloads 235
5565 The Facilitatory Effect of Phonological Priming on Visual Word Recognition in Arabic as a Function of Lexicality and Overlap Positions
Authors: Ali Al Moussaoui
Abstract:
An experiment was designed to assess the performance of 24 Lebanese adults (mean age 29:5 years) in a lexical decision making (LDM) task, to find out how the facilitatory effect of phonological priming (PP) affects the speed of visual word recognition in Arabic as lexicality (wordhood) and phonological overlap positions (POP) vary. The experiment falls in line with previous research on phonological priming in the light of the cohort theory and in relation to visual word recognition. The experiment also builds on research on the Arabic language, in which the importance of the consonantal root as a distinct morphological unit is confirmed. Based on previous research, it is hypothesized that (1) PP has a facilitating effect in LDM with words but not with nonwords and (2) final phonological overlap between the prime and the target is more facilitatory than initial overlap. An LDM task was programmed in the PsychoPy application. Participants had to decide whether a target (e.g., bayn ‘between’) preceded by a prime (e.g., bayt ‘house’) is a word or not. There were 4 conditions: no PP (NP), nonwords priming nonwords (NN), nonwords priming words (NW), and words priming words (WW). The conditions were simultaneously controlled for word length, wordhood, and POP. The interstimulus interval was 700 ms. Within the PP conditions, POP was controlled for, with 3 overlap positions between the primes and the targets: initial (e.g., asad ‘lion’ and asaf ‘sorrow’), final (e.g., kattab ‘cause to write’ 2sg-mas and rattab ‘organize’ 2sg-mas), or two-segmented (e.g., namle ‘ant’ and naħle ‘bee’). There were 96 trials, 24 in each condition, using a within-subject design. The results show that, concerning (1), the highest average reaction time (RT) is that in NN, followed first by NW and finally by WW. There is statistical significance only between the pairs NN-NW and NN-WW.
Regarding (2), the shortest RT is that in the two-segmented overlap condition, followed by the final POP in first place and the initial POP in last place. The difference between the two-segmented and the initial overlap is significant, while the other pairwise comparisons are not. Based on these results, PP emerges as a facilitatory phenomenon that is highly sensitive to lexicality and POP. While PP can have a facilitating effect under lexicality, it shows no facilitation in its absence, which is consistent with several previous findings. Participants are found to be more sensitive to final phonological overlap than to initial overlap, which also coincides with a body of earlier literature. The results contradict the cohort theory’s stress on the onset overlap position and, instead, give more weight to final overlap, and even heavier weight to the two-segmented one. In conclusion, this study confirms the facilitating effect of PP with words but not when stimuli (at least the primes, and at most both the primes and targets) are nonwords. It also shows that two-segmented priming is the most influential in LDM in Arabic.
Keywords: lexicality, phonological overlap positions, phonological priming, visual word recognition
Procedia PDF Downloads 189
5564 Simulation of GAG-Analogue Biomimetics for Intervertebral Disc Repair
Authors: Dafna Knani, Sarit S. Sivan
Abstract:
Aggrecan, one of the main components of the intervertebral disc (IVD), belongs to the family of proteoglycans (PGs), which are composed of glycosaminoglycan (GAG) chains covalently attached to a core protein. Its primary function is to maintain tissue hydration, and hence disc height, under the high loads imposed by muscle activity and body weight. Significant PG loss is one of the first indications of disc degeneration. A possible solution to recover disc function is to inject a synthetic hydrogel into the joint cavity, hence mimicking the role of PGs. One of the hydrogels proposed is a GAG analogue, based on sulfate-containing polymers, which are responsible for hydration in disc tissue. In the present work, we used molecular dynamics (MD) to study the effect of hydrogel crosslinking (type and degree) on the swelling behavior of the suggested GAG-analogue biomimetics by calculating the cohesive energy density (CED), solubility parameter, enthalpy of mixing (ΔEmix) and the interactions between the molecules in pure form and as a mixture with water. The simulation results showed that hydrophobicity plays an important role in the swelling of the hydrogel, as indicated by the linear correlation observed between the solubility parameter values of the copolymers and the crosslinker weight ratio (w/w); this correlation was found useful in predicting the amount of PEGDA needed for the desirable hydration behavior of (CS)₄-peptide. Enthalpy of mixing calculations showed that all the GAG analogues, (CS)₄ and (CS)₄-peptide, are water-soluble; radial distribution function analysis revealed that they form interactions with water molecules, which is important for the hydration process. To conclude, our simulation results, beyond supporting the experimental data, can be used as a useful predictive tool in the future development of biomaterials, such as disc replacements.
Keywords: molecular dynamics, proteoglycans, enthalpy of mixing, swelling
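The quantities named above are linked by standard relations used in MD miscibility studies: the Hildebrand solubility parameter is the square root of the cohesive energy density, and the enthalpy of mixing per unit volume can be estimated from the CEDs of the pure components and the mixture. A minimal sketch with illustrative numbers (the CED values and volume fraction are assumptions, not taken from the paper):

```python
import math

def solubility_parameter(e_coh, volume):
    """Hildebrand solubility parameter: delta = sqrt(CED) = sqrt(E_coh / V)."""
    return math.sqrt(e_coh / volume)

def delta_e_mix(ced_a, ced_b, ced_mix, phi_a):
    """Enthalpy of mixing per unit volume from cohesive energy densities:
    dEmix = phi_A*CED_A + phi_B*CED_B - CED_mix; small or negative values
    suggest miscibility (e.g., a water-soluble GAG analogue)."""
    phi_b = 1.0 - phi_a
    return phi_a * ced_a + phi_b * ced_b - ced_mix

# Illustrative CEDs in J/cm^3 for polymer (a), water (b), and their mixture
dmix = delta_e_mix(ced_a=400.0, ced_b=2300.0, ced_mix=1850.0, phi_a=0.25)
# Water: E_coh ~ 41500 J/mol over a molar volume of ~18 cm^3/mol
delta_w = solubility_parameter(e_coh=41500.0, volume=18.0)
```

In an MD workflow, E_coh and V come from the simulation trajectories rather than being supplied by hand as they are here.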
Procedia PDF Downloads 78
5563 Beta-Carotene Attenuates Cognitive and Hepatic Impairment in Thioacetamide-Induced Rat Model of Hepatic Encephalopathy via Mitigation of MAPK/NF-κB Signaling Pathway
Authors: Marawan Abd Elbaset Mohamed, Hanan A. Ogaly, Rehab F. Abdel-Rahman, Ahmed-Farid O.A., Marwa S. Khattab, Reham M. Abd-Elsalam
Abstract:
Liver fibrosis is a severe worldwide health concern arising from various chronic liver disorders. Hepatic encephalopathy (HE) is one of its most common complications, affecting liver and brain cognitive function. Beta-Carotene (B-Car) is an organic, strongly colored red-orange pigment abundant in fungi, plants, and fruits. The study aimed to assess the neuroprotective potential of B-Car against thioacetamide (TAA)-induced neurotoxicity and cognitive decline in HE in rats. Hepatic encephalopathy was induced by TAA (100 mg/kg, i.p.) three times per week for two weeks. B-Car was given orally (10 or 20 mg/kg) daily for two weeks after the TAA injections. Organ to body weight ratio, serum transaminase activities, the liver’s antioxidant parameters, ammonia, and liver histopathology were assessed. Also, the brain’s mitogen-activated protein kinase (MAPK), nuclear factor kappa B (NF-κB), antioxidant parameters, adenosine triphosphate (ATP), adenosine monophosphate (AMP), norepinephrine (NE), dopamine (DA), serotonin (5-HT), 5-hydroxyindoleacetic acid (5-HIAA), cAMP response element-binding protein (CREB) expression and B-cell lymphoma 2 (Bcl-2) expression were measured. The brain’s cognitive functions (spontaneous locomotor activity, rotarod performance test, object recognition test) were assessed. B-Car prevented alteration of the brain’s cognitive function in a dose-dependent manner. The histopathological outcomes supported this biochemical evidence. Based on these results, B-Car could be considered for treating the brain’s neurotoxic consequences of HE via downregulation of the MAPK/NF-κB signaling pathways.
Keywords: beta-carotene, liver injury, MAPK, NF-κB, rat, thioacetamide
Procedia PDF Downloads 157
5562 Transcriptome and Metabolome Analysis of a Tomato Solanum Lycopersicum STAYGREEN1 Null Line Generated Using Clustered Regularly Interspaced Short Palindromic Repeats/Cas9 Technology
Authors: Jin Young Kim, Kwon Kyoo Kang
Abstract:
The SGR1 (STAYGREEN1) protein is a critical regulator of chlorophyll degradation and senescence in plant leaves. The functions and mechanisms of tomato SGR1 action are poorly understood and worthy of further investigation. To investigate the function of the SGR1 gene, we generated an SGR1-knockout (KO) null line via clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9-mediated gene editing and conducted RNA sequencing and gas chromatography tandem mass spectrometry (GC-MS/MS) analysis to identify the differentially expressed genes. The SlSGR1 (Solanum lycopersicum SGR1) knockout null line clearly showed a turbid brown color with significantly higher chlorophyll and carotenoid content compared to wild-type (WT) fruit. Differential gene expression analysis revealed 728 differentially expressed genes (DEGs) between the WT and the sgr1 #1-6 line, including 263 downregulated and 465 upregulated genes, for which the fold change was >2 and the adjusted p-value was <0.05. Most of the DEGs were related to photosynthesis and chloroplast function. In addition to the pigment and carotenoid changes, the sgr1 #1-6 line accumulated key primary metabolites such as sucrose and its derivatives (fructose, galactinol, raffinose), glycolytic intermediates (glucose, G6P, Fru6P) and tricarboxylic acid (TCA) cycle intermediates (malate and fumarate). Taken together, the transcriptome and metabolite profiles of the SGR1-KO line presented here provide evidence for the mechanisms underlying the effects of SGR1 and the molecular pathways involved in chlorophyll degradation and carotenoid biosynthesis.
Keywords: tomato, CRISPR/Cas9, null line, RNA-sequencing, metabolite profiling
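The DEG selection criterion above (fold change > 2, adjusted p-value < 0.05) is straightforward to express in code. A minimal sketch over hypothetical (gene, fold change, adjusted p) tuples; the gene names and values are illustrative, not from the study's dataset:

```python
def select_degs(results, fc_cut=2.0, padj_cut=0.05):
    """Split genes into up- and down-regulated DEGs using a fold-change
    threshold and an adjusted p-value cutoff."""
    up, down = [], []
    for gene, fold_change, padj in results:
        if padj >= padj_cut:
            continue  # not significant after multiple-testing correction
        if fold_change > fc_cut:
            up.append(gene)
        elif fold_change < 1.0 / fc_cut:
            down.append(gene)
    return up, down

# Hypothetical results: (gene, KO-vs-WT fold change, adjusted p-value)
results = [("PsbA", 2.8, 0.001), ("Lhcb1", 0.3, 0.010),
           ("PSY1", 3.5, 0.200), ("RbcS", 1.2, 0.020)]
up, down = select_degs(results)
```

Applying the same filter to the full expression table would yield the up- and down-regulated gene lists reported in the abstract.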
Procedia PDF Downloads 128
5561 Competitor Integration with Voice of Customer Ratings in QFD Studies Using Geometric Mean Based on AHP
Authors: Zafar Iqbal, Nigel P. Grigg, K. Govindaraju, Nicola M. Campbell-Allen
Abstract:
Quality Function Deployment (QFD) is a structured approach. It has been used to improve the quality of products and processes in a wide range of fields. Using this systematic tool, practitioners normally rank Voice of Customer ratings (VoCs) in order to produce Improvement Ratios (IRs), which become the basis for prioritising process/product design or improvement activities. In one matrix of the House of Quality (HOQ), competitors are rated. The method of obtaining improvement ratios (IRs) does not always integrate the competitors’ ratings in a systematic way that fully utilises competitor rating information. This can have the effect of diverting QFD practitioners’ attention from a potentially important VOC to a less important VOC. In order to enhance QFD analysis, we present a more systematic method for integrating competitor ratings, utilising the geometric mean of the customer rating matrix. In this paper we develop a new approach, based on the Analytic Hierarchy Process (AHP), in which we generate a matrix of multiple comparisons of all competitors and derive a geometric mean for each competitor. For each VOC, an improved IR is derived which, we argue herein, enhances the initial VOC importance ratings by integrating more information about competitor performance. In this way, our method can help overcome one of the possible shortcomings of QFD. We then use a published QFD example from the literature as a case study to demonstrate the use of the new AHP-based IRs, and show how these can be used to re-rank existing VOCs to, arguably, better achieve the goal of customer satisfaction in relation to VOC ratings and competitors’ rankings. We demonstrate how the two-dimensional AHP-based geometric mean derived from the multiple competitor comparisons matrix can be useful for analysing competitors’ rankings.
Our method utilises an established methodology (AHP) applied within an established application (QFD), but in an original way (through the competitor analysis matrix), to achieve a novel improvement.
Keywords: quality function deployment, geometric mean, improvement ratio, AHP, competitors ratings
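The core computation, deriving a geometric-mean priority for each competitor from an AHP pairwise comparison matrix, can be sketched as follows. The matrix entries are illustrative Saaty-scale judgments, not values from the paper's case study:

```python
import math

def geometric_mean_weights(pairwise):
    """Row geometric means of an AHP pairwise comparison matrix,
    normalized so the priority weights sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical pairwise comparisons of 3 competitors (Saaty 1-9 scale);
# entry [i][j] says how strongly competitor i outperforms competitor j
A = [[1.0,     3.0,     5.0],
     [1.0 / 3, 1.0,     2.0],
     [1.0 / 5, 1.0 / 2, 1.0]]
w = geometric_mean_weights(A)  # priority weight per competitor
```

In the proposed method, such competitor weights would then feed into an adjusted Improvement Ratio for each VOC, rather than being discarded as in a conventional HOQ.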
Procedia PDF Downloads 373
5560 Bronchospasm Analysis Following the Implementation of a Program of Maximum Aerobic Exercise in Active Men
Authors: Sajjad Shojaeidoust, Mohsen Ghanbarzadeh, Abdolhamid Habibi
Abstract:
Exercise-induced bronchospasm (EIB) is a transitory condition of airflow obstruction that is associated with physical activity. High ventilation can increase heat loss and reduce moisture in the airways, raising the resistance of the trachea; such pathophysiological mechanisms are among the causes of EIB. Accordingly, studying some parameters of pulmonary function (FVC, FEV1) among active people seems essential. The aim of this study was to analyze bronchospasm following the implementation of a program of maximum aerobic exercise in active men at Chamran University of Ahwaz. Method: In this quasi-experimental study, the population consisted of all students at Chamran University. From 55 participants, 15 were randomly selected as the experimental group. In this study, maximum oxygen consumption was measured first, and then, based on the maximum oxygen consumed, the active individuals were identified. After a five-minute warm-up, the Strand treadmill exercise test was taken (one session), and pulmonary parameters were measured at both pre- and post-test (spirometer). After testing the data for normality with the KS test and finding them non-normal, the Wilcoxon test was used to analyze the data. The significance level for all statistical surveys was considered p ≤ 0.05. Results: The results showed that the ventilation and bronchospasm factors (FVC, FEV1) in the pre-test and post-test showed no significant difference among the active people (p ≥ 0.05). Discussion and conclusion: Based on the results observed in this study, it appears that pulmonary indices in active individuals increased after the aerobic test. The increase in these indicators in active people is due to increased volume and elasticity of the lungs. In other words, the pulmonary index is affected by the rib muscles.
It is considered that improvements in respiratory muscle strength and endurance have raised FEV1 in the active cases.
Keywords: aerobic active maximum, bronchospasm, pulmonary function, spirometer
Procedia PDF Downloads 291
5559 Understanding Algerian International Student Mental Health Experiences in UK (United Kingdom) Universities: Difficulties of Disclosure, Help-Seeking and Coping Strategies
Authors: Nesrine Boussaoui
Abstract:
Background: International students often encounter challenges while studying in the UK, including communication and language barriers, a lack of social networks, and socio-cultural differences that adversely impact their mental health. For Algerian international students (AISs), these challenges may be heightened, as English is not their first language and the culture of their homeland is substantially different from British culture, yet research has yet to incorporate their experiences and perspectives. Aim: The current study aimed to explore AISs’ 1) understandings of mental health; 2) issues of disclosure for mental health difficulties; and 3) mental health help-seeking and coping strategies. Method: In-depth, audio-recorded semi-structured interviews (n = 20) with AISs in UK universities were conducted. An inductive, reflexive thematic analysis approach was used. Findings: The following themes and associated sub-themes were developed: (1) Algerian cultural influences on mental health understanding (socio-cultural comparisons); (2) the paradox of the family (pressure vs. support); (3) stigma and fear of disclosure; (4) barriers to formal help-seeking (informal disclosure as a first step to seeking help); (5) communication barriers (resort to the mother tongue to disclose); (6) self-reliance and religious coping. Conclusion: Recognising and understanding the challenges faced by AISs in terms of disclosure and mental health help-seeking is essential to reduce barriers to formal help-seeking. Informal disclosure among peers is often the first step to seeking help. Enhancing practitioners’ cultural competences and awareness of diverse understandings of mental health and of the role of religious coping among AISs may have transferable benefits for the wider international student population.
Keywords: mental health, stigma, coping, disclosure
Procedia PDF Downloads 146
5558 Performance Evaluation of a Very High-Resolution Satellite Telescope
Authors: Walid A. Attia, Taher M. Bazan, Fawzy Eltohamy, Mahmoud Fathy
Abstract:
System performance evaluation is an essential stage in the design of high-resolution satellite telescopes prior to the development process. In this paper, a system performance evaluation of a very high-resolution satellite telescope is investigated. The evaluated system has a Korsch optical scheme design. This design has been compared, in another paper, with a three-mirror anastigmat (TMA) scheme design, where the Korsch configuration showed better results. The investigated system is based on the Korsch optical design integrated with a time-delay and integration charge coupled device (TDI-CCD) sensor to achieve a ground sampling distance (GSD) of 25 cm. The key performance metrics considered are the spatial resolution, the signal to noise ratio (SNR) and the total modulation transfer function (MTF) of the system. In addition, the national image interpretability rating scale (NIIRS) metric is assessed to predict the image quality according to the modified general image quality equation (GIQE). Based on the orbital, optical and detector parameters, the estimated GSD is found to be 25 cm. The SNR has been analyzed at different illumination conditions of target albedos, sun angles and sensor angles. The system MTF has been computed with diffraction, aberration, optical manufacturing, smear and detector sampling as the main contributors to the evaluated MTF. Finally, the system performance evaluation results show that the computed MTF value is around 0.08 at the Nyquist frequency, the SNR is 130 at an albedo of 0.2 with a nadir viewing angle, and the predicted NIIRS is on the order of 6.5, which implies very good system image quality.
Keywords: modulation transfer function, national image interpretability rating scale, signal to noise ratio, satellite telescope performance evaluation
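The NIIRS prediction step can be illustrated with the widely used version 4 of the GIQE (the "modified GIQE" referenced in the abstract may use different coefficients). Apart from the 25 cm GSD and the SNR of 130 reported above, all input values below (relative edge response RER, mean edge overshoot H, noise gain G) are assumptions for illustration:

```python
import math

def giqe4_niirs(gsd_m, rer, h, g, snr):
    """NIIRS prediction per GIQE version 4; GSD is converted to inches.
    The coefficient pair (a, b) switches at RER = 0.9."""
    gsd_in = gsd_m * 100.0 / 2.54
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)

# GSD = 0.25 m and SNR = 130 are from the abstract; RER, H, G are assumed
niirs = giqe4_niirs(0.25, rer=0.9, h=1.1, g=1.0, snr=130)
```

With these assumed inputs the prediction lands in the low-to-mid 6 range, broadly consistent with the NIIRS of about 6.5 reported in the abstract.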
Procedia PDF Downloads 387
5557 Passing-On Cultural Heritage Knowledge: Entrepreneurial Approaches for a Higher Educational Sustainability
Authors: Ioana Simina Frincu
Abstract:
As institutional initiatives often fail to provide good practices when it comes to heritage management, or to adapt to the changing environment in which they function and to the audiences they address, private actions represent viable strategies for sustainable knowledge acquisition. Information dissemination to future generations is one of the key aspects of preserving cultural heritage and is successfully feasible even in the absence of original artifacts. Combined with the (re)discovery of the natural landscape, open-air exploratory approaches (archeoparks) versus an enclosed, monodisciplinary, rigid framework (traditional museums) are more likely to 'speak the language' of a larger number of people, belonging to a variety of categories, ages, and professions. Interactive sites are efficient ways of stimulating heritage awareness and increasing the number of visitors of non-interactive/static cultural institutions owning original pieces of history, delivering specialized information, and making continuous efforts to preserve historical evidence (relics, manuscripts, etc.). It is high time entrepreneurs took over the role of promoting cultural heritage, albeit under a more commercial yet more attractive form (business). Inclusive, participatory activities conceived by experts from different fields (history, anthropology, tourism, sociology, business management, integrative sustainability, etc.) have better chances to ensure long-term cultural benefits for both adults and children, especially when and where the educational discourse fails. These unique self-experience leisure activities, which offer everyone the opportunity to recreate history by him- or herself, to relive the ancestors’ way of living, surviving and exploring, should be regarded not as pseudo-scientific approaches but as important pre-steps to museum experiences.
In order to support this theory, focus will be laid on two different examples: one dynamic, in the outdoors (the Boario Terme Archeopark in Italy), and one experimental, held indoors (the reconstruction of the Neolithic sanctuary of Parta, Romania, as part of a transdisciplinary academic course), and their impact on young generations. The conclusion of this study shows that the declining engagement of youth (students) in discovering and understanding history, archaeology, and heritage can be revived by entrepreneurial projects.
Keywords: archeopark, educational tourism, open air museum, Parta sanctuary, prehistory
Procedia PDF Downloads 143
5556 Uncontrollable Inaccuracy in Inverse Problems
Authors: Yu Menshikov
Abstract:
In this paper, the influence of errors in experimentally obtained derivatives of a function at the initial time (uncontrollable inaccuracy) on the results of the inverse problem solution is investigated. It is shown that these errors distort the inverse problem solution, as a rule, near the beginning of the interval on which the solution is analyzed. Several methods for removing the influence of uncontrollable inaccuracy are suggested.
Keywords: inverse problems, filtration, uncontrollable inaccuracy
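The "filtration" keyword suggests smoothing the experimental derivative data before feeding it to the inversion. As a hedged illustration (not the paper's actual method), a simple moving-average filter applied to noisy samples of a derivative reduces the error that would otherwise propagate into the inverse solution near the start of the interval; the signal, noise level, and window size below are all assumptions:

```python
import math
import random

def moving_average(xs, w):
    """Centered moving average: a basic filtration of noisy samples."""
    n, half = len(xs), w // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

# Noisy experimental samples of a derivative f'(t) = cos(t) near t = 0
rng = random.Random(1)
ts = [i * 0.01 for i in range(200)]
noisy = [math.cos(t) + rng.gauss(0.0, 0.2) for t in ts]
smooth = moving_average(noisy, 21)

err_raw = max(abs(v - math.cos(t)) for v, t in zip(noisy, ts))
err_smooth = max(abs(v - math.cos(t)) for v, t in zip(smooth, ts))
```

The filtered series tracks the true derivative much more closely, which is the effect any filtration step aims for before the inverse problem is solved.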
Procedia PDF Downloads 512
5555 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions.
This suggests that the same model can be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This strongly violates what common models suggest, where a person starting at, for example, +8 would first move towards 0 instead of jumping directly to -8. We also observed social influence: people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. The model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
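The “continuous opinion” encoding described in this abstract is simple to reproduce. The sketch below is a hypothetical illustration in Python, not the authors’ analysis code: it shows the mapping from a natural-language answer plus a 1-10 certainty onto the [-10, +10] scale, together with the qualitative direction of social influence the study reports.

```python
# Sketch of the "continuous opinion" encoding described above.
# Helper names are hypothetical; the study's analysis code is not public.

def continuous_opinion(opinion: str, certainty: int) -> int:
    """Map a natural-language answer plus a 1-10 certainty to [-10, +10]."""
    sign = {"agree": 1, "disagree": -1}[opinion]
    if not 1 <= certainty <= 10:
        raise ValueError("certainty must be between 1 and 10")
    return sign * certainty

def observed_update(before: int, peer: str) -> str:
    """Qualitative direction of change suggested by the reported findings:
    exposure to 'agree' tends to push continuous opinion up, 'disagree' down,
    with the magnitude of certainty largely preserved."""
    return "up" if peer == "agree" else "down"
```

For example, a participant answering “disagree” with certainty 3 is encoded as -3.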
Procedia PDF Downloads 113
5554 Modeling and Design of E-mode GaN High Electron Mobility Transistors
Authors: Samson Mil'shtein, Dhawal Asthana, Benjamin Sullivan
Abstract:
The wide energy gap of GaN is the major parameter justifying the design and fabrication of high-power electronic components made of this material. However, the existence of a piezoelectric sheet charge at the AlGaN/GaN interface complicates the control of carrier injection into the intrinsic channel of GaN HEMTs (High Electron Mobility Transistors). As a result, most of the transistors created as R&D prototypes and all of the designs used for mass production are D-mode devices, which introduces challenges in the design of integrated circuits. This research presents the design and modeling of an E-mode GaN HEMT with a very low turn-on voltage. The proposed device includes two critical elements allowing the transistor to achieve zero conductance across the channel at Vg = 0V. The first is an extremely thin, 2.5nm intrinsic Ga₀.₇₄Al₀.₂₆N spacer layer. The added spacer layer does not create piezoelectric strain but rather elastically follows the variations of the crystal structure of the adjacent GaN channel. The second important factor is the design of a gate metal with a high work function. The use of a gate metal with a work function greater than 5.3eV (Ni in this research) positioned on top of n-type doped (Nd=10¹⁷cm⁻³) Ga₀.₇₄Al₀.₂₆N creates the necessary built-in potential, which controls the injection of electrons into the intrinsic channel as the gate voltage is increased. The 5µm long transistor with a 0.18µm long gate and a channel width of 30µm operates at Vd=10V. At Vg=1V, the device reaches a maximum drain current of 0.6mA, which indicates a high current density. The presented device is operational at frequencies greater than 10GHz and exhibits a stable transconductance over the full range of operational gate voltages.
Keywords: compound semiconductors, device modeling, enhancement mode HEMT, gallium nitride
Procedia PDF Downloads 265
5553 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equal superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates.
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle takes the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By applying the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
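As a rough classical illustration of the oracle’s predicate and of the threshold-lowering outer loop described in this abstract (the names and structure below are mine, not the paper’s Q# circuits), one can mock the marking condition in ordinary Python: a candidate tour is “marked” exactly when it is a valid Hamiltonian cycle whose total weight is below the current best cost.

```python
from itertools import permutations

# Classical mock of the oracle predicate described above (hypothetical names).
# On a quantum computer this predicate is what the Toffoli-based edge weight
# adder, uniqueness checker, and comparator circuits compute in superposition.

def tour_cost(weights, tour):
    """Sum the edge weights around the cycle, returning to the start node."""
    n = len(tour)
    return sum(weights[tour[i]][tour[(i + 1) % n]] for i in range(n))

def oracle(weights, tour, threshold):
    """True iff `tour` visits every node exactly once and costs < threshold."""
    is_hamiltonian = sorted(tour) == list(range(len(weights)))
    return is_hamiltonian and tour_cost(weights, tour) < threshold

def minimum_cycle_cost(weights):
    """Classical analogue of the repeated-Grover loop: keep lowering the
    threshold until no marked (cheaper) tour remains."""
    n = len(weights)
    best = float("inf")
    improved = True
    while improved:
        improved = False
        for perm in permutations(range(1, n)):  # fix node 0 as the start
            tour = [0] + list(perm)
            if oracle(weights, tour, best):
                best = tour_cost(weights, tour)
                improved = True
    return best
```

The classical version enumerates all tours; the quantum circuit instead queries this predicate in superposition, gaining Grover’s quadratic speedup in the number of oracle calls.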
Procedia PDF Downloads 195
5552 The Decision-Making Mechanisms of Tax Regulations
Authors: Nino Pailodze, Malkhaz Sulashvili, Vladimer Kekenadze, Tea Khutsishvili, Irma Makharashvili, Aleksandre Kekenadze
Abstract:
Among the important problems that Georgia must solve in the near future, the most important is economic stability, which rests on fiscal policy and the proper definition of its directions. The main source of budget revenue is the national income. The State uses taxes, loans and emission to draw on the national income, and among these the principal instrument is taxes. Besides fulfilling the fiscal function of funding the budget, tax systems also successfully implement economic and social development and the regulatory functions of foreign economic relations. A tax is a mandatory, unconditional monetary payment to the budget made by a taxpayer in accordance with the Tax Code, based on the necessary, nonequivalent and gratuitous character of the payment. Taxes are national and local. National taxes are the taxes provided for under the Code, the payment of which is mandatory across the whole territory of Georgia. Local taxes are the taxes provided for under the Code and introduced by normative acts of local self-government representative authorities (within marginal rates), the payment of which is mandatory within the territory of the relevant self-governing unit. National taxes play the leading role in tax systems, but local taxes also have an important role: a considerable part of the budget is formed precisely by means of local taxes. The national taxes are income tax, profit tax, value added tax (VAT), excise tax and import duty; property tax is a local tax. The property tax is one of the significant taxes in Georgia. The paper deals with the taxation mechanism that has operated in Georgia, which has a great influence on financial accounting. Comparing foreign legislation with Georgian legislation, we discuss the opportunity of using foreign experience, and we suggest recommendations for improving the tax system in financial accounting.
In addition to accounting, which is regulated according to International Accounting Standards, there is tax accounting, which is regulated by the Tax Code and various legal orders/regulations of the Minister of Finance. Compliance with these rules is controlled by the tax authority, the Revenue Service. The tax burden and tax rates are directly related to the expenditures of the state from the first day of its emergence. The fiscal policy of the state comprises both state expenditure and taxation decisions. In order to achieve the best and most effective mobilization of funds, the government’s primary task is to decide on the rules of taxation. The function of a tax reveals the substance of the act. Taxes have the following functions: the distribution or fiscal function, and control and regulatory functions. Foreign tax systems evolved under the influence of different economic, political and social conditions. Tax systems differ greatly from each other in their taxes, structure, collection methods, rates, levels of fiscal authority, tax bases, spheres of action and tax breaks.
Keywords: international accounting standards, financial accounting, tax systems, financial obligations
Procedia PDF Downloads 245
5551 Rheological Study of Natural Sediments: Application in Filling of Estuaries
Authors: S. Serhal, Y. Melinge, D. Rangeard, F. Hage Chehadeh
Abstract:
Filling of estuaries is an international problem that can cause economic and environmental damage. This work aims to study the rheological structuring mechanisms of natural sedimentary liquid-solid mixtures in estuaries in order to better understand their filling. The estuary of the Rance river, located in Brittany, France, is particularly targeted by the study. The aim is to provide answers on the rheological behavior of natural sediments by detecting the structural factors influencing the rheological parameters, so that we can better understand the filling of estuarine areas and, especially, consider sustainable ‘cleansing’ solutions for these areas. The sediments were collected from the trap of Lyvet in the Rance estuary. This trap was created by the association COEUR (Comité Opérationnel des Elus et Usagers de la Rance) in 1996 in order to facilitate the cleansing of the estuary. It creates a privileged area for the deposition of sediments and consequently makes the cleansing of the estuary easier. We began our work with a preliminary study to establish the trend of the rheological behavior of the suspensions and to specify the dormant phase which precedes the beginning of their biochemical reactivity. We then highlight the visco-plastic character at early age using a Kinexus rheometer with plate-plate geometry. This rheological behavior of the suspensions is represented by the Bingham model, whose dynamic yield stress and viscosity can be functions of volume fraction, granular extent, and chemical reactivity. The evolution of the viscosity as a function of the solid volume fraction is modeled by the Krieger-Dougherty model. On the other hand, the analysis of the dynamic yield stress showed a fairly strong functional link with the solid volume fraction.
Keywords: estuaries, rheological behavior, sediments, Kinexus rheometer, Bingham model, viscosity, yield stress
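As a hedged illustration of the two constitutive laws named in this abstract, the sketch below encodes the Bingham relation, tau = tau_0 + mu_p * gamma_dot, and the Krieger-Dougherty viscosity law, eta = eta_m * (1 - phi/phi_max)^(-[eta]*phi_max). The default parameter values are illustrative placeholders, not the study’s fitted values.

```python
# Minimal sketch of the Bingham and Krieger-Dougherty relations named above.
# Parameter values are illustrative, not measured from the Rance sediments.

def bingham_stress(yield_stress, plastic_viscosity, shear_rate):
    """Bingham model: tau = tau_0 + mu_p * gamma_dot (valid when flowing)."""
    return yield_stress + plastic_viscosity * shear_rate

def krieger_dougherty(eta_medium, phi, phi_max=0.64, intrinsic_viscosity=2.5):
    """Krieger-Dougherty: eta = eta_m * (1 - phi/phi_max)^(-[eta]*phi_max).

    phi is the solid volume fraction; phi_max the maximum packing fraction;
    the defaults (random close packing, hard spheres) are placeholder values.
    """
    if phi >= phi_max:
        raise ValueError("solid volume fraction must be below the packing limit")
    return eta_medium * (1.0 - phi / phi_max) ** (-intrinsic_viscosity * phi_max)
```

The Krieger-Dougherty relation reduces to the medium viscosity at phi = 0 and diverges as phi approaches phi_max, which is the qualitative behaviour the paper models against measured solid volume fractions.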
Procedia PDF Downloads 165
5550 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language, due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnosis and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science
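The grid search step can be sketched in plain Python. The study used scikit-learn’s GridSearchCV on the MLP’s hyperparameters; the exhaustive-combination logic it implements looks roughly like this (the parameter names and scoring function below are illustrative assumptions, not the study’s actual grid):

```python
from itertools import product

# Plain-Python sketch of the hyperparameter grid search step described above.
# A hypothetical scoring function stands in for cross-validated accuracy.

def grid_search(param_grid, score_fn):
    """Evaluate every combination in the grid and return the best one."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)  # in practice: mean cross-validation score
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

For example, with a grid of hidden-layer sizes and learning rates, every pairing is scored once and the highest-scoring configuration is kept, which is exactly how GridSearchCV selects the reported optimal values.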
Procedia PDF Downloads 70
5549 Budgetary Performance Model for Managing Pavement Maintenance
Authors: Vivek Hokam, Vishrut Landge
Abstract:
An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all the needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking list of sections for maintenance based on several factors. In priority setting, difficult decisions are required in the selection of sections for maintenance: is it more important to repair a section with poor functional condition (which includes an uncomfortable ride, etc.) or one with poor structural condition, i.e., a section in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, realistic data must be fitted into the formulation.
Each type of repair is quantified over a number of stretches, taking 1000 m as one stretch; the road section under study is 3750 m long. These quantities enter an objective function that maximizes the number of repairs per stretch. The distresses observed in this stretch are potholes, surface cracks, rutting and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures currently followed are based on subjective judgments. Hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits of the road network with respect to vehicle operating cost. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
Keywords: budget, maintenance, deterioration, priority
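A toy version of the budget-constrained formulation described in this abstract can be sketched as a small integer programme solved by brute force. The unit costs, budget, and per-type stretch limit below are illustrative assumptions, not the paper’s coefficients, which come from measured distress data; a real formulation would use an LP/IP solver rather than enumeration.

```python
from itertools import product

# Toy version of the budget-constrained objective described above: choose how
# many stretches receive each repair type so as to maximise the number of
# repairs done without exceeding the budget. All numbers are illustrative.

def best_repair_plan(unit_costs, max_stretches, budget):
    """Brute-force the small integer programme:
    maximise sum(x_i)  subject to  sum(c_i * x_i) <= budget,
                                   0 <= x_i <= max_stretches.
    """
    types = sorted(unit_costs)
    best_plan, best_count = None, -1
    for counts in product(range(max_stretches + 1), repeat=len(types)):
        cost = sum(c * unit_costs[t] for t, c in zip(types, counts))
        if cost <= budget and sum(counts) > best_count:
            best_plan, best_count = dict(zip(types, counts)), sum(counts)
    return best_plan, best_count
```

With 1000 m stretches, the 3750 m section corresponds to roughly four stretches, hence the very small search space in this sketch.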
Procedia PDF Downloads 210
5548 DEKA-1 a Dose-Finding Phase 1 Trial: Observing Safety and Biomarkers using DK210 (EGFR) for Inoperable Locally Advanced and/or Metastatic EGFR+ Tumors with Progressive Disease Failing Systemic Therapy
Authors: Spira A., Marabelle A., Kientop D., Moser E., Mumm J.
Abstract:
Background: Both interleukin-2 (IL-2) and interleukin-10 (IL-10) have been extensively studied for their stimulatory function on T cells and their potential to obtain sustainable tumor control in RCC, melanoma, lung, and pancreatic cancer as monotherapy, as well as in combination with PD-1 blockers, radiation, and chemotherapy. While approved, IL-2 retains significant toxicity, preventing its widespread use. The significant efforts undertaken to uncouple IL-2 toxicity from its anti-tumor function have been unsuccessful, and the early-phase clinical safety observed with PEGylated IL-10 was not confirmed in a blinded Phase 3 trial. Deka Biosciences has engineered a novel molecule coupling wild-type IL-2 to a high-affinity variant of Epstein-Barr virus (EBV) IL-10 via a scaffold (scFv) that binds to epidermal growth factor receptors (EGFR). This patented molecule, termed DK210 (EGFR), is retained at high levels within the tumor microenvironment for days after dosing. In addition to overlapping and non-redundant anti-tumor functions, IL-10 reduces IL-2-mediated cytokine release syndrome risks and inhibits IL-2-mediated T regulatory cell proliferation. Methods: DK210 (EGFR) is being evaluated in an open-label, dose-escalation (Phase 1) study with 5 monotherapy dose levels (0.025-0.3 mg/kg) and expansion cohorts in combination with PD-1 blockers, radiation, or chemotherapy in patients with advanced solid tumors overexpressing EGFR. Key eligibility criteria include 1) confirmed progressive disease on at least one line of systemic treatment, 2) EGFR overexpression or amplification documented in histology reports, 3) a window of at least 4 weeks or 5 half-lives since the last treatment, and 4) absence of long QT syndrome, multiple myeloma, multiple sclerosis, myasthenia gravis, or uncontrolled infectious, psychiatric, neurologic, or cancer disease.
Plasma and tissue samples will be investigated for pharmacodynamic and predictive biomarkers and genetic signatures associated with IFN-gamma secretion, aiming to select subjects for treatment in Phase 2. Conclusion: Through the successful coupling of wild-type IL-2 with a high-affinity IL-10 targeted directly to the tumor microenvironment, DK210 (EGFR) has the potential to harness IL-2 and IL-10’s known anti-cancer promise while reducing immunogenicity and toxicity risks, enabling safe concomitant cytokine treatment with other anti-cancer modalities.
Keywords: cytokine, EGFR overexpression, interleukin-2, interleukin-10, clinical trial
Procedia PDF Downloads 90
5547 Collaborative Online International Learning with Different Learning Goals: A Second Language Curriculum Perspective
Authors: Andrew Nowlan
Abstract:
During the Coronavirus pandemic, collaborative online international learning (COIL) emerged as an alternative to overseas sojourns. However, now that face-to-face classes have resumed and students are studying abroad, the rationale for doing COIL is not always clear amongst educators and students. Also, the logistics of COIL become increasingly complicated when participants involved in a potential collaboration have different second language (L2) learning goals. In this paper, the researcher will report on a study involving two bilingual, cross-cultural COIL courses between students at a university in Japan and those studying in North America, from April to December, 2022. The students in Japan were enrolled in an intercultural communication class in their L2 of English, while the students in Canada and the United States were studying intermediate Japanese as their L2. Based on a qualitative survey and journaling data received from 31 students in Japan, and employing a transcendental phenomenological research design, the researcher will highlight the students’ essence of experience during COIL. Essentially, students benefited from the experience through improved communicative competences and increased knowledge of the target culture, even when the L2 learning goals between institutions differed. Students also reported that the COIL experience was effective in preparation for actual study abroad, as opposed to a replacement for it, which challenges the existing literature. Both educators and administrators will be exposed to the perceptions of Japanese university students towards COIL, which could be generalized to other higher education contexts, including those in Southeast Asia. 
Readers will also be exposed to ideas for developing more effective pre-departure study abroad programs and domestic intercultural curriculum through COIL, even when L2 learning goals may differ between participants.
Keywords: collaborative online international learning, study abroad, phenomenology, EdTech, intercultural communication
Procedia PDF Downloads 85
5546 Microbiota Effect with Cytokine in HL and NHL Patient Groups
Authors: Ekin Ece Gürer, Tarık Onur Tiryaki, Sevgi Kalayoğlu Beşışık, Fatma Savran Oğuz, Uğur Sezerman, Fatma Erdem, Gülşen Günel, Dürdane Serap Kuruca, Zerrin Aktaş, Oral Öncül
Abstract:
Aim: Chemotherapy treatment in Hodgkin Lymphoma (HL) and Non-Hodgkin Lymphoma (NHL) causes gastrointestinal epithelial damage, disrupts the intestinal microbiota balance and causes dysbiosis. Our study aimed to show the effect of the damage caused by chemotherapy on the microbiota, and the effect of the changed microbiota flora on the course of the disease. Materials and Methods: Seven adult HL and seven adult NHL patients to be treated with chemotherapy were included in the study. Stool samples were taken twice, before chemotherapy treatment and after the third course of treatment. Samples were sequenced using the Next Generation Sequencing (NGS) method after nucleic acid isolation. OTU tables were prepared using NCBI blastn version 2.0.12 according to the NCBI general 16S bacterial taxonomy reference dated 10.08.2021. Alpha diversity was calculated from the generated OTU tables, and graphics were created, with R Statistical Computing Language version 4.0.4 (readr, phyloseq, microbiome, vegan, descr and ggplot2 packages). Statistical analyses were also performed using R version 4.0.4 and RStudio IDE 1.4 (tidyverse, readr, xlsx and ggplot2 packages). Expression of IL-12 and IL-17 cytokines was measured by rtPCR twice, before and after treatment. Results: In HL patients, a significant decrease was observed in the flora of the genus Ruminococcaceae_UCG-014 (p=0.036) and of an undefined Ruminococcaceae_UCG-014 species (p=0.036) compared to pre-treatment. When the post-treatment microbiota of HL patients were compared with healthy controls, a significant decrease was found in the genera Prevotella_7 (p=0.049) and Butyricimonas (p=0.006). In NHL patients, a significant decrease was observed in the flora of the genus Coprococcus_3 (p=0.015) and of an undefined Ruminoclostridium_5 species (p=0.046) compared to pre-treatment.
When the post-treatment microbiota of NHL patients were compared with healthy controls, a significant abundance of the class Bacilli (p=0.029) and a significant decrease in an undefined Alistipes species (p=0.047) were observed. While a decrease was observed in IL-12 cytokine expression compared to pre-treatment, an increase in IL-17 cytokine expression was detected. Discussion: Monitoring of the intestinal flora after chemotherapy treatment can serve as a guide in the treatment of the disease. It is thought that increasing the diversity of commensal bacteria can also positively affect the prognosis of the disease.
Keywords: Hodgkin lymphoma, non-Hodgkin lymphoma, microbiota, cytokines
Procedia PDF Downloads 113
5545 Statistical Convergence for the Approximation of Linear Positive Operators
Authors: Neha Bhardwaj
Abstract:
In this paper, we consider positive linear operators, study a Voronovskaya-type result for the operator, and then obtain an error estimate in terms of the higher-order modulus of continuity of the function being approximated, together with its A-statistical convergence. We also compute the corresponding rate of A-statistical convergence for the linear positive operators.
Keywords: Poisson distribution, Voronovskaya, modulus of continuity, A-statistical convergence
Procedia PDF Downloads 337
5544 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial uses. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are scripted in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have efficient access to this knowledge via concise and transparent summaries. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to show a remarkable performance when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand without any redundant formatting and difficult jargon.
Keywords: abstractive summarization, deep learning, natural language processing, patent document
Procedia PDF Downloads 127
5543 A Hybrid Block Multistep Method for Direct Numerical Integration of Fourth Order Initial Value Problems
Authors: Adamu S. Salawu, Ibrahim O. Isah
Abstract:
Direct solutions to several forms of fourth-order ordinary differential equations are not easily obtained without first reducing them to a system of first-order equations. Thus, numerical methods are being developed with the underlying techniques in the literature, which seek to approximate some classes of fourth-order initial value problems with admissible error bounds. Multistep methods present the great advantage of ease of implementation, but with the setback of several function evaluations at every stage of implementation. However, hybrid methods conventionally show a slightly higher order of truncation than any k-step linear multistep method, with the possibility of obtaining solutions at off-mesh points within the interval of solution. In the light of the foregoing, we propose the continuous form of a hybrid multistep method with a Chebyshev polynomial as basis function for the numerical integration of fourth-order initial value problems of ordinary differential equations. The basis function is interpolated and collocated at some points on the interval [0, 2] to yield a system of equations, which is solved to obtain the unknowns of the approximating polynomial. The continuous form obtained, together with its first and second derivatives, is evaluated at carefully chosen points to obtain the proposed block method needed to directly approximate fourth-order initial value problems. The method is analyzed for convergence. Implementation of the method is done by conducting numerical experiments on some test problems. The outcome of the implementation suggests that the method performs well on problems with oscillatory or trigonometric terms, since the approximations at several points on the solution domain did not deviate too far from the theoretical solutions.
The method also shows better performance compared with an existing hybrid method when implemented on a larger interval of solution.
Keywords: Chebyshev polynomial, collocation, hybrid multistep method, initial value problems, interpolation
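A minimal sketch of the Chebyshev collocation setup described in this abstract, in notation of my own choosing (the paper’s exact interpolation and collocation points, and the degree m, are not specified here):

```latex
% Approximate the solution of the fourth-order IVP
%   y^{(4)} = f(x, y, y', y'', y'''),  x \in [0, 2],
% by a finite Chebyshev expansion
y(x) \approx \sum_{j=0}^{m} a_j T_j(x).
% Collocating the differential equation at chosen points x_k \in [0, 2],
\sum_{j=0}^{m} a_j T_j^{(4)}(x_k)
  = f\!\left(x_k,\, y(x_k),\, y'(x_k),\, y''(x_k),\, y'''(x_k)\right),
% and interpolating y(x) at selected mesh points, yields a linear system
% in the unknown coefficients a_j; solving it defines the continuous
% scheme whose evaluation (with its first and second derivatives) at
% carefully chosen points gives the block method.
```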
Procedia PDF Downloads 129
5542 Redefining “Minor”: An Empirical Research on Two Biennials in Contemporary China
Authors: Mengwei Li
Abstract:
Since the 1990s, biennials, and large-scale transnational art exhibitions, have proliferated exponentially across the globe, particularly in Asia, Africa, and Latin America. It has spurred debates regarding the inclusion of "new art cultures" and the deconstruction of the mechanism of exclusion embedded in the Western monopoly on art. Hans Belting introduced the concept of "global art" in 2013 to denounce the West's privileged canons in art by emphasising the inclusion of art practices from alleged non-Western regions. Arguably, the rise of new biennial networks developed by these locations has contributed to the asserted "inclusion of new art worlds." However, phrases such as "non-Western" and "beyond Euro-American" attached to these discussions raise the question of non- or beyond- in relation to whom. In this narrative, to become "integrated" and "equal" implies entry into the "core," a universal system in which preexisting authoritative voices define "newcomers" by what they are not. Possibly, if there is a global biennial system that symbolises a "universal language" of the contemporary art world, it is centered on the inherently dynamic yet asymmetrical interaction and negotiation between the "core" and the rest of the world's "periphery." Engaging with theories of "minor literature" developed by Deleuze and Guattari, this research proposes an epistemological framework to comprehend the global biennial discourse since the 1990s. Using this framework, this research looks at two biennial models in China: the 13th Shanghai Biennale, which was organised in the country's metropolitan art centre, and the 2nd Yinchuan Biennale, which was inaugurated in a geographically and economically marginalised city compared to domestic centres. 
By analysing how these two biennials from different locations in China positioned themselves and conveyed their local profiles through the universal language of the biennial, this research identifies a potential "minor" positionality within the global biennial discourse from China's perspective.
Keywords: biennials, China, contemporary, global art, minor literature
Procedia PDF Downloads 91
5541 Burn/Traumatic Scar Maturation Using Autologous Fat Grafts + SVF
Authors: Ashok K. Gupta
Abstract:
Over the past few decades, since the bio-engineering revolution, autologous cell therapy (ACT) has become a rapidly evolving field. Currently, this form of therapy has broad applications in modern medicine and plastic surgery, ranging from the treatment and improvement of wound healing to life-saving operations. A study was conducted on 50 patients with disfiguring and deforming post-burn scars, treated by injection of extracted, refined adipose tissue grafts with their unique stem cell properties. To compare outcomes, a control group of 20 such patients was treated with conventional skin or soft-tissue flaps or skin grafting, and a further control group of 10 was treated with more advanced microsurgical techniques such as prefabricated flaps, prelaminated flaps, or free flaps. Fat volume and graft survival over the follow-up period were assessed radiologically, using MRI, and clinically: survival of the autograft and objective parameters for scar elasticity were evaluated 3 to 9 months postoperatively. Recently, an enzyme involved in collagen crosslinking in fibrotic tissue, lysyl hydroxylase 2 (LH2), was identified. This enzyme is normally active in bone and cartilage but hardly in the skin. It has been found to be highly expressed in scar tissue and subcutaneous fat, in contrast to the dermis, where it is hardly expressed. Adipose tissue-derived stem cell injections are an effective method for treating various extensive post-burn scar deformities, making it possible to re-create the lost sub-dermal tissue and improve the function of involved joint movements. Keywords: adipose tissue-derived stem cell injections, treatment of various extensive post-burn scar deformities, re-create the lost sub-dermal tissue, improvement in function of involved joint movements
5540 Neural Network-Based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which makes broad coverage difficult and the process time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project developed a hybrid model that utilizes several deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values for the model to be identified. This approach proved effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia. Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
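The final stage of the pipeline described above — feeding model outputs into an MLP whose hyperparameters are tuned with Grid Search CV — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors here are synthetic stand-ins for the ResNet50/VGG16/YOLOv8 outputs and the additional input data, and the hyperparameter grid is hypothetical.

```python
# Minimal sketch of an MLP risk classifier tuned with Grid Search CV.
# Assumptions: X is a stand-in for concatenated CNN outputs plus other
# input data; the actual feature extraction is not shown here.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic features: each row plays the role of per-model handwriting
# scores plus auxiliary inputs; y is a synthetic binary risk label.
X = rng.normal(size=(200, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Grid Search CV over a (hypothetical) MLP hyperparameter grid.
param_grid = {
    "hidden_layer_sizes": [(16,), (32, 16)],
    "alpha": [1e-4, 1e-3],
}
search = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid, cv=3)
search.fit(X_train, y_train)

print(search.best_params_)
print(round(search.score(X_test, y_test), 3))
```

`GridSearchCV` exhaustively cross-validates every combination in the grid and refits the best configuration on the full training set, which is the "fine-tuning" step the abstract refers to.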