Search results for: cartilage integrity parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2820

480 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process

Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei

Abstract:

Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. No published studies appear to exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of SHMP concentration (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of process cheese (as indicated by the decrease in the loss tangent parameter from small-amplitude oscillatory rheology, the degree of flow, and the melt area from the Schreiber test) decreased with an increase in SHMP concentration. Holding time also led to a slight reduction in meltability. Hardness of process cheese increased as SHMP concentration increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, shifted to lower pH values with increasing SHMP concentration. The insoluble Ca and the total and insoluble P contents increased as SHMP concentration increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because some of the added SHMP formed soluble salts. These results suggest that SHMP chelated the residual colloidal Ca phosphate and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN. Increasing the SHMP concentration helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of process cheese.
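The loss tangent used above as a meltability indicator is simply the ratio of loss to storage modulus from oscillatory rheology. A minimal sketch, with hypothetical moduli rather than values from this study:

```python
def loss_tangent(g_loss, g_storage):
    """Loss tangent tan(delta) = G'' / G' from oscillatory rheology.
    Values above 1 indicate liquid-like (more meltable) behaviour."""
    if g_storage <= 0:
        raise ValueError("storage modulus must be positive")
    return g_loss / g_storage

# Hypothetical moduli (Pa) near melting temperature for two SHMP levels
low_shmp = loss_tangent(g_loss=900.0, g_storage=600.0)   # 1.5, flows readily
high_shmp = loss_tangent(g_loss=400.0, g_storage=800.0)  # 0.5, resists flow
```

A decrease in tan δ with SHMP concentration corresponds to the reduced meltability the abstract reports.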

Keywords: Iranian processed cheese, emulsifying salt, rheology, texture

Procedia PDF Downloads 426
479 ‘Only Amharic or Leave Quick!’: Linguistic Genocide in the Western Tigray Region of Ethiopia

Authors: Merih Welay Welesilassie

Abstract:

Language is a potent instrument that not only serves communication but also plays a pivotal role in shaping our cultural practices and identities. The right to choose one's language is a fundamental human right that helps safeguard the integrity of both personal and communal identities. Language holds immense significance in Ethiopia, a nation with a diverse linguistic landscape in which language extends beyond mere communication to delineate administrative boundaries. Consequently, depriving Ethiopians of their linguistic rights represents a multifaceted punishment, more complex than a food embargo. In the aftermath of the civil war that shook Ethiopia in November 2020, displacing millions and resulting in the loss of hundreds of thousands of lives, concerns have been raised about the preservation of the indigenous Tigrayan language and culture, particularly following the annexation of western Tigray into the Amhara region and the implementation of an Amharic-only language and culture education policy. This scholarly inquiry explores the intricacies surrounding the Amhara regional state's prohibition of Tigrayans' indigenous language and culture and the subsequent imposition of a monolingual and monocultural Amhara language and culture in western Tigray. The study adopts linguistic genocide as a conceptual framework to gain deeper insight into the factors that contributed to and facilitated this significant linguistic and cultural shift. The research was conducted by interviewing ten teachers selected through snowball sampling. Additionally, document analysis was performed to support the findings. The findings revealed that the push for linguistic and cultural assimilation was driven by various political and economic factors and by the desire to promote a single language and culture policy. This process, often referred to as ‘Amharanization,’ aimed to homogenize the culture and language of the society.
The Amhara authorities have enacted several measures in pursuit of their objectives, including outlawing the Tigrigna language, punishing those who speak it, imposing the Amhara language and culture, mandating relocation, and even committing heinous acts that have inflicted immense physical and emotional suffering on members of the Tigrayan community. A comprehensive analysis of the contextual factors, actions, intentions, and consequences suggests that instances of linguistic genocide may be taking place in the western Tigray region. The present study sheds light on the severe consequences that can arise from implementing monolingual and monocultural policies in multilingual areas. Through thorough scrutiny of the implications of such policies, the study provides recommendations and directions for future research in this critical area.

Keywords: linguistic genocide, linguistic human right, mother tongue, Western Tigray

Procedia PDF Downloads 50
478 Multiple Version of Roman Domination in Graphs

Authors: J. C. Valenzuela-Tripodoro, P. Álvarez-Ruíz, M. A. Mateos-Camacho, M. Cera

Abstract:

The concept of Roman domination in graphs was introduced in 2004, initially inspired by the defensive strategy of the Roman Empire. An undefended place is a city in which no legions are established, whereas a strong place is a city in which two legions are deployed. This situation may be modeled by labeling the vertices of a finite simple graph with labels {0, 1, 2}, subject to the condition that any 0-vertex must be adjacent to at least one 2-vertex. Roman domination in graphs is thus a variant of classic domination. The main aim is to obtain such a labeling of the vertices of the graph with minimum cost, that is to say, with minimum weight (the sum of all vertex labels). Formally, a function f: V(G) → {0, 1, 2} is a Roman dominating function (RDF) in the graph G = (V, E) if f(u) = 0 implies that f(v) = 2 for at least one vertex v adjacent to u. The weight of an RDF is the positive integer w(f) = ∑_(v∈V) f(v). The Roman domination number, γ_R(G), is the minimum weight among all Roman dominating functions. Obviously, the set of vertices with a positive label under an RDF f is a dominating set in the graph, and hence γ(G) ≤ γ_R(G). In this work, we begin the study of a generalization of RDFs in which any undefended place must be defendable from a sudden attack by at least k legions, which can be deployed in the city itself or in any of its neighbours. A function f: V → {0, 1, . . . , k + 1} such that f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k, where AN(u) denotes the set of active neighbours (i.e., those with a positive label) of vertex u, is called a [k]-multiple Roman dominating function, denoted [k]-MRDF. The minimum weight of a [k]-MRDF in the graph G is the [k]-multiple Roman domination number ([k]-MRDN) of G, denoted by γ_[kR](G). First, we prove that the [k]-multiple Roman domination decision problem is NP-complete even when restricted to bipartite and chordal graphs, a question that had been resolved for other variants and called for generalization.
Given the difficulty of computing the exact value of the [k]-MRD number, even for particular families of graphs, we present several upper and lower bounds that permit estimating it with as much precision as possible. Finally, some graphs for which the exact value of this parameter is known are characterized.
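The defining condition of a [k]-MRDF can be checked mechanically. The sketch below is illustrative (the graph and labeling are not from the paper); it verifies f(N[u]) ≥ k + |AN(u)| for every vertex u with f(u) < k and computes the weight:

```python
def is_multiple_rdf(adj, f, k):
    """Check whether labeling f is a [k]-multiple Roman dominating function.

    adj: dict vertex -> set of neighbours (finite simple graph)
    f:   dict vertex -> label in {0, ..., k + 1}
    Condition: for every vertex u with f(u) < k,
        f(N[u]) >= k + |AN(u)|,
    where N[u] is the closed neighbourhood and AN(u) the set of
    active (positively labeled) neighbours of u.
    """
    for u, nbrs in adj.items():
        if f[u] < k:
            closed_sum = f[u] + sum(f[v] for v in nbrs)
            active = sum(1 for v in nbrs if f[v] > 0)
            if closed_sum < k + active:
                return False
    return True

def weight(f):
    """Weight of a labeling: sum of all vertex labels."""
    return sum(f.values())

# Path P3 with k = 1 (classic Roman domination): label the centre 2
adj = {0: {1}, 1: {0, 2}, 2: {1}}
f = {0: 0, 1: 2, 2: 0}
```

With k = 1 the condition reduces to the classic RDF requirement that every 0-vertex sees a 2-vertex, so the labeling above is valid with weight 2.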

Keywords: multiple roman domination function, decision problem np-complete, bounds, exact values

Procedia PDF Downloads 99
477 Oil-Oil Correlation Using Polar and Non-Polar Fractions of Crude Oil: A Case Study in Iranian Oil Fields

Authors: Morteza Taherinezhad, Ahmad Reza Rabbani, Morteza Asemani, Rudy Swennen

Abstract:

Oil-oil correlation is one of the most important issues in geochemical studies, as it enables oils to be classified genetically. Oil-oil correlation is generally based on the non-polar fractions of crude oil (e.g., saturate and aromatic compounds). Despite several advantages, the drawback of using these compounds is their susceptibility to secondary processes. The polar fraction of crude oil (e.g., asphaltenes) has characteristics similar to kerogen, and this structural similarity is preserved during migration, thermal maturation, biodegradation, and water washing. These structural characteristics can therefore serve as a useful correlation parameter, and asphaltenes from different reservoirs with the same genetic signatures can be concluded to have a similar origin. Hence, in this contribution, an integrated study using both the non-polar and polar fractions of oil was performed to exploit the merits of both. Five oil samples from oil fields in the Persian Gulf were studied. Structural characteristics of the extracted asphaltenes were investigated by Fourier transform infrared (FTIR) spectroscopy. Graphs based on aliphatic and aromatic compounds (the predominant compounds in the asphaltene structure) and on sulphoxide and carbonyl functional groups (representative of sulphur and oxygen abundance in asphaltenes) were used to compare asphaltene structures across samples. The non-polar fractions were analyzed by GC-MS. The asphaltene study showed that the oil samples comprise two oil families with distinct genetic characteristics: the first consists of the Salman and Reshadat samples, and the second of the Resalat, Siri E, and Siri D samples. To validate these results, biomarker parameters were employed, and this approach fully confirmed them.
Based on the biomarker analyses, both oil families have a marine source rock, with a marl source rock for the first family and a carbonate source rock for the second.

Keywords: biomarker, non-polar fraction, oil-oil correlation, petroleum geochemistry, polar fraction

Procedia PDF Downloads 126
476 FEM Models of Glued Laminated Timber Beams Enhanced by Bayesian Updating of Elastic Moduli

Authors: L. Melzerová, T. Janda, M. Šejnoha, J. Šejnoha

Abstract:

Two finite element (FEM) models are presented in this paper to address the random nature of the response of glued timber structures made of wood segments with variable elastic moduli evaluated from 3600 indentation measurements. This database served to create as many ensembles as there were segments in the tested beam. Statistics of these ensembles were then assigned to the corresponding beam segments, and the Latin hypercube sampling (LHS) method was used to perform 100 simulations, yielding an ensemble of 100 deflections subjected to statistical evaluation. A detailed geometrical arrangement of the individual segments in the laminated beam was considered in the construction of a two-dimensional FEM model subjected to four-point bending, in accordance with the laboratory tests. Since laboratory measurements of local elastic moduli may in general suffer from significant experimental error, it appears advantageous to exploit full-scale measurements of the timber beams, i.e., deflections, to improve the prior distributions with the help of Bayesian statistical methods. This, however, requires an efficient computational model for simulating the laboratory tests numerically. To this end, a simplified model based on Mindlin’s beam theory was established. The improved posterior distributions show that the most significant change in the Young’s modulus distribution takes place in the laminae in the most strained zones, i.e., in the top and bottom layers within the beam’s central region. The posterior distributions of the moduli of elasticity were subsequently utilized in the 2D FEM model and compared with the original simulations.
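The authors couple the Bayesian step to an FEM model; as a stand-alone illustration of the updating idea, the textbook conjugate normal-normal update shows how a measurement narrows a prior on a modulus. The numbers below are hypothetical, not from the study:

```python
import math

def bayes_update_normal(prior_mean, prior_sd, obs, obs_sd):
    """Conjugate update of a normal prior N(prior_mean, prior_sd^2)
    with one Gaussian observation obs of noise sd obs_sd.
    Returns the posterior (mean, sd); precisions simply add."""
    prior_prec = 1.0 / prior_sd ** 2
    obs_prec = 1.0 / obs_sd ** 2
    post_prec = prior_prec + obs_prec
    post_mean = (prior_mean * prior_prec + obs * obs_prec) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Hypothetical lamina modulus prior: 11 GPa +/- 2 GPa; an indirect
# (deflection-based) measurement suggests 13 GPa +/- 1 GPa
m, s = bayes_update_normal(11.0, 2.0, 13.0, 1.0)
```

The posterior mean moves towards the more precise observation and the posterior spread shrinks below both input spreads, which is the qualitative behaviour exploited in the paper.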

Keywords: Bayesian inference, FEM, four point bending test, laminated timber, parameter estimation, prior and posterior distribution, Young’s modulus

Procedia PDF Downloads 274
475 Estimating Water Balance at Beterou Watershed, Benin Using Soil and Water Assessment Tool (SWAT) Model

Authors: Ella Sèdé Maforikan

Abstract:

Sustainable water management requires quantitative information and knowledge of the spatiotemporal dynamics of the hydrological system within the basin. Several studies have investigated both surface water and groundwater in the Beterou catchment; however, there are few published papers on the application of SWAT modeling there. The objective of this study was to evaluate the performance of SWAT in simulating the water balance within the watershed. The input data consist of a digital elevation model, land use maps, a soil map, climatic data, and discharge records. The model was calibrated and validated using the Sequential Uncertainty Fitting (SUFI-2) approach. Calibration covered 1989 to 2006 with a four-year warm-up period (1985-1988), and validation covered 2007 to 2020. The goodness of fit was assessed using five indices: the Nash-Sutcliffe efficiency (NSE), the ratio of the root mean square error to the standard deviation of the measured data (RSR), percent bias (PBIAS), the coefficient of determination (R²), and the Kling-Gupta efficiency (KGE). Results showed that the SWAT model successfully simulated river flow in the Beterou catchment, with NSE = 0.79, R² = 0.80 and KGE = 0.83 for calibration, against NSE = 0.78, R² = 0.78 and KGE = 0.85 for validation, using site-based streamflow data. The relative error (PBIAS) ranged from -12.2% to 3.1%. The runoff curve number (CN2), moist bulk density (SOL_BD), baseflow alpha factor (ALPHA_BF), and available water capacity of the soil layer (SOL_AWC) were the most sensitive parameters. The study supports further research on uncertainty analysis, offers recommendations for model improvement, and provides an efficient means to improve rainfall and discharge measurement data.
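Two of the goodness-of-fit indices above have simple closed forms and can be sketched directly; the discharge values below are hypothetical, not the study's data (sign convention for PBIAS follows the common SWAT usage, positive meaning underestimation):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    NSE = 1 is a perfect fit; NSE <= 0 means no better than the mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def pbias(obs, sim):
    """Percent bias: 100 * sum(obs - sim) / sum(obs)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

# Hypothetical monthly discharges (m3/s)
obs = [10.0, 20.0, 30.0, 40.0]
sim = [12.0, 18.0, 29.0, 41.0]
```

Here the simulated series tracks the observations closely, so NSE is near 1 and PBIAS near zero, the same regime as the calibration scores reported above.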

Keywords: watershed, water balance, SWAT modeling, Beterou

Procedia PDF Downloads 47
474 Influence of Hydrophobic Surface on Flow Past Square Cylinder

Authors: S. Ajith Kumar, Vaisakh S. Rajan

Abstract:

In external flows, vortex shedding behind bluff bodies subjects a large number of engineering structures to unsteady loads that can result in structural failure; vortex shedding can even prove disastrous, as in the Tacoma Narrows Bridge collapse. Controlling vortex shedding is therefore necessary to reduce the unsteady forces acting on the bluff body. On circular cylinders, a hydrophobic surface replacing an otherwise no-slip surface is found to delay separation and drastically minimize the effects of vortex shedding. Flow over a square cylinder behaves differently, since separation can take place at either of the two corner separation points (front or rear). This study attempts to numerically elucidate the effect of a hydrophobic surface on flow over a square cylinder. A 2D numerical simulation has been performed to understand the effects of the slip surface on the flow past a square cylinder; the details of the numerical algorithm will be presented at the conference. A non-dimensional parameter, the Knudsen number, is defined to quantify the slip on the cylinder surface based on Maxwell’s slip model. The slip condition at the wall affects the vorticity distribution around the cylinder and the flow separation. In the numerical analysis, we observed that the hydrophobic surface enhances the shedding frequency and damps the amplitude of oscillations of the square cylinder. We also found that slip reduces the aerodynamic force coefficients, such as the coefficient of lift (CL) and the coefficient of drag (CD); hence, replacing the no-slip surface with a hydrophobic surface can be treated as an effective drag reduction strategy. The introduction of a hydrophobic surface could likewise be utilized to reduce vortex-induced vibrations (VIV) and is found to be an effective method of controlling VIV, thereby preventing structural failures.
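The shedding frequency such simulations report is typically extracted as the dominant peak of the lift-coefficient spectrum. A generic sketch on a synthetic signal (not the paper's data; the 2 Hz tone stands in for a CL history):

```python
import numpy as np

def shedding_frequency(cl, dt):
    """Estimate the dominant vortex-shedding frequency (Hz) from a lift
    coefficient time series sampled at interval dt, via the peak of the
    FFT amplitude spectrum (mean removed to ignore the zero bin)."""
    spectrum = np.abs(np.fft.rfft(cl - np.mean(cl)))
    freqs = np.fft.rfftfreq(len(cl), d=dt)
    return freqs[np.argmax(spectrum)]

# Synthetic lift signal oscillating at 2 Hz, sampled at 100 Hz for 10 s
t = np.arange(0.0, 10.0, 0.01)
cl = 0.8 * np.sin(2 * np.pi * 2.0 * t)
f_shed = shedding_frequency(cl, dt=0.01)
# Non-dimensionalise as the Strouhal number St = f * D / U for side
# length D and free-stream velocity U (both hypothetical here)
```

A slip surface that "enhances the shedding frequency" would show up as this peak moving to a higher frequency at fixed D and U.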

Keywords: drag reduction, flow past square cylinder, flow control, hydrophobic surfaces, vortex shedding

Procedia PDF Downloads 367
473 The Rise of Blue Water Navy and its Implication for the Region

Authors: Riddhi Chopra

Abstract:

Alfred Thayer Mahan described the sea as a ‘great common’ that serves as a medium for communication, trade, and transport. The seas of Asia are witnessing an intriguing historical anomaly: the rise of an indigenous maritime power against the backdrop of US domination of the region. As China transforms from an inward-leaning economy to an outward-leaning one, it has become increasingly dependent on the global sea; as a result, we witness an evolution in its maritime strategy from near-seas defense to far-seas deployment. It is not only patrolling international waters but has also built a network of civilian and military infrastructure across the disputed oceanic expanse. The paper analyses China’s reorientation from a coastal naval power to a blue water navy in an era of extensive globalisation. China’s actions have created a zone of high alert among its neighbors, such as Japan, the Philippines, Vietnam and North Korea. These nations are trying to align themselves so as to counter China’s growing brinkmanship, but China has pursued its claims through a carefully calibrated strategy in the region while shunning coercive measures taken by other powers. If China continues to expand its maritime boundaries, its neighbors, all smaller and weaker Asian nations, would be confined to a narrow band of sea along their coastlines. Hence, it is essential for the US to intervene and support its allies to offset Chinese supremacy. The paper provides a profound analysis of the disputes in the South China Sea and East China Sea, focusing on the Philippines and Japan respectively. Moreover, it gives an account of US involvement in the region and its alignment with its Asian allies. The geographic dynamics are said to breed a natural coalition dominating the strategic ambitions of China as well as the weak littoral states.
China has conducted behind-the-scenes diplomacy to persuade its neighbors to support its position on the territorial disputes. These efforts have succeeded in creating fault lines in ASEAN, thereby undermining regional unity and the ability to reach a consensus on the issue. Chinese diplomatic efforts have also forced the US to revisit its foreign policy and engage with players like Cambodia and Laos. The current scenario in the SCS points to a strengthening Chinese hold, seeking to outpace all others with no regard to international law. Chinese activities stand in contrast with US principles such as freedom of navigation, thereby signaling the US to take bold actions to prevent Chinese hegemony in the region. The paper ultimately seeks to explore the changing power dynamics among the various claimants, where a rival superpower like the US, pursuing the traditional policy of alliance formation, can play a decisive role in changing the status quo in the arena and consequently determine the future trajectory.

Keywords: China, East China Sea, South China Sea, USA

Procedia PDF Downloads 233
472 Batch and Fixed-Bed Studies of Ammonia Treated Coconut Shell Activated Carbon for Adsorption of Benzene and Toluene

Authors: Jibril Mohammed, Usman Dadum Hamza, Muhammad Idris Misau, Baba Yahya Danjuma, Yusuf Bode Raji, Abdulsalam Surajudeen

Abstract:

Volatile organic compounds (VOCs) have been reported to be responsible for many acute and chronic health effects and for environmental degradation such as global warming. In this study, a renewable and low-cost coconut shell activated carbon (PHAC) was synthesized and treated with ammonia (PHAC-AM) to improve its hydrophobicity and affinity towards VOCs. The removal efficiencies and adsorption capacities of the ammonia-treated activated carbon (PHAC-AM) for benzene and toluene were determined through batch and fixed-bed studies, respectively. The Langmuir, Freundlich and Temkin adsorption isotherms were tested for the adsorption process; the experimental data were best fitted by the Langmuir model and least fitted by the Temkin model, and the favourability and goodness of fit were validated by the equilibrium parameter (RL) and the root mean square deviation (RMSD). Judging by the deviation of the predicted values from the experimental values, the pseudo-second-order kinetic model described the adsorption kinetics better than the pseudo-first-order model for the two VOCs on PHAC and PHAC-AM. In the fixed-bed study, the effects of initial VOC concentration, bed height and flow rate on benzene and toluene adsorption were studied. The highest bed capacities of 77.30 and 69.40 mg/g were recorded for benzene and toluene, respectively, at 250 mg/l initial VOC concentration, 2.5 cm bed height and 4.5 ml/min flow rate. The results of this study revealed that the ammonia-treated activated carbon (PHAC-AM) is a sustainable adsorbent for the treatment of VOCs in polluted waters.
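The Langmuir model and the equilibrium parameter RL used above have standard forms. A minimal sketch with hypothetical fitted constants (not the paper's values):

```python
def langmuir_qe(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce),
    with qe and qmax in mg/g, Ce in mg/l, KL in l/mg."""
    return qmax * kl * ce / (1.0 + kl * ce)

def separation_factor(kl, c0):
    """Equilibrium (separation) parameter RL = 1 / (1 + KL * C0).
    0 < RL < 1 indicates favourable adsorption; RL > 1 unfavourable."""
    return 1.0 / (1.0 + kl * c0)

# Hypothetical fit for benzene on PHAC-AM (illustrative constants only)
rl = separation_factor(kl=0.05, c0=250.0)    # well inside (0, 1)
q = langmuir_qe(ce=50.0, qmax=80.0, kl=0.05)
```

An RL between 0 and 1 at the highest initial concentration (250 mg/l above) is the favourability check the abstract refers to.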

Keywords: volatile organic compounds, equilibrium and kinetics studies, batch and fixed bed study, bio-based activated carbon

Procedia PDF Downloads 217
471 Optimisation of Dyes Decolourisation by Bacillus aryabhattai

Authors: A. Paz, S. Cortés Diéguez, J. M. Cruz, A. B. Moldes, J. M. Domínguez

Abstract:

Synthetic dyes are extensively used in the paper, food, leather, cosmetics, pharmaceutical and textile industries. Wastewater resulting from their production poses several environmental problems. Improper disposal of their effluents has adverse impacts not only on colour but also on water quality (total organic carbon, biological oxygen demand, chemical oxygen demand, suspended solids, salinity, etc.), on flora (inhibition of photosynthetic activity), on fauna (toxic, carcinogenic and mutagenic effects) and on human health. The aim of this work is to optimize the decolourisation of different types of dyes by Bacillus aryabhattai. Initially, the dyes (Indigo Carmine, Coomassie Brilliant Blue and Remazol Brilliant Blue R) and suitable culture media (Nutritive Broth, Luria Bertani Broth and Trypticasein Soy Broth) were selected. Then, a central composite design (CCD) was employed to optimise the process and analyse the significance of each abiotic parameter. Three process variables (temperature, salt concentration and agitation) were investigated in the CCD at 3 levels with 2 star points. A total of 23 experiments were carried out according to the full design, consisting of 8 factorial experiments (coded to the usual ±1 notation), 6 axial experiments (on the axes at a distance of ±α from the centre), and 9 replicates (at the centre of the experimental domain). The experimental results demonstrate the efficiency of this strain in removing the tested dyes in the 3 media studied, although Trypticasein Soy Broth (TSB) was the most suitable medium. Indigo Carmine and Coomassie Brilliant Blue at the maximal tested concentration of 150 mg/l were completely decolourised, while acceptable removal was observed for the more recalcitrant dye Remazol Brilliant Blue R at a concentration of 50 mg/l.
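The 23-run layout described above (8 factorial, 6 axial, 9 centre replicates for 3 factors) can be generated in coded units as follows; this is a generic CCD sketch, not the authors' actual factor ranges, and it assumes the usual rotatable choice α = (2^k)^(1/4):

```python
from itertools import product

def ccd_points(n_factors=3, n_center=9):
    """Central composite rotatable design in coded units:
    2^k factorial points at +/-1, 2k axial points at +/-alpha,
    and n_center centre replicates. alpha = (2**k)**0.25 makes
    the design rotatable."""
    alpha = (2 ** n_factors) ** 0.25
    factorial = [list(p) for p in product((-1.0, 1.0), repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for sign in (-alpha, alpha):
            pt = [0.0] * n_factors
            pt[i] = sign
            axial.append(pt)
    center = [[0.0] * n_factors for _ in range(n_center)]
    return factorial + axial + center

design = ccd_points()  # 8 + 6 + 9 = 23 runs, matching the abstract
```

Each coded point would then be mapped linearly onto the real temperature, salt concentration and agitation ranges before running the experiments.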

Keywords: Bacillus aryabhattai, dyes, decolourisation, central composite design

Procedia PDF Downloads 214
470 Analyzing Concrete Structures by Using Laser Induced Breakdown Spectroscopy

Authors: Nina Sankat, Gerd Wilsch, Cassian Gottlieb, Steven Millar, Tobias Guenther

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a combination of laser ablation and optical emission spectroscopy which, in principle, can simultaneously analyze all elements of the periodic table. Materials can be analyzed in terms of chemical composition in a two-dimensional, time-efficient and only minimally destructive manner. These advantages predestine LIBS as a monitoring technique in the field of civil engineering. The decreasing service life of concrete infrastructure is a continuously growing problem: a variety of intruding, harmful substances can damage the reinforcement or the concrete itself, and to ensure a sufficient service life, regular monitoring of the structure is necessary. LIBS offers many applications for a successful examination of the condition of concrete structures, among them the 2D evaluation of chlorine, sodium and sulfur concentrations, the identification of carbonation depths and the representation of the heterogeneity of concrete. LIBS obtains this information using a pulsed laser delivering short pulses with energies of a few mJ, which is focused on the surface of the analyzed specimen; only optical access is needed. Because of the high power density (a few GW/cm²), a minimal amount of material is vaporized and transformed into a plasma. This plasma emits light depending on the chemical composition of the vaporized material, and by analyzing the emitted light, information is gained for every measurement point. The chemical composition of the scanned area is visualized in a 2D map with spatial resolutions down to 0.1 mm x 0.1 mm. These 2D maps can be converted into classic depth profiles, as typically seen in the results for chloride concentration provided by chemical analyses such as potentiometric titration.
However, the 2D visualization offers many advantages, such as illustrating chlorine-carrying cracks, directly imaging the carbonation depth and, in general, allowing the separation of the aggregates from the cement paste. By calibrating the LIBS system, not only qualitative but also quantitative results can be obtained; these quantitative results can moreover be referenced to the cement paste alone, excluding the aggregates. An additional advantage of LIBS is its mobility. With the mobile system located at BAM, on-site measurements are feasible; the mobile LIBS system has already been used to obtain chloride, sodium and sulfur concentrations on site at parking decks, bridges and sewage treatment plants, even under hard conditions like ongoing construction work or rough weather. All these prospects make LIBS a promising method for securing the integrity of infrastructure in a sustainable manner.
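Collapsing a 2D element map into a classic depth profile, as described above, amounts to averaging each row of the map parallel to the exposed surface. A toy sketch with hypothetical chloride values (not measured data):

```python
def depth_profile(element_map):
    """Collapse a 2D LIBS intensity/concentration map (a list of rows,
    rows parallel to the exposed surface, depth increasing row by row)
    into a 1D depth profile by averaging each row."""
    return [sum(row) / len(row) for row in element_map]

# Hypothetical chloride map: concentration decaying with depth, with
# one crack-affected pixel in the second row
cl_map = [
    [1.0, 1.0, 1.0, 1.0],
    [0.5, 0.5, 0.9, 0.5],
    [0.1, 0.1, 0.1, 0.1],
]
profile = depth_profile(cl_map)  # roughly [1.0, 0.6, 0.1]
```

The averaging hides the crack-affected pixel, which is exactly why the abstract argues the full 2D map is more informative than the depth profile alone.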

Keywords: concrete, damage assessment, harmful substances, LIBS

Procedia PDF Downloads 171
469 Motor Speech Profile of Marathi Speaking Adults and Children

Authors: Anindita Banik, Anjali Kant, Aninda Duti Banik, Arun Banik

Abstract:

Speech is a complex, dynamic, unique motor activity through which we express thoughts and emotions and respond to and control our environment. The aim was to compare selected motor speech parameters and their sub-parameters across typical Marathi-speaking adults and children. A total of 300 subjects, males and females, were divided into Groups I, II and III. The subjects included reported no significant medical history and had a rating of 0-1 on the GRBAS scale. Recordings were obtained using three stimuli for the acoustic analysis of diadochokinetic (DDK) rate, second formant transition, and voice and tremor, together with their sub-parameters. These parameters were acoustically analyzed with the Motor Speech Profile software in VisiPitch IV. The statistical analyses applied descriptive statistics and two-way ANOVA. The results showed statistically significant differences across age groups and gender for the aforementioned parameters and their sub-parameters. In DDK, avp (ms) differed significantly only across age groups, whereas avr (/s) differed significantly across both age groups and gender; the rate increased with age. The second formant transition sub-parameter F2 magn (Hz) also showed a statistically significant difference across both age groups and gender, with the mean value increasing with age and females showing a higher mean than males. For F2 rate (/s), a statistically significant difference was observed across age groups, with the mean value again increasing with age. For voice and tremor, MFTR (%) showed a statistically significant difference across age groups and gender, as did RATR (Hz); in other words, the values of MFTR and RATR increased with age.
Thus, this study highlights the variation of motor speech parameters in the typical population, providing a baseline for comparison with individuals with motor speech disorders in assessment and management.
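The two DDK sub-parameters discussed above are reciprocals of each other: avp is the average period between syllables in ms, avr the average rate in syllables per second. A minimal sketch from hypothetical syllable onset times (illustrative only, not how the Motor Speech Profile software internally computes them):

```python
def ddk_measures(onsets_s):
    """Average DDK period avp (ms) and average DDK rate avr (/s)
    from a list of successive syllable onset times in seconds."""
    periods = [b - a for a, b in zip(onsets_s, onsets_s[1:])]
    avp_ms = 1000.0 * sum(periods) / len(periods)
    avr = 1000.0 / avp_ms
    return avp_ms, avr

# Hypothetical /pa/ repetitions at 5 syllables per second
onsets = [0.0, 0.2, 0.4, 0.6, 0.8]
avp, avr = ddk_measures(onsets)  # about 200 ms period, 5.0 /s rate
```

An increase in avr with age, as reported above, therefore corresponds directly to a decrease in avp.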

Keywords: adult, children, diadochokinetic rate, second formant transition, tremor, voice

Procedia PDF Downloads 294
468 Adequacy of Advanced Earthquake Intensity Measures for Estimation of Damage under Seismic Excitation with Arbitrary Orientation

Authors: Konstantinos G. Kostinakis, Manthos K. Papadopoulos, Asimina M. Athanatopoulou

Abstract:

An important area of research in seismic risk analysis is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. Several conventional intensity measures of ground motion have been used to estimate the damage potential to structures, yet none has proved able to adequately predict the seismic damage of an arbitrary structural system. Therefore, alternative advanced intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. The adequacy of a number of advanced earthquake intensity measures for predicting the structural damage of 3D R/C buildings under seismic excitation striking the building at an arbitrary incident angle is investigated in the present paper. To this end, a plan-symmetric and a plan-asymmetric 5-story R/C building are studied. The two buildings are subjected to 20 bidirectional earthquake ground motions; the two horizontal accelerograms of each ground motion are applied along horizontal orthogonal axes forming 72 different angles with the structural axes. The response is computed by non-linear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as an overall structural damage index. The values of these damage measures determined for an incident angle of 0°, as well as their maxima over all incident angles, are correlated with 9 structure-specific ground motion intensity measures. The research identified certain intensity measures that exhibited strong correlation with the seismic damage of the two buildings; however, their adequacy for estimating the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
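The correlation analysis described above can be illustrated with Pearson's r between an intensity measure and a damage measure across records; the sketch below uses hypothetical spectral accelerations and drifts, not the study's data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical spectral accelerations (g) and maximum interstory drifts (%)
im = [0.10, 0.20, 0.30, 0.40, 0.50]
drift = [0.25, 0.45, 0.70, 0.85, 1.10]
r = pearson_r(im, drift)  # strongly positive here, r > 0.99
```

An intensity measure is judged adequate when such correlations stay high across records, incident angles and the chosen response parameter.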

Keywords: damage indices, non-linear response, seismic excitation angle, structure-specific intensity measures

Procedia PDF Downloads 489
467 Fabrication of Coatable Polarizer by Guest-Host System for Flexible Display Applications

Authors: Rui He, Seung-Eun Baik, Min-Jae Lee, Myong-Hoon Lee

Abstract:

The polarizer is one of the most essential optical elements in LCDs. Currently, the most widely used polarizers for LCDs are derivatives of the H-sheet polarizer. There is a need for coatable polarizers that are much thinner and more stable than H-sheet polarizers. One possible approach to obtain thin, stable, and coatable polarizers is based on the use of a highly ordered guest-host system. In our research, we aimed to fabricate a coatable polarizer based on a highly ordered liquid crystalline monomer and dichroic dye ‘guest-host’ system, in which anisotropic absorption of light is achieved by aligning a dichroic dye (guest) through the cooperative motion of the ordered liquid crystal (host) molecules. Firstly, we designed and synthesized a new reactive liquid crystalline monomer containing polymerizable acrylate groups as the ‘host’ material. The structure was confirmed by 1H-NMR and IR spectroscopy. The liquid crystalline behavior was studied by differential scanning calorimetry (DSC) and polarized optical microscopy (POM). It was confirmed that the monomers possess a highly ordered smectic phase at relatively low temperature. Then, the photocurable ‘guest-host’ system was prepared by mixing the liquid crystalline monomer, dichroic dye, and photoinitiator. Coatable polarizers were fabricated by spin-coating the above mixture on a substrate with an alignment layer. In-situ photopolymerization was carried out at room temperature by irradiating with UV light, resulting in the formation of a crosslinked structure that stabilized the aligned dichroic dye molecules. Finally, the dichroic ratio (DR), order parameter (S), and polarization efficiency (PE) were determined by polarized UV/Vis spectroscopy. We prepared the coatable polarizers using different types of dichroic dyes to meet the requirements of display applications.
The results reveal that the coatable polarizers at a thickness of 8 μm exhibited DR = 12-17 and relatively high PE (>96%), with the highest PE = 99.3%, showing their potential for LCD or flexible display applications.
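The reported quantities can be computed from polarized UV/Vis measurements. The sketch below uses the standard guest-host relations DR = A∥/A⊥ and S = (DR - 1)/(DR + 2); the PE convention (crossed/parallel polarizer-pair transmittances) and all numerical values are assumptions for illustration, not the paper's data:

```python
import math

def dichroic_ratio(a_par, a_perp):
    """DR = A_parallel / A_perpendicular from polarized absorbances."""
    return a_par / a_perp

def order_parameter(dr):
    """Standard guest-host order parameter: S = (DR - 1) / (DR + 2)."""
    return (dr - 1.0) / (dr + 2.0)

def polarization_efficiency(t_par, t_cross):
    """One common convention: PE = sqrt((Tp - Tc) / (Tp + Tc)) * 100,
    with Tp/Tc the parallel/crossed polarizer-pair transmittances."""
    return math.sqrt((t_par - t_cross) / (t_par + t_cross)) * 100.0

dr = dichroic_ratio(1.30, 0.09)            # hypothetical absorbances
s = order_parameter(dr)
pe = polarization_efficiency(0.35, 0.002)  # hypothetical transmittances
```

With these invented inputs the values fall in the ranges the abstract reports (DR of 12-17, PE above 96%).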

Keywords: coatable polarizer, display, guest-host, liquid crystal

Procedia PDF Downloads 247
466 Good Functional Outcome after Late Surgical Treatment for Traumatic Rotator Cuff Tear, a Retrospective Cohort Study

Authors: Soheila Zhaeentan, Anders Von Heijne, Elisabet Hagert, André Stark, Björn Salomonsson

Abstract:

Recommended treatment for traumatic rotator cuff tear (TRCT) is surgery within a few weeks after injury if the diagnosis is made early, especially if a functional impairment of the shoulder exists. This may lead to the assumption that a poor outcome can be expected from delayed surgical treatment, when the patient is diagnosed at a later stage. The aim of this study was to investigate whether a surgical repair later than three months after injury may still result in successful outcomes and patient satisfaction. There is evidence in the literature that good results of treatment can be expected up to three months after the injury, but little is known of later treatment with cuff repair. 73 patients (75 shoulders), 58 males/17 females, mean age 59 (range 34-72), who had undergone surgical intervention for TRCT between January 1999 and December 2011 at our clinic, were included in this study. Patients were assessed by MRI investigation, clinical examination, Western Ontario Rotator Cuff index (WORC), Oxford Shoulder Score, Constant-Murley Score, EQ-5D, and patient subjective satisfaction at follow-up. The patients treated surgically within three months (<12 weeks) after injury (39 cases) were compared with patients treated more than three months (≥12 weeks) after injury (36 cases). WORC was used as the primary outcome measure and the other variables as secondary. A senior consultant radiologist, blinded to patient category and clinical outcome, evaluated all MRI images. Rotator cuff integrity and the presence of arthritis, fatty degeneration, and muscle atrophy were evaluated in all cases. The average follow-up time was 56 months (range 14-149) and the average time from injury to repair was 16 weeks (range 3-104). No statistically significant differences were found for any of the assessed parameters or scores between the two groups.
The mean WORC score was 77 for both groups (early group, range 25-100; late group, range 27-100; p = 0.86), as were the Constant-Murley Score (p = 0.91), Oxford Shoulder Score (p = 0.79), and EQ-5D index (p = 0.86). The re-tear frequency was 24% for both groups, and the patients with re-tear reported less satisfaction with the outcome. Discussion and conclusion: This retrospective cohort study of 75 shoulders shows that surgical repair of TRCT performed later than three months, and up to one year, after injury may result in good functional outcomes and patient satisfaction. However, this does not motivate an intentional delay in surgery when there is an indication for surgical repair, as such a delay may adversely affect the possibility to perform a repair. Our results show that surgeons may safely consider surgical repair even if a delay in diagnosis has occurred.

Keywords: traumatic rotator cuff injury, time to surgery, surgical outcome, retrospective cohort study

Procedia PDF Downloads 214
465 Vibration and Freeze-Thaw Cycling Tests on Fuel Cells for Automotive Applications

Authors: Gema M. Rodado, Jose M. Olavarrieta

Abstract:

Hydrogen fuel cell technologies have experienced a great boost in the last decades, significantly increasing the production of these devices for both stationary and portable (mainly automotive) applications; this is driven by two main factors: environmental pollution and energy shortage. A fuel cell is an electrochemical device that converts chemical energy directly into electricity by using hydrogen and oxygen gases as reactive components and obtaining water and heat as byproducts of the chemical reaction. Fuel cells, specifically those of Proton Exchange Membrane (PEM) technology, are considered an alternative to internal combustion engines, mainly because of their very low emissions (almost zero), high efficiency, and low operating temperatures (<373 K). The introduction and use of fuel cells in the automotive market require the development of standardized and validated procedures to test and evaluate their performance in different environmental conditions, including vibrations and freeze-thaw cycles. Vibration and extremely low/high temperatures can affect the physical integrity or even the proper operation and performance of a fuel cell stack placed in a vehicle in circulation or exposed to different climatic conditions. The main objective of this work is the development and validation of vibration and freeze-thaw cycling test procedures for fuel cell stacks that can be used in a vehicle, in order to consolidate their safety, performance, and durability. In this context, different experimental tests were carried out at the facilities of the National Hydrogen Centre (CNH2). The experimental equipment used was: a vibration platform (shaker) for vibration test analysis on fuel cells in the three axis directions with different vibration profiles; a walk-in climatic chamber to test the starting, operating, and stopping behavior of fuel cells under defined extreme conditions;
and a test station designed and developed by the CNH2 to test and characterize PEM fuel cell stacks up to 10 kWe. A 5 kWe PEM fuel cell stack in off-operation mode was used to carry out two independent experimental procedures. On the one hand, the fuel cell was subjected to a sinusoidal vibration test on the shaker in the three axis directions, defined by acceleration and amplitudes in the frequency range of 7 to 200 Hz for a total of three hours in each direction. On the other hand, the climatic chamber was used to simulate freeze-thaw cycles over a temperature range between 313 K and 243 K, with an average relative humidity of 50% and a recommended ramp-up and ramp-down rate of 1 K/min. The polarization curve and gas leakage rate were determined before and after the vibration and freeze-thaw tests at the fuel cell stack test station to evaluate the robustness of the stack. The results were very similar before and after the tests, which indicates that the tests did not affect the fuel cell stack structure and performance. The proposed procedures were verified and can be used as a starting point for other tests with different fuel cells.
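One simple way to quantify "very similar" results before and after such tests is the maximum relative deviation between the two polarization curves at matched current set-points. A hedged sketch with invented stack voltages; the function, tolerance, and numbers are illustrative assumptions, not CNH2's actual acceptance criterion:

```python
def max_relative_deviation(curve_before, curve_after):
    """Largest relative voltage deviation (%) between two polarization
    curves sampled at the same current set-points ({current_A: voltage_V})."""
    return max(abs(curve_after[i] - v) / v * 100.0
               for i, v in curve_before.items())

# Invented stack voltages (V) at four current set-points (A), measured
# before and after the vibration and freeze-thaw test campaigns.
before = {0: 95.0, 20: 78.5, 40: 71.2, 60: 65.0}
after = {0: 94.8, 20: 78.1, 40: 70.9, 60: 64.5}
deviation_pct = max_relative_deviation(before, after)
```

A deviation well under, say, 1% would support the conclusion that the tests left the stack's performance unaffected.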

Keywords: climatic chamber, freeze-thaw cycles, PEM fuel cell, shaker, vibration tests

Procedia PDF Downloads 109
464 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine

Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta

Abstract:

Stricter legislation and the population's greater awareness of gas emissions and their effects on the environment and on human health make the automotive industry reinforce research focused on reducing levels of contamination. This reduction can be achieved by implementing improvements in internal combustion engines that reduce both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of the variation of the swirl and tumble coefficients on the performance and on the air pollutant emissions of an engine. Initially, the discharge coefficient is calculated with the software Converge CFD 3D, given that it is an input parameter in GT-Power. Mesh sensitivity tests are performed on the 3D geometry built for this purpose, using the mass flow rate through the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are calculated so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced in their corresponding objects so that their influence can be observed in comparison with the results obtained previously.
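The discharge coefficient mentioned above is, in essence, the ratio of the CFD-computed mass flow to an ideal reference flow through the valve opening. The sketch below uses a simplified incompressible reference (actual engine codes typically use an isentropic compressible reference); the geometry and flow numbers are illustrative assumptions:

```python
import math

def discharge_coefficient(m_dot_actual, area, rho, delta_p):
    """Cd = measured (or CFD) mass flow / ideal mass flow through the
    valve gap, with the ideal flow taken from the incompressible
    Bernoulli estimate m_ideal = A * sqrt(2 * rho * delta_p)."""
    m_dot_ideal = area * math.sqrt(2.0 * rho * delta_p)
    return m_dot_actual / m_dot_ideal

# Hypothetical valve-lift point: CFD mass flow of 0.018 kg/s through a
# 4 cm^2 reference area, air at 1.18 kg/m^3, 5 kPa pressure drop.
cd = discharge_coefficient(0.018, 4e-4, 1.18, 5000.0)
```

A Cd well below 1, as here, reflects the losses at the valve gap; a table of Cd versus valve lift is what gets handed to the 1D code as an input parameter.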

Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient

Procedia PDF Downloads 94
463 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company

Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze

Abstract:

As a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation, rather than by controlling those sources, Taguchi Robust Design entails designing ideal goods by developing a product that has minimal variance in its characteristics and also meets the desired exact performance. This paper examined the concept of this approach and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The selection of the Taguchi L9 orthogonal array, based on the four parameters and three levels of variation for each parameter, revealed that, with a range of 2.17, waiting is the major waste that the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must be reduced to the barest minimum. After proposing -33.84 as the optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocated the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concluded by recommending SMED in order to drastically reduce set-up time, which leads to unnecessary waiting.
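For a smaller-the-better characteristic such as waste, the Taguchi signal-to-noise ratio is S/N = -10·log10((1/n)·Σy²). A minimal sketch with hypothetical waiting-time observations for one L9 trial row (the values are invented, not the study's measurements):

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean of y_i squared). Larger (less negative)
    S/N means less of the undesirable characteristic."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical waiting-time observations (minutes) for one trial row.
waiting = [52.0, 46.0, 49.0]
sn = sn_smaller_the_better(waiting)
```

Averaging such S/N values per factor level, and taking the range (max minus min of the level averages), is what produces the 2.17, 2.01, 1.46, and 0.82 rankings reported above.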

Keywords: lean production system, single minute exchange of dies, signal to noise ratio, Taguchi robust design, waste

Procedia PDF Downloads 118
462 Creation of Computerized Benchmarks to Facilitate Preparedness for Biological Events

Authors: B. Adini, M. Oren

Abstract:

Introduction: Communicable diseases and pandemics pose a growing threat to the well-being of the global population. A vital component of protecting public health is the creation and sustenance of continuous preparedness for such hazards. A joint Israeli-German task force was deployed in order to develop an advanced tool for self-evaluation of emergency preparedness for various types of biological threats. Methods: Based on a comprehensive literature review and interviews with leading content experts, an evaluation tool was developed based on quantitative and qualitative parameters and indicators. A modified Delphi process was used to achieve consensus among over 225 experts from Germany and Israel concerning the items to be included in the evaluation tool. The validity and applicability of the tool for medical institutions were examined in a series of simulation and field exercises. Results: Over 115 German and Israeli experts reviewed and examined the proposed parameters as part of the modified Delphi cycles. A consensus of over 75% of experts was attained for 183 out of 188 items. The relative importance of each parameter was rated as part of the Delphi process, in order to define its impact on the overall emergency preparedness. The parameters were integrated into computerized web-based software that enables the calculation of emergency preparedness scores for biological events. Conclusions: The parameters developed in the joint German-Israeli project serve as benchmarks that delineate actions to be implemented in order to create and maintain an ongoing preparedness for biological events. The computerized evaluation tool enables continuous monitoring of the level of readiness, so that strengths and gaps can be identified and corrected appropriately. Adoption of such a tool is recommended as an integral component of the quality assurance of public health and safety.
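The 75% consensus threshold and the importance-weighted scoring described above can be sketched as follows; the vote counts, weights, and function names are illustrative assumptions, not the tool's actual implementation:

```python
def consensus_reached(votes, threshold=0.75):
    """True if the share of experts endorsing an item (votes of 1) meets
    the modified-Delphi consensus threshold (>= 75% here)."""
    return sum(votes) / len(votes) >= threshold

def preparedness_score(items):
    """Importance-weighted preparedness score as a percentage: each item
    is (achievement in 0..1, relative importance weight)."""
    total_weight = sum(w for _, w in items)
    return 100.0 * sum(a * w for a, w in items) / total_weight

votes = [1] * 90 + [0] * 25              # 90 of 115 experts endorse -> ~78%
items = [(1.0, 3), (0.5, 2), (0.0, 1)]   # invented (achievement, weight) pairs
```

Weighting by the Delphi-rated importance is what lets a gap in a high-impact parameter pull the overall readiness score down more than a gap in a minor one.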

Keywords: biological events, emergency preparedness, bioterrorism, natural biological events

Procedia PDF Downloads 416
461 Research and Application of Multi-Scale Three Dimensional Plant Modeling

Authors: Weiliang Wen, Xinyu Guo, Ying Zhang, Jianjun Du, Boxiang Xiao

Abstract:

Reconstructing and analyzing three-dimensional (3D) models from in situ measured data is important for a number of research topics and applications in plant science, including plant phenotyping, functional-structural plant modeling (FSPM), plant germplasm resource protection, and agricultural technology popularization. Plant modeling spans many scales, from microscopic to macroscopic: cell, tissue, organ, plant, and canopy. The techniques currently used for data capture, feature analysis, and 3D reconstruction differ considerably across these scales. In this context, morphological data acquisition and 3D analysis and modeling of plants at different scales are introduced systematically. The data capture equipment commonly used at each scale is introduced, and then the hot issues and difficulties of each scale are described. Some examples are also given, such as micron-scale phenotyping quantification and 3D microstructure reconstruction of vascular bundles within maize stalks based on micro-CT scanning, 3D reconstruction of leaf surfaces and feature extraction from point clouds acquired using a 3D handheld scanner, and plant modeling by combining parameter-driven 3D organ templates. Several application examples using the 3D models and analysis results of plants are also introduced. A 3D maize canopy was constructed, and light distribution was simulated within the canopy for the design of an ideal plant type. A grape tree model was constructed from 3D digitizing and point cloud data and used for the production of science content for the 11th International Conference on Grapevine Breeding and Genetics. Using the tissue models of plants, Google Glass was used to look around visually inside the plant to understand its internal structure. With the development of information technology, 3D data acquisition and data processing techniques will play a greater role in plant science.

Keywords: plant, three dimensional modeling, multi-scale, plant phenotyping, three dimensional data acquisition

Procedia PDF Downloads 271
460 Sea Surface Temperature and Climatic Variables as Drivers of North Pacific Albacore Tuna Thunnus Alalunga Time Series

Authors: Ashneel Ajay Singh, Naoki Suzuki, Kazumi Sakuramoto, Swastika Roshni, Paras Nath, Alok Kalla

Abstract:

Albacore tuna (Thunnus alalunga) is one of the commercially important species of tuna in the North Pacific region. Despite the long history of albacore fisheries in the Pacific, its ecological characteristics are not sufficiently understood. The effects of changing climate on numerous commercially and ecologically important fish species, including albacore tuna, have been documented over the past decades. The objective of this study was to explore and elucidate the relationship of environmental variables with the stock parameters of albacore tuna. The relationship of North Pacific albacore tuna recruitment (R), spawning stock biomass (SSB), and recruits per spawning biomass (RPS) from 1970 to 2012 with the environmental factors of sea surface temperature (SST), the Pacific decadal oscillation (PDO), the El Niño southern oscillation (ENSO), and the Pacific warm pool index (PWI) was examined. SST and PDO were used as independent variables together with SSB to construct stock reproduction models for R and RPS, as they showed the most significant relationships with the dependent variables. ENSO and PWI were excluded due to collinearity with SST and PDO. Model selection was based on R² values, the Akaike Information Criterion (AIC), and significant parameter estimates at p < 0.05. Models with single independent variables of SST, PDO, ENSO, and PWI were also constructed to illuminate their individual effects on albacore R and RPS. From the results, SST and PDO yielded the most significant models for reproducing the North Pacific albacore tuna R and RPS time series. SST has the highest impact on albacore R and RPS when comparing the models with single environmental variables. It is important for fishery managers and decision makers to incorporate these findings into their albacore tuna management plans for the North Pacific Ocean region.
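The single-variable model comparison via AIC can be sketched with ordinary least squares, using AIC = n·ln(RSS/n) + 2k; the short series below are invented standardized values, not the study's data:

```python
import math

def ols_aic(x, y, k=2):
    """Fit y = a + b*x by least squares; return (a, b, AIC) with
    AIC = n*ln(RSS/n) + 2*k, k being the number of fitted parameters."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, n * math.log(rss / n) + 2 * k

# Invented standardized series: recruitment R against SST and PDO indices.
r = [1.2, 0.8, 1.5, 0.6, 1.1, 0.9]
sst = [0.9, 0.4, 1.3, 0.2, 0.8, 0.5]
pdo = [0.3, 0.9, 0.2, 0.4, 0.6, 1.0]
_, _, aic_sst = ols_aic(sst, r)
_, _, aic_pdo = ols_aic(pdo, r)
```

The candidate with the lower AIC (here the SST model, by construction of the toy data) is the better-supported model, mirroring the study's selection criterion.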

Keywords: Albacore tuna, El Niño southern oscillation, Pacific decadal oscillation, sea surface temperature

Procedia PDF Downloads 221
459 Comparative Evaluation of Pentazocine and Tramadol as Pre-Emptive Analgesics for Ovariohysterectomy in Female Dogs

Authors: Venkatgiri, Ranganath, L. Nagaraja, B. N. Sagar Pandav, S. M. Usturge, D. Dilipkumar, B. V. Shivprakash, B. Bhagwanthappa, D. Jahangir

Abstract:

A comparative evaluation of Tramadol and Pentazocine as pre-emptive analgesics in clinical cases of female dogs undergoing ovariohysterectomy was undertaken in this study. The following physiological parameters were assessed: rectal temperature (°F), respiratory rate (breaths/min), and heart rate (beats/min). Hematological and biochemical parameters, viz., total erythrocyte count (TEC) (millions/cmm), hemoglobin (g%), total leucocyte count (TLC) (thousands/cmm), differential leucocyte count (DLC) (%), serum creatinine (mg/dl), plasma protein (mg/dl), and blood glucose (mg/dl), were estimated before surgery, after administration of general anaesthesia, and at the immediate postoperative periods of 0, 12, and 24 hr. The Mean Total Pain Score (MTPS), which includes measurement of posture, vocalization, activity level, response to palpation, and agitation, was calculated before surgery, after administration of general anesthesia, and at post-operative periods of 1, 2, 4, 6, 12, and 24 hrs. Each of these parameters was given a score of 0, 1, 2, or 3, with a maximum score of 4. Results were obtained in all three groups, including the control group, and showed minor but significant alterations in physiological, hematological, and biochemical parameters. MTPS showed a significant alteration when compared with the control group. In conclusion, Tramadol was found to be a good analgesic with up to 8 hrs of analgesic effect, while Pentazocine was superior in post-operative pain management because this group of dogs experienced less surgical stress, required a lower anesthetic dose, recovered earlier, and had lower MTPS scores.

Keywords: dog, pentazocine, tramadol, ovariohysterectomy

Procedia PDF Downloads 156
458 Dislocation Density-Based Modeling of the Grain Refinement in Surface Mechanical Attrition Treatment

Authors: Reza Miresmaeili, Asghar Heydari Astaraee, Fereshteh Dolati

Abstract:

In the present study, an analytical model based on a dislocation density model was developed to simulate grain refinement in surface mechanical attrition treatment (SMAT). The correlation between SMAT time and the development of plastic strain, on one hand, and dislocation density evolution, on the other hand, was established to simulate the grain refinement in SMAT. A dislocation density-based constitutive material law was implemented using a VUHARD subroutine. A random sequence of shots is taken into consideration for the multiple-impact model, implemented in the Python programming language using a random function. The simulation technique was to model each impact in a separate run and then transfer the results of each run as initial conditions for the next run (impact). The developed Finite Element (FE) model of multiple impacts describes the coverage evolution in SMAT. Simulations were run to coverage levels as high as 4500%. It is shown that the coverage implemented in the FE model is equal to the experimental coverage, and that the numerical SMAT coverage parameter conforms adequately to the well-known Avrami model. Comparison between numerical results and experimental measurements of residual stresses and the depth of the deformation layers confirms the performance of the established FE model for surface engineering evaluations in SMAT. X-ray diffraction (XRD) studies of grain refinement, including the resultant grain size and dislocation density, were conducted to validate the established model; the full width at half-maximum of the XRD profiles can be used to measure the grain size. Numerical results and experimental measurements of grain refinement show good agreement and demonstrate the capability of the established FE model to predict the gradient microstructure in SMAT.
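The Avrami model referenced above links nominal coverage (total impacted area summed over all shots, which can exceed 100%, as in the 4500% level mentioned) to the true covered fraction of the surface. A minimal sketch, assuming the common exponential form of that relation:

```python
import math

def true_coverage(nominal_pct):
    """Avrami-type relation between nominal coverage (impacted area summed
    over all random shots, may exceed 100%) and the true covered surface
    fraction: C_true = 100 * (1 - exp(-C_nominal / 100))."""
    return 100.0 * (1.0 - math.exp(-nominal_pct / 100.0))
```

One nominal pass (100%) covers only about 63% of the surface because random shots overlap; at 4500% nominal coverage, the true coverage is saturated at effectively 100%, which is why such high levels are simulated.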

Keywords: dislocation density, grain refinement, severe plastic deformation, simulation, surface mechanical attrition treatment

Procedia PDF Downloads 129
457 Geostatistical Models to Correct Salinity of Soils from Landsat Satellite Sensor: Application to the Oran Region, Algeria

Authors: Dehni Abdellatif, Lounis Mourad

Abstract:

The application of spatial geostatistics to materials science, precision agriculture, and agricultural statistics permits the management and monitoring of water and groundwater quality in relation to salt-affected soils. Previous experience with data acquisition and spatial preparation studies on optical and multispectral data has facilitated the integration of correction models relating electrical conductivity to soil temperature (soil horizons). This physical parameter has been extracted from the calibration of the thermal band (LANDSAT ETM+ band 6) with a radiometric correction. Our study area is the Oran region (north-western Algeria). Different spectral indices are determined, such as the salinity and sodicity index, the Combined Spectral Reflectance Index (CSRI), the Normalized Difference Vegetation Index (NDVI), emissivity, albedo, and the Sodium Adsorption Ratio (SAR). The geostatistical modeling of electrical conductivity (salinity) appears to be a useful decision support system for estimating corrected electrical resistivity related to the temperature of surface soils, according to conversion models by substitution to the reference temperature of 25°C (at which the hydrochemical data are collected). The brightness temperatures extracted from satellite reflectance (LANDSAT ETM+) are used in consistency models to estimate electrical resistivity. Confusions arising from the effects of salt stress and water stress were removed, followed by seasonal application of geostatistical analysis with Geographic Information System (GIS) techniques to investigate and monitor the variation of electrical conductivity in the alluvial aquifer of Es-Sénia for salt-affected soils.
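The conversion of a field conductivity reading to the 25°C reference mentioned above is commonly done with a linear temperature-compensation model; the coefficient used below (about 1.91% per °C) is a widely used value assumed here for illustration, not a figure taken from the paper:

```python
def ec_at_25(ec_t, temp_c, alpha=0.0191):
    """Temperature-compensate electrical conductivity to the 25 degC
    reference: EC25 = EC_t / (1 + alpha * (temp_c - 25)).
    alpha ~ 0.0191 per degC is a commonly used coefficient (assumption)."""
    return ec_t / (1.0 + alpha * (temp_c - 25.0))

# Example: a field reading of 1.2 dS/m at a soil temperature of 32 degC
# (here the temperature would come from the calibrated thermal band).
ec25 = ec_at_25(1.2, 32.0)
```

Corrected resistivity then follows as the reciprocal of the compensated conductivity, which is what the consistency models estimate from brightness temperature.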

Keywords: geostatistical modelling, landsat, brightness temperature, conductivity

Procedia PDF Downloads 432
456 Spatial Variation of WRF Model Rainfall Prediction over Uganda

Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Triphonia Ngailo

Abstract:

Rainfall is a major climatic parameter affecting many sectors such as health, agriculture, and water resources. Its quantitative prediction remains a challenge to weather forecasters, although numerical weather prediction models are increasingly being used for rainfall prediction. The performance of six convective parameterization schemes of the Weather Research and Forecasting (WRF) model, namely the Kain-Fritsch, Betts-Miller-Janjic, Grell-Devenyi, Grell-3D, Grell-Freitas, and new Tiedtke schemes, regarding quantitative rainfall prediction over Uganda is investigated using the root mean square error for the March-May (MAM) 2013 season. The MAM 2013 seasonal rainfall amount ranged from 200 mm to 900 mm over Uganda, with the northern region receiving a comparatively lower rainfall amount (200-500 mm), western Uganda 270-550 mm, eastern Uganda 400-900 mm, and the Lake Victoria basin 400-650 mm. A spatial variation in the rainfall amount simulated by the different convective parameterization schemes was noted, with the Kain-Fritsch scheme overestimating the rainfall amount over northern Uganda (300-750 mm) but presenting comparable rainfall amounts over eastern Uganda (400-900 mm). The Betts-Miller-Janjic, Grell-Devenyi, and Grell-3D schemes underestimated the rainfall amount over most parts of the country, especially the eastern region (300-600 mm). The Grell-Freitas scheme captured the rainfall amount over the northern region (250-450 mm) but underestimated rainfall over the Lake Victoria basin (150-300 mm), while the new Tiedtke scheme generally underestimated the rainfall amount over many areas of Uganda. For deterministic rainfall prediction, the Grell-Freitas scheme is recommended over northern Uganda, while the Kain-Fritsch scheme is recommended over the eastern region.
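The root-mean-square-error comparison between schemes can be sketched as follows; the station totals are invented for illustration and are not the study's data:

```python
import math

def rmse(predicted, observed):
    """Root mean square error between simulated and observed rainfall."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

# Hypothetical MAM seasonal totals (mm) at five stations: observations
# versus two convective parameterization schemes.
obs = [420.0, 510.0, 630.0, 380.0, 550.0]
kain_fritsch = [460.0, 600.0, 700.0, 430.0, 640.0]
grell_freitas = [400.0, 480.0, 600.0, 360.0, 520.0]
```

Ranking schemes by RMSE per region is how the recommendations above (Grell-Freitas in the north, Kain-Fritsch in the east) would be derived from the verification data.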

Keywords: convective parameterization schemes, March-May 2013 rainfall season, spatial variation of parameterization schemes over Uganda, WRF model

Procedia PDF Downloads 305
455 Glacier Dynamics and Mass Fluctuations in Western Himalayas: A Comparative Analysis of Pir-Panjal and Greater Himalayan Ranges in Jhelum Basin, India

Authors: Syed Towseef Ahmad, Fatima Amin, Pritha Acharya, Anil K. Gupta, Pervez Ahmad

Abstract:

Glaciers, being sentinels of climate change, are among the most visible evidence of global warming. Given the unavailability of observed field-based data, this study has focussed on the use of geospatial techniques to obtain information about the glaciers of the Pir-Panjal (PPJ) and Great Himalayan (GHR) ranges of the Jhelum Basin. These glaciers need to be monitored in line with the variations in climatic conditions because they contribute significantly to various sectors in the region. The main aim of this study is to map the glaciers in the two adjacent regions (PPJ and GHR) of the north-western Himalayas, which have different topographies, and to compare the changes in various glacial attributes over the period 1990-2020. During the last three decades, the PPJ and GHR regions have experienced deglaciation of around 36 and 26 percent, respectively. The mean elevation of GHR glaciers has increased from 4312 to 4390 masl, while that of PPJ glaciers has increased from 4085 to 4124 masl over the observation period. Using the accumulation area ratio (AAR) method, mean mass balances of -34.52 and -37.6 cm w.e. were recorded for the glaciers of GHR and PPJ, respectively. The difference in areal and mass loss of glaciers between these regions may be due to (i) the smaller size of PPJ glaciers, which are all smaller than 1 km² and are thus more responsive to climate change, (ii) the higher mean elevation of GHR glaciers, and (iii) local variations in climatic variables in these glaciated regions. Time series analysis of climate variables indicates that both the mean maximum and minimum temperatures of Qazigund station (Tmax = 19.2 °C, Tmin = 6.4 °C) are comparatively higher than those of Pahalgam station (Tmax = 18.8 °C, Tmin = 3.2 °C).
Except for precipitation at Qazigund (slope = -0.3 mm a⁻¹), each climatic parameter has shown an increasing trend during these three decades, and with slopes of 0.04 and 0.03 °C a⁻¹, the positive trends in Tmin (Pahalgam) and Tmax (Qazigund) are statistically significant (p ≤ 0.05).
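The trend-slope significance test referenced above (slopes of 0.04 and 0.03 °C a⁻¹ significant at p ≤ 0.05) can be sketched with ordinary least squares; the synthetic series and the critical t value (roughly 2.04 for about 30 annual values) are assumptions for illustration:

```python
import math

def trend_slope(x, y):
    """OLS slope of y on x together with its t-statistic; for ~30 annual
    values, |t| above roughly 2.04 corresponds to p <= 0.05."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)   # standard error of the slope
    return b, b / se

# Synthetic Tmin series (degC) with an imposed drift of ~0.04 degC/yr.
years = list(range(1990, 2020))
tmin = [3.0 + 0.04 * (y - 1990) + 0.15 * (-1) ** i for i, y in enumerate(years)]
slope, t_stat = trend_slope(years, tmin)
```

With a clear drift and modest noise, the recovered slope is close to the imposed 0.04 °C a⁻¹ and the t-statistic comfortably exceeds the significance threshold.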

Keywords: glaciers, climate change, Pir-Panjal, greater Himalayas, mass balance

Procedia PDF Downloads 73
454 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language

Authors: Ghazal Faraj, András Micsik

Abstract:

Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes and RDF graphs, named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value type, and other constraints. Moreover, the framework of SHACL supports high-level validation by expressing more complex conditions in languages such as SPARQL protocol and RDF Query Language (SPARQL). SHACL includes two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components. While SHACL-SPARQL is an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as the enhancement of different datasets linking is a sustainable process. The conventional validation methods are the semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistency, while the latter validates the data contradiction. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate as it does not test the mapping model comprehensively. Therefore, there is a serious need to expose a new methodology that covers the entire validation aspects for linking and mapping diverse datasets. Our goal is to conduct a new approach to achieve optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. 
Subsequently, the proper environment to run SHACL and its shapes graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner in the Protégé program. The applied validation falls under multiple categories: a) data-type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is typed as xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; and b) data-integrity validation, which detects inconsistent data, for instance, whether a person's birthdate occurred before the creation dates of any linked events. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
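The data-integrity category can be illustrated procedurally with a minimal sketch in plain Python (a SHACL processor would express this declaratively as a constraint; the dictionary layout and function name here are hypothetical, not taken from the authors' dataset):

```python
from datetime import date

def birthdate_precedes_events(person):
    # Data-integrity check analogous to a SHACL-SPARQL constraint:
    # a person's birthdate must precede the creation date of every
    # event linked to that person.
    birth = person["birthdate"]
    return all(birth < event["creation_date"]
               for event in person["linked_events"])

person = {
    "birthdate": date(1856, 7, 10),
    "linked_events": [
        {"creation_date": date(1891, 5, 1)},
        {"creation_date": date(1901, 3, 12)},
    ],
}
print(birthdate_precedes_events(person))  # → True
```

A SHACL shapes graph encodes the same rule once, and the processor reports every node in the data graph that violates it, which is what removes the manual, query-by-query inspection described above.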

Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping

Procedia PDF Downloads 245
453 Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset

Authors: Sinarwati Mohamad Suhaili, Naomie Salim, Mohamad Nazim Jambli

Abstract:

Sequence-to-sequence (seq2seq) models augmented with attention mechanisms play an increasingly important role in automated customer service. These models, which can recognize complex relationships between input and output sequences, are crucial for optimizing chatbot responses. Central to these mechanisms are the neural attention weights that determine the model's focus during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention-weighting functions within seq2seq models, particularly in the domain of chatbots using the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions—dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter—in neural generative seq2seq models. Using the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores computed under both greedy and beam search strategies with a beam size of k=3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search, k=3). These results emphasize the crucial influence of the attention-scoring function on the performance of seq2seq models for chatbots. In particular, the model that integrates tanh activation proves to be a promising approach to improving chatbot quality in the customer-support context.
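The four scoring functions compared can be sketched in plain Python on small vectors; the dot, general, and additive forms follow the standard Luong/Bahdanau formulations, while the tanh-extended multiplicative variant is assumed here to apply tanh to the general score (the authors' exact parameterization may differ):

```python
import math

def dot_score(s, h):
    # Luong dot score: s · h (decoder state s, encoder state h)
    return sum(si * hi for si, hi in zip(s, h))

def general_score(s, h, W):
    # multiplicative/general score: s^T W h
    Wh = [sum(W[i][j] * h[j] for j in range(len(h))) for i in range(len(W))]
    return dot_score(s, Wh)

def additive_score(s, h, W1, W2, v):
    # Bahdanau additive score: v^T tanh(W1 s + W2 h)
    z = [math.tanh(sum(W1[i][j] * s[j] for j in range(len(s)))
                   + sum(W2[i][j] * h[j] for j in range(len(h))))
         for i in range(len(v))]
    return sum(vi * zi for vi, zi in zip(v, z))

def tanh_general_score(s, h, W):
    # tanh-extended multiplicative score (assumed form): tanh(s^T W h)
    return math.tanh(general_score(s, h, W))

def attention_weights(score_fn, s, encoder_states, *params):
    # softmax over the scores of all encoder states gives the
    # attention weights used to form the context vector
    scores = [score_fn(s, h, *params) for h in encoder_states]
    m = max(scores)
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    return [e / total for e in exps]

# decoder state attending over three encoder states
weights = attention_weights(dot_score, [1.0, 0.5], [[1, 0], [0, 1], [1, 1]])
```

Whichever scoring function is chosen, the resulting weights always sum to one; the functions differ only in how the raw compatibility between decoder and encoder states is measured.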

Keywords: attention weight, chatbot, encoder-decoder, neural generative attention, score function, sequence-to-sequence

Procedia PDF Downloads 71
452 An Exploratory Study on the Integration of Neurodiverse University Students into Mainstream Learning and Their Performance: The Case of the Jones Learning Center

Authors: George Kassar, Phillip A. Cartwright

Abstract:

Based on data collected from the Jones Learning Center (JLC), University of the Ozarks, Arkansas, U.S., this study explores the impact of inclusive classroom practices on neurodiverse college students and their subsequent academic performance after participating in integrative therapies designed to support students who are intellectually capable of obtaining a college degree but who require support for learning challenges owing to disabilities, AD/HD, or ASD. The purpose of this study is twofold. The first objective is to explore the general process, special techniques, and practices of the JLC inclusive program. The second objective is to identify and analyze the effectiveness of these processes, techniques, and practices in supporting the academic performance of enrolled college students with learning disabilities following integration into mainstream university learning. Integrity, transparency, and confidentiality were vital to the research. All questions were shared in advance with and confirmed by the concerned management at the JLC. While administering the questionnaire and conducting the interviews, the purpose, scope, aims, and objectives of the study were clearly explained to all participants beforehand. Confidentiality of all participants was assured by using encrypted identification of individuals, thereby limiting data access to the researcher alone, and by storing data in a secure location. Respondents were also informed that their participation in this research was voluntary and that they could withdraw at any time prior to submission. Ethical consent was obtained from the participants before proceeding with video recording of the interviews. This research uses a mixed-methods approach. The research design involves collecting, analyzing, and "mixing" quantitative and qualitative methods and data to enable the research inquiry.
The research process is organized around a five-pillar approach. The first three pillars focus on testing the first hypothesis (H1), directed toward determining the extent to which the academic performance of JLC students improved after involvement with the comprehensive JLC special program. The other two pillars relate to the second hypothesis (H2), directed toward determining the extent to which the collective and applied knowledge at the JLC is distinctive from typical practices in the field. The data collected for the research were obtained from three sources: 1) secondary data in the form of Grade Point Averages (GPAs) received from the registrar, 2) primary data collected through a structured questionnaire administered to students and alumni at the JLC, and 3) primary data collected through interviews conducted with staff and educators at the JLC. The significance of this study is twofold. First, it validates the effectiveness of the special program at the JLC for college-level students who learn differently. Second, it identifies the distinctiveness of the mix of techniques, methods, and practices, including the special individualized and personalized one-on-one approach at the JLC.

Keywords: education, neurodiverse students, program effectiveness, Jones Learning Center

Procedia PDF Downloads 66
451 Modeling of Timing in a Cyber Conflict to Inform Critical Infrastructure Defense

Authors: Brian Connett, Bryan O'Halloran

Abstract:

System assets within critical infrastructures once seemed safe from exploitation or attack by nefarious cyberspace actors. Now, critical infrastructure is a target, and the resources to exploit its cyber-physical systems exist. These resources are characterized in terms of patience, stealth, replicability, and extraordinary robustness. System owners are obligated to maintain a high level of protection measures. The difficulty lies in knowing when to fortify a critical infrastructure against an impending attack. Models currently exist that demonstrate the value of knowing the attacker's capabilities in the cyber realm and the strength of the target. The shortcoming of these models is that they are not designed to respond to the inherent fast timing of an attack, an impetus that can be derived from open-source reporting, common knowledge of exploits, and the physical architecture of the infrastructure. A useful model will inform system owners how to align infrastructure architecture in a manner that is responsive to the capability, willingness, and timing of the attacker. This research group has used an existing theoretical model to estimate parameters and, through analysis, to develop a decision tool for would-be target owners. The continuation of the research develops this model further by estimating its variable parameters. Understanding these parameter estimations will uniquely position the decision maker to posture defenses, having revealed an attacker's vulnerabilities, persistence, and stealth. This research explores different approaches to improving current attacker-defender models that focus on cyber threats. An existing foundational model takes the point of view of an attacker who must decide what cyber resource to use and when to use it to exploit a system vulnerability. That model is valuable for estimating parameters and, through analysis, for developing a decision tool for would-be target owners.
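The attacker's use-now-versus-wait decision at the heart of such timing models can be made concrete with a toy expected-value comparison (an illustrative sketch under assumed parameter names, not the authors' model): stealth is taken as the probability the resource survives one use undetected, and persistence as the probability an unused resource remains viable for the next opportunity.

```python
def expected_value_use_now(stakes, stealth, future_value):
    # gain the current stakes; with probability `stealth` the resource
    # survives the use undetected and retains its future value
    return stakes + stealth * future_value

def expected_value_wait(persistence, future_value):
    # forgo the current stakes; with probability `persistence` the
    # unused resource remains viable for a later opportunity
    return persistence * future_value

def should_use_now(stakes, stealth, persistence, future_value):
    # act when the expected value of using the resource now
    # is at least the expected value of holding it
    return (expected_value_use_now(stakes, stealth, future_value)
            >= expected_value_wait(persistence, future_value))

# high current stakes favor immediate use; low stakes with a
# persistent, fragile resource favor waiting
print(should_use_now(10, 0.5, 0.9, 20))   # → True
print(should_use_now(1, 0.2, 0.9, 100))   # → False
```

A defender who can estimate the attacker's stealth and persistence can invert the same comparison to anticipate when an exploit is likely to be used and fortify accordingly.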

Keywords: critical infrastructure, cyber physical systems, modeling, exploitation

Procedia PDF Downloads 187