Search results for: interval graph
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1257

957 Hybrid Collaborative-Context Based Recommendations for Civil Affairs Operations

Authors: Patrick Cummings, Laura Cassani, Deirdre Kelliher

Abstract:

In this paper we present findings from a research effort to apply a hybrid collaborative-context approach to a system focused on Marine Corps civil affairs data collection, aggregation, and analysis called the Marine Civil Information Management System (MARCIMS). The goal of this effort is to provide operators with information to make sense of the interconnectedness of entities and relationships in their area of operation and to discover existing data to support civil military operations. Our recommendation engine was designed to overcome several technical challenges, including 1) ensuring models were robust to the relatively small amount of data collected by the Marine Corps civil affairs community; 2) finding methods to recommend novel data for which no interactions have been captured; and 3) overcoming confirmation bias by ensuring that content relevant to the mission was recommended even when obscure or less well known. We address these challenges by implementing a combination of collective matrix factorization (CMF) and graph-based random walks to provide recommendations to civil military operations users. We also present a method to resolve the computational complexity inherent in highly connected nodes through a precomputed process.
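As a rough illustration of the graph-based component, a random walk with restart over a small user-item interaction graph can score unseen items for a user; the minimal Python sketch below uses made-up entities and is not the MARCIMS implementation.

```python
# A minimal sketch of graph-based random-walk scoring over a small bipartite
# user-item graph (rows/cols: u0, u1, i0, i1, i2). Data is illustrative.
import numpy as np

A = np.array([
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],
], dtype=float)

P = A / A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix

def random_walk_with_restart(P, seed, alpha=0.15, iters=100):
    """Stationary visit probabilities of a walk restarting at `seed`."""
    r = np.zeros(P.shape[0]); r[seed] = 1.0
    p = r.copy()
    for _ in range(iters):
        p = (1 - alpha) * P.T @ p + alpha * r
    return p

scores = random_walk_with_restart(P, seed=0)
print("item scores for user u0:", scores[2:])   # rank candidate items for u0
```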

Keywords: Recommendation engine, collaborative filtering, context based recommendation, graph analysis, coverage, civil affairs operations, Marine Corps

Procedia PDF Downloads 102
956 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J

Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa

Abstract:

A transportation network is a realization of a spatial network, describing a structure which permits either vehicular movement or the flow of some commodity. Examples include road networks, railways, air routes, and pipelines, among many others. The transportation network plays a vital role in maintaining the vigor of the nation’s economy. Hence, ensuring the network stays resilient at all times, especially in the face of challenges such as heavy traffic loads and large-scale natural disasters, is of utmost importance. In this paper, we used Neo4j to develop the graph. Neo4j is a leading open-source NoSQL native graph database that provides an ACID-compliant transactional backend to applications. The Southern California network model was developed in Neo4j, and the most critical and optimal nodes and paths in the network were obtained using centrality algorithms. The edge betweenness centrality algorithm calculates the critical or optimal paths using Yen's k-shortest paths algorithm, and the node betweenness centrality algorithm calculates the amount of influence a node has over the network. The preliminary study results confirm that Neo4j can be a suitable tool to study the important nodes and the critical paths for a major congested metropolitan area.
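For readers who want to reproduce the general workflow, a minimal sketch using the official Neo4j Python driver is shown below; it assumes a local instance with the Graph Data Science (GDS) plugin installed, and the URI, credentials, labels, and graph name are illustrative rather than the paper's actual model.

```python
# Hedged sketch: stream node betweenness centrality from a projected road
# graph via Neo4j GDS. Connection details and labels are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Project the road network into an in-memory GDS graph.
    session.run("CALL gds.graph.project('roads', 'Intersection', 'ROAD_SEGMENT')")
    # Stream betweenness centrality and report the most influential nodes.
    result = session.run(
        "CALL gds.betweenness.stream('roads') "
        "YIELD nodeId, score "
        "RETURN gds.util.asNode(nodeId).name AS node, score "
        "ORDER BY score DESC LIMIT 10"
    )
    for record in result:
        print(record["node"], record["score"])

driver.close()
```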

Keywords: critical path, transportation network, connectivity reliability, network model, Neo4j application, edge betweenness centrality index

Procedia PDF Downloads 103
955 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

Within volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. The proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, using a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This period, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing the predictive model. The algorithm integrates diverse data to construct a dynamic financial graph that correctly reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates into the model. The GCN is adept at learning the relational patterns among financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encode relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, the LSTM is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data, which lets the LSTM capture and predict temporal market trends accurately. In a comprehensive evaluation of the GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model showed superior predictive accuracy and profitability compared with conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in limiting large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to improve investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
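A minimal PyTorch sketch of this kind of GCN-LSTM fusion is given below; the architecture, dimensions, and data are illustrative and not the authors' exact model.

```python
# Sketch: one graph convolution mixes information across related assets each
# day, and an LSTM models the resulting sequence of market embeddings.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_norm):
        # x: (batch, nodes, features); a_norm: normalized adjacency (nodes, nodes)
        return torch.relu(a_norm @ self.lin(x))

class GCNLSTM(nn.Module):
    def __init__(self, n_feats, hidden, a_norm):
        super().__init__()
        self.a_norm = a_norm
        self.gcn = GCNLayer(n_feats, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # next-day movement estimate

    def forward(self, seq):
        # seq: (batch, time, nodes, features)
        b, t, n, f = seq.shape
        h = self.gcn(seq.reshape(b * t, n, f), self.a_norm).reshape(b, t, n, -1)
        h = h.mean(dim=2)                    # pool nodes into a market embedding
        out, _ = self.lstm(h)
        return self.head(out[:, -1])         # prediction from the last time step

# Toy run: 4 assets, 8 features, 30-day window.
a = torch.eye(4)                             # stand-in for the normalized graph
model = GCNLSTM(n_feats=8, hidden=16, a_norm=a)
pred = model(torch.randn(2, 30, 4, 8))
print(pred.shape)                            # torch.Size([2, 1])
```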

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 22
954 The Mediating Role of Social Connectivity in the Effect of Positive Personality and Alexithymia on Life Satisfaction: Analysis Based on Structural Equation Model

Authors: Yulin Zhang, Kaixi Dong, Guozhen Zhao

Abstract:

Background: Different levels of life satisfaction are associated with individual differences, and understanding the mechanism between them will help to enhance an individual’s well-being. On the one hand, traditional personality traits such as extraversion have, to the authors' best knowledge, been considered the most stable and effective predictors of life satisfaction. On the other hand, individual emotional differences such as alexithymia (difficulty identifying and describing one's own feelings) are also closely related to life satisfaction. With the development of positive psychology, positive personality traits such as virtues have attracted wide attention, and according to the broaden-and-build theory, social connectivity may mediate between emotion and life satisfaction. Therefore, the current study aims to explore the mediating role of social connectivity in the effect of positive personality and alexithymia on life satisfaction. Method: This study was conducted with 318 healthy Chinese college students whose ages ranged from 18 to 30. Positive personality (including interpersonal, vitality, and cautiousness dimensions) was measured by the Chinese version of the Values in Action Inventory of Strengths (VIA-IS). Alexithymia was measured by the Toronto Alexithymia Scale (TAS), life satisfaction was measured by the Satisfaction With Life Scale (SWLS), and social connectivity was measured by six items used in previous studies. Each scale showed high reliability and validity. The mediating model was examined in Mplus 7.2 within a structural equation modeling (SEM) framework. Findings: The model fitted well, and the results revealed that both positive personality (95% confidence interval of the indirect effect: [0.023, 0.097]) and alexithymia (95% confidence interval of the indirect effect: [-0.270, -0.089]) significantly predicted life satisfaction through social connectivity. In contrast to alexithymia, only positive personality also directly and significantly predicted life satisfaction (95% confidence interval of the direct effect: [0.109, 0.260]). Conclusion: Alexithymia predicts life satisfaction only through social connectivity, which emphasizes the importance of social bonding in enhancing the well-being of Chinese college students with alexithymia. Positive personality predicts life satisfaction both directly and through social connectivity, which has implications for enhancing the well-being of Chinese college students by cultivating their virtues and positive psychological qualities.
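The indirect (mediated) effect reported above can be illustrated with a simple percentile-bootstrap estimate; the numpy sketch below uses simulated data, not the study's.

```python
# Sketch: a*b indirect effect with a percentile bootstrap CI on toy data.
import numpy as np

rng = np.random.default_rng(0)
n = 318
x = rng.normal(size=n)                       # predictor (e.g., positive personality)
m = 0.4 * x + rng.normal(size=n)             # mediator (social connectivity)
y = 0.3 * m + 0.2 * x + rng.normal(size=n)   # outcome (life satisfaction)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]               # x -> m path
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # m -> y path, controlling x
    return a * b

boots = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```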

Keywords: alexithymia, life satisfaction, positive personality, social connectivity

Procedia PDF Downloads 146
953 Attentional Engagement for Movie

Authors: Wuon-Shik Kim, Hyoung-Min Choi, Jeonggeon Woo, Sun Jung Kwon, SeungHee Lee

Abstract:

Research on attentional engagement (AE) in movies using physiological signals is rare and controversial, so whether physiological responses can be applied to evaluate AE in actual movies is unclear. To clarify this, we measured the electrocardiogram and electroencephalogram (EEG) of 16 Japanese university students as they watched the American movie Iron Man. After the viewing, we evaluated the subjective AE and affection levels for 11 film content segments in Iron Man. Based on the self-reports for AE, we selected two film content segments as stimuli: Film Content 9, describing Tony Stark (the main character) flying through the night sky (with the highest AE score), and Film Content 1, describing Tony Stark and his colleagues telling indecent jokes (with the lowest score). We divided each of these two content segments into two time intervals. Results indicated that the Film Content by Interval interaction for HR was significant, F(1, 11) = 35.64, p < .001, η² = .76; while HR in Film Content 1 decreased, HR in Film Content 9 increased. In Film Content 9, the main effects of Interval on respiratory sinus arrhythmia (RSA) (F(1, 11) = 5.91, p < .05, η² = .35) and on the attention index of the EEG (F(1, 11) = 5.23, p < .05, η² = .37) were significant. The increase in RSA was significant (p < .05) as well, whereas that of the EEG attention index was nearly significant (p = .069). In conclusion, while RSA increases, HR decreases when people direct their attention toward ordinary film content. However, while paying attention to a film segment evoking excitement, HR as well as RSA can increase.
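The Film Content by Interval analysis is a standard two-way repeated-measures ANOVA; a hedged sketch with statsmodels' AnovaRM on simulated heart-rate data is shown below.

```python
# Sketch: 2 x 2 within-subject ANOVA (content x interval) on toy HR values.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subj in range(12):
    for content in ["film1", "film9"]:
        for interval in ["t1", "t2"]:
            drift = 1.5 if (content == "film9" and interval == "t2") else 0.0
            rows.append({"subject": subj, "content": content,
                         "interval": interval,
                         "hr": 70 + drift + rng.normal(scale=2)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="hr", subject="subject",
              within=["content", "interval"]).fit()
print(res)   # F and p for content, interval, and their interaction
```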

Keywords: attentional engagement, electroencephalogram, movie, respiratory sinus arrhythmia

Procedia PDF Downloads 342
952 A Review of Paleo-Depositional Environment and Thermal Alteration Index of Carboniferous, Permian and Triassic of A1-9 well, NW Libya

Authors: Mohamed Ali Alrabib

Abstract:

This paper presents a review of the palaeo-depositional environment and the hydrocarbon shows of this well. A poor to very poor oil show was identified in the interval from the Dembaba Formation to the Hassaona Formation, and the palaeoenvironmental analysis indicates that neither a particularly good reservoir nor a source rock has developed in the area. Recent palaeoenvironmental work suggests that the sedimentary succession in this area comprises the Upper Paleozoic rocks of the Carboniferous and Permian and the Mesozoic (Triassic) sedimentary sequences. No early Paleozoic rocks have been found in this area; these rocks were eroded during Late Carboniferous and Early Permian time. There is evidence that a major marine transgression occurred during latest Permian and earliest Triassic time. From depths of 5930-5940 feet to 10800-10810 feet, the TAI of the Al Guidr, Bir Al Jaja Al Uotia, and Hebilia units varies between 3+ and 4- (mature to dry gas); this interval incorporates the upper part of the Dembaba Formation. From a depth of 10800-10810 feet to the total sediment depth (11944 feet by log), an interval which incorporates the rest of the Dembaba Formation, the underlying equivalents of the Assedjefar and Mrar Formations, and the underlying indeterminate unit (Hassouna Formation), the TAI varies between 4 and 5 (dry gas; black and deformed).

Keywords: paleoenvironments, thermal alteration index, Carboniferous, Libya

Procedia PDF Downloads 397
951 Effect of Drought Stress on Yield and Yield Components of Maize Cultivars in Golestan Province

Authors: Mojtaba Esmaeilzad Limoudehi, Ebrahim Amiri

Abstract:

Water scarcity is now one of the leading challenges for human societies. In this regard, recognizing the relationships between soil, water, plant growth, and plant response to stress is very significant. In this paper, considering the importance of drought stress and the role of choosing suitable cultivars in resistance against drought, a split-plot experiment using early-, intermediate-, and late-maturing cultivars was carried out at Katul field, Golestan province, during the two cultivation years 2015 and 2016. The main-plot factor was the irrigation interval at four levels: 7 days, 14 days, 21 days, and 28 days. The subplot factor comprised six maize cultivars (two early-maturing, two medium-maturing, and two late-maturing). The analysis of variance revealed that the irrigation interval and cultivar treatments had significant effects on the number of grains per ear, the number of rows per ear, the number of grains per row, the 1000-grain weight, grain yield, and biomass yield. The interaction of these two factors on the mentioned attributes was also significant. The best grain yield, equal to 12301 kg/ha, was achieved with the 7-day irrigation interval and the late-maturing maize cultivars.

Keywords: corn, growth period, optimization, stress

Procedia PDF Downloads 114
950 The Application of Insects in Forensic Investigations

Authors: Shirin Jalili, Hadi Shirzad, Samaneh Nabavi, Somayeh Khanjani

Abstract:

Forensic entomology is the science of studying and analysing insect evidence to aid criminal investigation. Knowledge of the distribution, biology, ecology, and behavior of the insects found at a crime scene can provide information about when, where, and how the crime was committed. The discipline has many applications in criminal investigations; its main use is the estimation of the minimum time since death in suspicious deaths. The close association between insects and corpses, and the use of insects in criminal investigations, is the subject of forensic entomology. Because insects colonize a decomposing corpse and lay eggs on it from the initial stages, forensic scientists can estimate the post-mortem interval by studying the insect population and the developing larval stages. In addition, toxicological and molecular studies of these insects can reveal the cause of death or even the identity of a victim, and can be used to detect drugs and poisons and to determine the incident location. Recent techniques make it possible for experts to gather robust entomological evidence, which can provide vital information about death, corpse movement or burial, submersion interval, time of decapitation, identification of specific sites of trauma, post-mortem artefacts on the body, use of drugs, linking a suspect to the scene of a crime, sexual molestation, and the identification of suspects.

Keywords: Forensic entomology, post mortem interval, insects, larvae

Procedia PDF Downloads 480
949 Web Proxy Detection via Bipartite Graphs and One-Mode Projections

Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo

Abstract:

With the Internet becoming the dominant channel for business and life, many IPs are increasingly masked using web proxies for illegal purposes such as propagating malware, impersonating phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become an increasingly challenging task due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavior similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of the bipartite graphs to discover the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users derived from the one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. The web proxy URL may vary from time to time, but the users' inherent interests do not. Based on this intuition, using our private tools implemented with WebDriver, we examine whether the top URLs visited by the web proxy users are themselves web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyse but also provide an effective way to detect web proxies, especially unknown ones.
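A minimal sketch of the bipartite-projection-plus-spectral-clustering pipeline, using networkx and scikit-learn on made-up hosts and URLs, is shown below.

```python
# Sketch: host-URL bipartite graph -> one-mode projection onto hosts ->
# spectral clustering of the resulting similarity matrix.
import networkx as nx
import numpy as np
from networkx.algorithms import bipartite
from sklearn.cluster import SpectralClustering

hosts = ["h1", "h2", "h3", "h4"]
urls = ["u1", "u2", "u3"]
edges = [("h1", "u1"), ("h2", "u1"), ("h1", "u2"),
         ("h2", "u2"), ("h3", "u3"), ("h4", "u3")]

B = nx.Graph()
B.add_nodes_from(hosts, bipartite=0)
B.add_nodes_from(urls, bipartite=1)
B.add_edges_from(edges)

# One-mode projection: hosts linked by shared URLs, weighted by co-visits.
P = bipartite.weighted_projected_graph(B, hosts)
W = nx.to_numpy_array(P, nodelist=hosts)     # behaviour-similarity matrix

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W + np.eye(len(hosts)))
print(dict(zip(hosts, labels)))              # behaviour clusters of hosts
```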

Keywords: bipartite graph, one-mode projection, clustering, web proxy detection

Procedia PDF Downloads 223
948 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables

Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner

Abstract:

High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks, i.e., networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior of the propagating electric signal at the vertices. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT); to achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity for communication across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software-defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model with both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions. The results achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
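The RMT comparison can be illustrated numerically: the sketch below draws a GOE random matrix, crudely unfolds its bulk spectrum, and compares the nearest-neighbour spacings with the Wigner surmise p(s) = (π/2) s exp(-π s²/4); it is a generic demonstration, not the paper's cable-network data.

```python
# Sketch: nearest-neighbour level spacings of a GOE matrix vs. the Wigner
# surmise. Unfolding here is a crude rescale to unit mean spacing.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
M = rng.normal(size=(n, n))
H = (M + M.T) / np.sqrt(2 * n)           # a GOE ensemble member
ev = np.sort(np.linalg.eigvalsh(H))

bulk = ev[n // 4: 3 * n // 4]            # stay in the spectral bulk
s = np.diff(bulk)
s = s / s.mean()                         # crude unfolding

hist, edges = np.histogram(s, bins=np.linspace(0, 3, 31), density=True)
mid = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi / 2) * mid * np.exp(-np.pi * mid**2 / 4)
print("max |empirical - Wigner surmise|:", float(np.abs(hist - wigner).max()))
```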

Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line

Procedia PDF Downloads 133
947 Multi-Criteria Test Case Selection Using Ant Colony Optimization

Authors: Niranjana Devi N.

Abstract:

Test case selection chooses the subset of fit test cases and removes the unfit, ambiguous, redundant, and unnecessary test cases, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding the best subset of test cases from a pool of candidates so that all testing objectives are met concurrently. Most research, however, has evaluated the fitness of test cases on a single parameter, fault-detection capability, and optimized the test cases against a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters is obtained using an Interval Type-2 Fuzzy Rough Set. Test case selection is done in two stages. The first stage is a fuzzy entropy-based filtration technique, used for estimating and reducing the ambiguity in test case fitness evaluation and selection. The second stage is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using the coverage parameters Precision, Recall, F-Measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that this approach avoids considerable computational effort.
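A compact, illustrative sketch of the second-stage ACO wrapper is given below; the fitness function is a stand-in for the paper's multi-criteria score, and all constants are arbitrary.

```python
# Sketch: ants build candidate test-case subsets guided by pheromone; trails
# evaporate and are reinforced by the best subset's fitness.
import numpy as np

rng = np.random.default_rng(3)
n_cases, n_ants, n_iter = 20, 10, 30
coverage = rng.random(n_cases)               # illustrative per-test utility
cost = rng.uniform(0.1, 1.0, n_cases)
tau = np.ones(n_cases)                       # pheromone trails

def fitness(subset):
    # reward covered utility, penalize execution cost (stand-in criteria)
    return coverage[subset].sum() - 0.5 * cost[subset].sum()

best, best_fit = None, -np.inf
for _ in range(n_iter):
    for _ in range(n_ants):
        prob = tau / tau.sum()
        subset = np.flatnonzero(rng.random(n_cases) < prob * n_cases * 0.4)
        if subset.size == 0:
            continue
        f = fitness(subset)
        if f > best_fit:
            best, best_fit = subset, f
    tau *= 0.9                               # evaporation
    if best is not None:
        tau[best] += 0.5                     # reinforce the best subset

print("selected test cases:", best, "fitness:", round(best_fit, 3))
```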

Keywords: ant colony optimization, fuzzy entropy, interval type-2 fuzzy rough set, test case selection

Procedia PDF Downloads 633
946 Determinants of Diarrhoea Prevalence Variations in Mountainous Informal Settlements of Kigali City, Rwanda

Authors: Dieudonne Uwizeye

Abstract:

Introduction: Diarrhoea is one of the major causes of morbidity and mortality among communities living in urban informal settlements of developing countries. It is assumed that a mountainous environment introduces variations in this burden among residents of the same settlements. Design and Objective: A cross-sectional study was done in Kigali to explore the effect of mountainous informal settlements on variations in diarrhoea risk. Data were collected from 1,152 households through a household survey and transect walks to observe the status of sanitation. The outcome variable was the incidence of diarrhoea among household members of any age. The study used the most knowledgeable person in the household as the main respondent; mostly this was the woman of the house, as she was most likely to know the health status of every household member given her various roles as mother, wife, and head of the household, among others. The analysis used cross-tabulation and logistic regression. Results: The results suggest that diarrhoea risk varies depending on home location in the settlements and increased as the distance from the road increased: the adjusted odds ratios were 2.97 (95% confidence interval 1.35-6.55) and 3.50 (95% confidence interval 1.61-7.60) at levels two and three, respectively, compared with level one. The status of sanitation within and around homes was also significantly associated with an increase in diarrhoea. Equally, stable households were less likely to have diarrhoea, with an adjusted odds ratio of 0.45 (95% confidence interval 0.25-0.81). However, the study did not find evidence of a significant association between diarrhoea risk and household socioeconomic status in the multivariable model; it is assumed that environmental factors in mountainous settings prevailed. Households using the available public water sources were more likely to experience diarrhoea. Recommendation: The study recommends the provision and extension of infrastructure for improved water, drainage, sanitation, and waste management facilities. Equally, studies should be done to identify the level of contamination and the potential origin of contaminants for water sources in the valleys to adequately control the risks of diarrhoea in mountainous urban settings.
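The adjusted odds ratios quoted above come from a logistic model; a hedged sketch of deriving ORs and 95% CIs with statsmodels on simulated covariates follows.

```python
# Sketch: fit a logistic regression and report odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1152
df = pd.DataFrame({
    "dist_level2": rng.integers(0, 2, n),    # home farther from the road
    "poor_sanitation": rng.integers(0, 2, n),
})
logit = -1.5 + 1.0 * df["dist_level2"] + 0.8 * df["poor_sanitation"]
df["diarrhoea"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["dist_level2", "poor_sanitation"]])
res = sm.Logit(df["diarrhoea"], X).fit(disp=0)

odds_ratios = np.exp(res.params)
ci = np.exp(res.conf_int())                  # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```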

Keywords: urbanisation, diarrhoea risk, mountainous environment, urban informal settlements in Rwanda

Procedia PDF Downloads 143
945 Tool for Analysing the Sensitivity and Tolerance of Mechatronic Systems in Matlab GUI

Authors: Bohuslava Juhasova, Martin Juhas, Renata Masarova, Zuzana Sutova

Abstract:

The article presents a Matlab GUI tool designed to analyse the sensitivity and tolerance of a mechatronic system. In the analysed mechatronic system, torque is transferred from the drive to the load through a coupling containing flexible elements. Different methods of control system design are used. The classic form of feedback control is proposed using the Naslin method, the modulus optimum criterion, and the inverse dynamics method. The cascade form of control is proposed based on a combination of the modulus optimum criterion and the symmetric optimum criterion. Sensitivity is analysed on the basis of the absolute and relative sensitivity of the system function to a change in the value of a chosen parameter of the mechatronic system, as well as of the control subsystem. Tolerance is analysed by determining the range of allowed relative changes of selected system parameters within the region of system stability. The tool allows analysis of the influence of torsional stiffness, torsional damping, the inertia moments of the motor and the load, and the controller parameters. Sensitivity and tolerance are monitored in terms of the impact of a parameter change on the response, in the form of the system step response and the system frequency-response logarithmic characteristics. The Symbolic Math Toolbox was used to express the final form of the analysed system functions. Sensitivity and tolerance are graphically represented as a 2D graph of the sensitivity or tolerance of the system function and as 3D/2D static/interactive graphs of the step/frequency responses.
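The kind of sensitivity check the tool automates can be sketched outside Matlab as well; the Python example below perturbs the stiffness parameter of a generic second-order transfer function (an illustrative stand-in, not the authors' model) and estimates the relative sensitivity of the step response.

```python
# Sketch: relative sensitivity of a step response to a +10 % stiffness change.
import numpy as np
from scipy import signal

def step_response(k):
    # illustrative load-side transfer function with torsional stiffness k
    num = [k]
    den = [1.0, 2.0, k]            # J*s^2 + d*s + k, with J = 1, d = 2
    t, y = signal.step(signal.TransferFunction(num, den))
    return t, y

t0, y0 = step_response(100.0)
t1, y1 = step_response(110.0)      # +10 % change of the stiffness parameter

# relative sensitivity estimate: (dy/y) / (dk/k) at matching time points
y1i = np.interp(t0, t1, y1)
rel_sens = ((y1i - y0) / np.maximum(np.abs(y0), 1e-9)) / 0.10
print("peak relative sensitivity of the step response:", round(rel_sens.max(), 2))
```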

Keywords: mechatronic systems, Matlab GUI, sensitivity, tolerance

Procedia PDF Downloads 409
944 Fuzzy Time Series- Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets

Authors: Selin Guney, Andres Riquelme

Abstract:

One of the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms' economic decision making, such as planning business operations and marketing decisions. Governments are also beneficiaries and suppliers of agricultural price forecasts. They use this information to establish a proper agricultural policy; hence the forecasts affect social welfare, and systematic errors in them could lead to a misallocation of scarce resources. Various empirical approaches with different methodologies have been applied to forecast commodity prices. The most commonly used approaches depend on classical time series models that assume the values of the response variables are precise, which is quite often not true in reality. Recently, this literature has largely evolved toward fuzzy time series models, which relax classical time series assumptions such as stationarity and large sample size requirements. Moreover, the fuzzy modeling approach allows decision making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes into account both repeated and nonconsecutive transitions. Also, the determination of the length of the intervals is crucial to the accuracy of the forecasts. The problem of arbitrarily determining the interval length is overcome by proposing a methodology that determines the proper interval length from the distribution or mean of the first differences of the series to improve forecast accuracy. The specific purpose of this paper is to propose and investigate the potential of a new forecasting model that integrates this non-arbitrary interval-length methodology with a Fuzzy Time Series-Markov Chain model. Moreover, the forecasting performance of the proposed integrated model is compared with different univariate time series models, and the superiority of the proposed method over competing methods, in terms of both modelling and forecasting, is demonstrated on the basis of forecast evaluation criteria. The application is to daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield, and Roaring River for corn, and Fayetteville, Cofield, and Greenville City for soybeans. One main conclusion from this paper is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of selection criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and non-arbitrary determination of the interval length for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling to commodity price forecasts.
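A condensed sketch of the interval-plus-Markov scheme is shown below: the interval length is taken as the mean absolute first difference of the series, fuzzified states feed a first-order transition matrix, and the forecast is the expected interval midpoint; the price series is synthetic, not the North Carolina data.

```python
# Sketch: non-arbitrary interval length + Markov transitions over intervals.
import numpy as np

rng = np.random.default_rng(5)
prices = 4.0 + np.cumsum(rng.normal(0, 0.05, 200))   # toy corn price series

# Interval length from the mean of the absolute first differences.
length = np.abs(np.diff(prices)).mean()
edges = np.arange(prices.min(), prices.max() + length, length)
states = np.clip(np.digitize(prices, edges) - 1, 0, len(edges) - 2)

# First-order Markov transition matrix over fuzzy intervals.
k = len(edges) - 1
T = np.zeros((k, k))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)  # guard empty rows

midpoints = (edges[:-1] + edges[1:]) / 2
forecast = T[states[-1]] @ midpoints         # expected next-step midpoint
print(f"interval length {length:.4f}, next-step forecast {forecast:.4f}")
```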

Keywords: commodity, forecast, fuzzy, Markov

Procedia PDF Downloads 198
943 Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships, PSO Technique, and Automatic Clustering Algorithm

Authors: A. K. M. Kamrul Islam, Abdelhamid Bouchachia, Suang Cang, Hongnian Yu

Abstract:

Forecasting models have a great impact on prediction and will continue to do so. Although many forecasting models have been studied in recent years, most researchers focus on fuzzy time series-based methods to solve forecasting problems. The accuracy of a forecasting model depends largely on two factors: the length of the intervals in the universe of discourse and the content of the forecast rules. Moreover, a hybrid forecasting method can be a more effective and efficient way to improve forecasts than an individual forecasting model. Various hybrid forecasting models have combined fuzzy time series with evolutionary algorithms, but their performance has not been fully satisfactory. In this paper, we propose a hybrid forecasting model that combines first-order as well as high-order fuzzy time series with particle swarm optimization to improve forecast accuracy. The proposed method uses the historical enrollments of the University of Alabama as the dataset in the forecasting process. First, an automatic clustering algorithm is used to calculate appropriate intervals for the historical enrollments. Then, particle swarm optimization and fuzzy time series are combined, which shows better forecasting accuracy than existing forecasting models.
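A bare-bones particle swarm sketch of the interval-tuning step is given below; particles encode interior interval boundaries scored by a naive forecast error on a toy series, and all constants are illustrative.

```python
# Sketch: PSO over interior interval boundaries for a fuzzy-time-series setup.
import numpy as np

rng = np.random.default_rng(6)
series = 15000 + np.cumsum(rng.normal(0, 120, 22))   # toy enrollment counts

def forecast_error(bounds):
    b = np.sort(np.clip(bounds, series.min() + 1e-6, series.max() - 1e-6))
    edges = np.concatenate(([series.min()], b, [series.max()]))
    states = np.clip(np.searchsorted(edges, series) - 1, 0, len(edges) - 2)
    mids = (edges[:-1] + edges[1:]) / 2
    pred = mids[states[:-1]]           # naive rule: next value stays in interval
    return float(np.mean(np.abs(pred - series[1:])))

n_particles, dim = 20, 5               # five interior interval boundaries
pos = rng.uniform(series.min(), series.max(), (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([forecast_error(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()

for _ in range(50):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([forecast_error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    g = pbest[pbest_f.argmin()].copy()

print("optimized interior boundaries:", np.sort(g).round(1))
```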

Keywords: fuzzy time series (fts), particle swarm optimization, clustering algorithm, hybrid forecasting model

Procedia PDF Downloads 222
942 A Framework for Secure Information Flow Analysis in Web Applications

Authors: Ralph Adaimy, Wassim El-Hajj, Ghassen Ben Brahim, Hazem Hajj, Haidar Safa

Abstract:

Huge amounts of data and personal information are sent to and retrieved from web applications on a daily basis. Every application has its own confidentiality and integrity policies. Violating these policies can have a broad negative impact on the involved company’s financial status, while enforcing them is very hard even for developers with a good security background. In this paper, we propose a framework that enforces security-by-construction in web applications. Minimal developer effort is required, in the sense that the developer only needs to annotate database attributes with a security class. The web application code is then converted into an intermediary representation, called the Extended Program Dependence Graph (EPDG). Using the EPDG, the provided annotations are propagated to the application code and run against generic security enforcement rules that were carefully designed to detect insecure information flows as early as they occur. As a result, any violation of the data’s confidentiality or integrity policies is reported. As a proof of concept, two PHP web applications, Hotel Reservation and Auction, were used for testing and validation. The proposed system was able to catch all the existing insecure information flows at their source. Moreover, to highlight the simplicity of the suggested approach compared with existing ones, two professional web developers assessed the annotation tasks needed in the presented case studies and gave very positive feedback on the simplicity of the annotation task.
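The core check that a dependence graph enables can be sketched as label propagation: the toy Python example below flags any flow from a node annotated confidential into a public sink; the node names and two-level lattice are illustrative, not the EPDG implementation.

```python
# Sketch: propagate security labels along dependence edges (BFS) and flag
# flows from confidential sources into public sinks.
from collections import deque

edges = {
    "db.salary": ["tmp"],                    # annotated confidential attribute
    "tmp": ["page_output"],
    "db.username": ["page_output"],
}
labels = {"db.salary": "confidential", "db.username": "public"}
sinks = {"page_output": "public"}

def insecure_flows(edges, labels, sinks):
    flows = []
    for src, lbl in labels.items():
        if lbl != "confidential":
            continue
        seen, queue = {src}, deque([src])
        while queue:                          # BFS along dependence edges
            node = queue.popleft()
            for nxt in edges.get(node, []):
                if nxt in sinks and sinks[nxt] == "public":
                    flows.append((src, nxt))
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return flows

print(insecure_flows(edges, labels, sinks))  # [('db.salary', 'page_output')]
```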

Keywords: web applications security, secure information flow, program dependence graph, database annotation

Procedia PDF Downloads 444
941 Seismic Microzonation Analysis for Damage Mapping of the 2006 Yogyakarta Earthquake, Indonesia

Authors: Fathul Mubin, Budi E. Nurcahya

Abstract:

In 2006, a large earthquake occurred in the province of Yogyakarta and caused considerable damage. This is the basis of the need to investigate the seismic vulnerability index in and around the earthquake zone; such research is called microzonation of earthquake hazard. The research was conducted at the site and surroundings of Prambanan Temple, including homes and civil buildings, because the 2006 earthquake damaged the temples of the Prambanan temple complex and its surroundings. Data were collected for 60 minutes at each of 165 measurement points, spaced 1000 meters apart, using a three-component seismograph. The time-series recordings were analyzed using the spectral ratio method known as the Horizontal to Vertical Spectral Ratio (HVSR). The results of this analysis, the dominant frequency (Fg) and the maximum amplification factor (Ag), are used to obtain the seismic vulnerability index. The results showed a dominant frequency range of 0.5 to 30 Hz, an amplification factor in the interval 0.5 to 9, and a seismic vulnerability index in the interval 0.1 to 50. The distribution maps of the seismic vulnerability index appeared consistent with the observed building damage. For further research, surveys to the east (Klaten) and south (Bantul, DIY) are needed to determine full distribution maps of the seismic vulnerability index.
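A schematic numpy sketch of the HVSR workflow follows; the vulnerability index is formed here as Kg = Ag²/Fg (Nakamura's K value, assumed to be the index used), and the signals are synthetic.

```python
# Sketch: H/V spectral ratio from synthetic three-component recordings,
# then Fg, Ag, and Kg = Ag^2 / Fg.
import numpy as np

fs = 100.0
t = np.arange(0, 60, 1 / fs)                 # 60 s record at 100 Hz
rng = np.random.default_rng(7)
resonance = np.sin(2 * np.pi * 2.5 * t)      # 2.5 Hz site resonance
ns = resonance + 0.3 * rng.normal(size=t.size)   # north-south component
ew = resonance + 0.3 * rng.normal(size=t.size)   # east-west component
ud = 0.3 * rng.normal(size=t.size)               # vertical component

freqs = np.fft.rfftfreq(t.size, 1 / fs)
H = (np.abs(np.fft.rfft(ns)) + np.abs(np.fft.rfft(ew))) / 2
V = np.abs(np.fft.rfft(ud)) + 1e-9

band = (freqs >= 0.5) & (freqs <= 30)        # the band analysed in the study
hvsr = H[band] / V[band]
Fg = freqs[band][hvsr.argmax()]
Ag = hvsr.max()
print(f"Fg = {Fg:.2f} Hz, Ag = {Ag:.1f}, Kg = {Ag**2 / Fg:.1f}")
```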

Keywords: amplification factor, dominant frequency, microzonation analysis, seismic vulnerability index

Procedia PDF Downloads 174
940 Effect of Tissue Preservation Chemicals on Decomposition in Different Soil Types

Authors: Onyekachi Ogbonnaya Iroanya, Taiye Abdullahi Gegele, Frank Tochukwu Egwuatu

Abstract:

Introduction: Forensic taphonomy is a multifaceted area that incorporates decomposition and the chemical and biological exposure of a cadaver into post-mortem event chronology and reconstruction to predict the post-mortem interval (PMI). The aim of this study was to evaluate the integrity of DNA extracted from the remains of embalmed, decomposed Sus domesticus tissues buried in different soil types. Method: A total of 12 limbs of Sus domesticus weighing between 0.7 and 1.4 kg were used. Each of the samples across the groups was treated with 10% formaldehyde, absolute methanol, or 50% pine oil for 24 hours before burial, except the control samples, which were buried immediately. All samples were buried in shallow simulated clay, sandy, and loamy soil graves for 12 months. The DNA of each sample was extracted and quantified with a Nanodrop spectrophotometer (Jenway 6305). The rate of decomposition was examined through modified qualitative decomposition analysis. Extracted DNA was amplified by PCR, and bands were visualized via gel electrophoresis. A biochemical enzyme assay was done for each burial grave soil. Result: The limbs in all burial groups lost weight over the burial period. There was a significant increase in the soil urease level for the samples preserved in formaldehyde across the three soil-type groups (p≤0.01). Also, the control grave soils recorded significantly higher alkaline phosphatase, dehydrogenase, and calcium carbonate values compared with the experimental grave soils (p≤0.01). The experimental samples showed a significant decrease in DNA concentration and purity compared with the control groups (p≤0.01). The soil biochemical analysis showed that the embalming treatments altered the relationship between organic matter decomposition and soil biochemical properties, as observed in the fluctuations recorded in the soil biochemical parameters. The PCR-amplified DNA showed no bands on the gel electrophoresis plates. Conclusion: In criminal investigations, factors such as the burial grave soil, the grave soil's biochemical properties, and antemortem exposure to embalming chemicals should be considered in post-mortem interval (PMI) determination.

Keywords: forensic taphonomy, post-mortem interval (PMI), embalmment, decomposition, grave soil

Procedia PDF Downloads 137
939 Process Optimization for Albanian Crude Oil Characterization

Authors: Xhaklina Cani, Ilirjan Malollari, Ismet Beqiraj, Lorina Lici

Abstract:

Oil characterization is an essential step in the design, simulation, and optimization of refining facilities. To achieve optimal crude selection and processing decisions, a refiner must have exact information about crude oil quality. This includes the crude oil TBP curve, the main data required for the correct operation of refinery crude oil atmospheric distillation plants. Crude oil is typically characterized based on a distillation assay. This procedure is reasonably well defined and is based on representing the mixture of actual components that boil within a boiling point interval by hypothetical components that boil at the average boiling temperature of the interval. The crude oil assay typically includes TBP distillation according to ASTM D-2892, which can characterize the part of the oil that boils at up to a 400 °C atmospheric-equivalent boiling point. To model the yield curves obtained by physical distillation, it is necessary to compare the differences between the modelled and the experimental data. Most commercial simulators use different numbers of components and pseudo-components to represent crude oil. Laboratory tests include distillations, vapor pressures, flash points, pour points, cetane numbers, octane numbers, densities, and viscosities. The aim of the study is to draw true boiling point curves for different crude oil resources in Albania and to compare the differences between the modelled and the experimental data for optimal characterization of the crude oil.
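The pseudo-component idea can be shown in a few lines: slice an experimental TBP curve into boiling ranges and represent each cut by a hypothetical component boiling at the interval's average temperature; the TBP points below are illustrative.

```python
# Sketch: 10 %-volume cuts of a TBP curve mapped to pseudo-component
# normal boiling points (NBP) by averaging over each interval.
import numpy as np

vol_pct = np.array([0, 10, 20, 30, 40, 50, 60, 70])        # % distilled
tbp_c = np.array([35, 95, 150, 205, 255, 300, 350, 400])   # TBP, deg C

cut_edges = np.arange(0, 71, 10)                           # 10 % cuts
for lo, hi in zip(cut_edges[:-1], cut_edges[1:]):
    grid = np.linspace(lo, hi, 11)
    t_avg = np.interp(grid, vol_pct, tbp_c).mean()         # mean boiling point
    print(f"pseudo-component {lo:2d}-{hi:2d} %: NBP ~ {t_avg:5.1f} C")
```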

Keywords: TBP distillation curves, crude oil, optimization, simulation

Procedia PDF Downloads 283
938 Gaussian Probability Density for Forest Fire Detection Using Satellite Imagery

Authors: S. Benkraouda, Z. Djelloul-Khedda, B. Yagoubi

Abstract:

We present a method for the early detection of forest fires from a thermal infrared satellite image, using the image matrix of the probability of belonging. The principle of the method is to compare a theoretical mathematical model with an experimental model. We treat each line of the image matrix as a realization of a non-stationary random process. Since the distribution of pixels in the satellite image is statistically dependent, we divide these lines into small stationary and ergodic intervals in order to characterize the image by an adequate mathematical model. A standard deviation was chosen to generate the random variables, so that each interval behaves naturally like white Gaussian noise. The latter was selected as the mathematical model representing the large majority of pixels, which can be considered the image background. After a few preprocessing steps, the parameters of the theoretical Gaussian model were extracted from the modeled image; these parameters are used to calculate the probability that each interval of the modeled image belongs to the theoretical Gaussian model. High-intensity pixels are regarded as elements foreign to this model, so they have a low probability, whereas pixels belonging to the image background have a high probability. Finally, we present the inverse of the matrix of interval probabilities for better fire detection.
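A compact numpy sketch of the idea follows: fit a background Gaussian to an image line, score each stationary interval by its mean log-likelihood under that model, and flag low-probability intervals; the data and interval length are illustrative.

```python
# Sketch: per-interval probability of belonging to a background Gaussian.
import numpy as np

rng = np.random.default_rng(8)
line = rng.normal(loc=100, scale=5, size=512)   # one image line: background
line[300:316] += 60                             # hot anomaly (potential fire)

mu, sigma = np.median(line), line.std()         # crude background Gaussian fit

def interval_log_likelihood(seg, mu, sigma):
    # mean log-density of the segment under the background Gaussian model
    z = (seg - mu) / sigma
    return np.mean(-0.5 * z**2 - np.log(sigma * np.sqrt(2 * np.pi)))

width = 16
scores = np.array([interval_log_likelihood(line[i:i + width], mu, sigma)
                   for i in range(0, line.size, width)])
print("most anomalous interval starts at pixel",
      int(scores.argmin() * width))             # around the injected segment
```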

Keywords: forest fire, forest fire detection, satellite image, normal distribution, theoretical gaussian model, thermal infrared matrix image

Procedia PDF Downloads 115
937 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and mean squared error values compared with existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
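For reference, the classical Nadaraya-Watson estimator that the proposed method improves upon is a locally weighted mean; a minimal numpy sketch on simulated survey data follows.

```python
# Sketch: Gaussian-kernel Nadaraya-Watson regression to impute a missing
# survey value from its observed auxiliary value.
import numpy as np

rng = np.random.default_rng(9)
x = rng.uniform(0, 10, 200)                  # auxiliary variable
y = 2 + 0.5 * x + rng.normal(0, 0.5, 200)    # survey variable (observed part)

def nadaraya_watson(x0, x, y, h=0.8):
    """Gaussian-kernel weighted mean of y at the point x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

# Impute a non-respondent's y from its observed auxiliary value.
print(round(nadaraya_watson(5.0, x, y), 3))  # close to 2 + 0.5 * 5 = 4.5
```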

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 110
936 Control of Biofilm Formation and Inorganic Particle Accumulation on Reverse Osmosis Membrane by Hypochlorite Washing

Authors: Masaki Ohno, Cervinia Manalo, Tetsuji Okuda, Satoshi Nakai, Wataru Nishijima

Abstract:

Reverse osmosis (RO) membranes have been widely used in desalination to purify water for drinking and other purposes. Although at present most RO membranes have no resistance to chlorine, chlorine-resistant membranes are being developed; direct chlorine treatment or chlorine washing will therefore be an option for preventing biofouling on chlorine-resistant membranes. Furthermore, if particle accumulation can be controlled by chlorine washing, expensive pretreatment for particle removal can be eliminated or simplified. The objective of this study was to determine the effective hypochlorite washing conditions required to control biofilm formation and inorganic particle accumulation on an RO membrane in a continuous flow channel with an RO membrane and spacer. In this study, direct chlorine washing was done by soaking fouled RO membranes in hypochlorite solution, and fluorescence intensity was used to quantify the biofilm on the membrane surface. After 48 h of soaking the membranes in high-fouling-potential waters, the fluorescence intensity decreased from 470 to 0 under the following washing conditions: 10 mg/L chlorine concentration, a washing interval of 2 times/d, and a 30 min washing time. The chlorine concentration required to control biofilm formation decreased as the washing interval (1-4 times/d) or the washing time (1-30 min) increased, over the tested concentration range of 0.5-10 mg/L. For the sample solutions used in the study, a 10 mg/L chlorine concentration with a washing interval of 2 times/d and a 5 min washing time was required for biofilm control. The optimum chlorine washing conditions obtained from the soaking experiments proved applicable to controlling biofilm formation in continuous flow experiments as well. Moreover, chlorine washing employed to control biofilm in the presence of suspended particles resulted in lower amounts of organic (0.03 mg/cm2) and inorganic (0.14 mg/cm2) deposits on the membrane than were found for sample water without chlorine washing (0.14 mg/cm2 and 0.33 mg/cm2, respectively). The amount of biofilm formed was 79% controlled by continuous washing with 10 mg/L of free chlorine, and the inorganic accumulation decreased by 58%, to levels similar to those for pure water with kaolin (0.17 mg/cm2) as feed water. These results confirmed the acceleration of particle accumulation due to biofilm formation and showed that inhibiting biofilm growth can almost completely prevent further particle accumulation. In addition, an effective hypochlorite washing condition that can control both biofilm formation and particle accumulation was achieved.

Keywords: reverse osmosis, washing condition optimization, hypochlorous acid, biofouling control

Procedia PDF Downloads 320
935 A Review of Paleo-Depositional Environment and Thermal Alteration Index of Carboniferous, Permian, and Triassic of A1-9 Well, NW Libya

Authors: M. A. Alrabib, Y. Sherif, A. K. Mohamed, E. A. Elfandi, E. I. Fandi

Abstract:

This paper presents a review of the palaeo-depositional environment and the hydrocarbon shows of this well. A poor to very poor oil show was identified in the interval from the Dembaba Formation to the Hassaona Formation, and the palaeo-environmental analysis indicates that neither a particularly good reservoir nor a source rock has developed in the area. Recent palaeo-environment work suggests that the sedimentary succession in this area comprises the Upper Paleozoic rocks of the Carboniferous and Permian and the Mesozoic (Triassic) sedimentary sequences. No early Paleozoic rocks have been found in this area; these rocks were eroded during Late Carboniferous and Early Permian time. There is evidence that a major marine transgression occurred during latest Permian and earliest Triassic time. From depths of 5930-5940 feet to 10800-10810 feet, the TAI of the Al Guidr, Bir Al Jaja Al Uotia, and Hebilia units varies between 3+ and 4- (mature to dry gas); this interval incorporates the upper part of the Dembaba Formation. From a depth of 10800-10810 feet to the total sediment depth (11944 feet by log), an interval which incorporates the rest of the Dembaba Formation, the underlying equivalents of the Assedjefar and Mrar Formations, and the underlying indeterminate unit (Hassouna Formation), the TAI varies between 4 and 5 (dry gas; black and deformed).

Keywords: paleoenvironment, thermal alteration index, northwestern Libya, hydrocarbon

Procedia PDF Downloads 448
934 Aquatic Therapy Improving Balance Function of Individuals with Stroke: A Systematic Review with Meta-Analysis

Authors: Wei-Po Wu, Wen-Yu Liu, Wei-Ting Lin, Hen-Yu Lien

Abstract:

Introduction: Improving balance function for individuals after stroke is a crucial target in physiotherapy. Aquatic therapy, which challenges an individual's postural control in an unstable fluid environment, may be beneficial in enhancing balance function. The purpose of this systematic review with meta-analysis was to validate the effects of aquatic therapy on balance function in individuals with stroke, in contrast to conventional physiotherapy. Method: Available studies were retrieved from three electronic databases: PubMed, Scopus, and Web of Science. The literature search was not limited by publication date. Included studies had to be randomized controlled trials (RCTs) containing at least one outcome measure of balance function. The PEDro scale was adopted to assess the quality of the included studies, while the 'Oxford Centre for Evidence-Based Medicine 2011 Levels of Evidence' was used to evaluate the level of evidence. After data extraction, studies with the same outcome measures were pooled for meta-analysis. Result: Ten studies with 282 participants were included in the analyses. The research quality of the studies ranged from fair to good (4 to 8 points), and their levels of evidence were graded as levels 2 and 3. Finally, scores on the Berg Balance Scale (BBS), the eyes-closed force-plate center-of-pressure velocity (anterior-posterior and medial-lateral axes), and the Timed Up and Go test were pooled and analyzed separately. The pooled results showed improvements in balance function (BBS mean difference (MD): 1.39 points; 95% confidence interval (CI): 0.05-2.29; p=0.002) (eyes-closed center-of-pressure velocity, anterior-posterior axis, MD: 1.39 mm/s; 95% CI: 0.93-1.86; p<0.001) (eyes-closed center-of-pressure velocity, medial-lateral axis, MD: 1.48 mm/s; 95% CI: 0.15-2.82; p=0.03) and mobility (MD: 0.9 seconds; 95% CI: 0.07-1.73; p=0.03) in individuals with stroke after aquatic therapy compared with conventional therapy. Although there were significant differences between the two treatment groups, the differences in improvement were relatively small. Conclusion: Aquatic therapy improved general balance function and mobility in individuals with stroke more than conventional physiotherapy.
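The pooling step behind the mean differences above is an inverse-variance fixed-effect combination; the sketch below uses illustrative per-study effects, not the review's data.

```python
# Sketch: fixed-effect (inverse-variance) pooling of per-study mean differences.
import numpy as np

md = np.array([1.8, 0.9, 1.5, 1.2])          # per-study mean differences
se = np.array([0.9, 0.6, 0.8, 0.7])          # per-study standard errors

w = 1 / se**2                                # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled MD = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```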

Keywords: aquatic therapy, balance function, meta-analysis, stroke, systematic review

Procedia PDF Downloads 167
933 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, information about the uncertain input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different uncertainty representation approaches result in different outputs; some approaches might estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. This study addresses Subproblem A of the challenge, the uncertainty characterization subproblem. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis; in the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that might be aleatory but for which sufficient data are not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter has aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainty, each being an unknown element of a known interval. This uncertainty is reducible. The study observes that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive, which is why this methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary; this is achieved in this study by using PBO.
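The propagation of these mixed uncertainty types is often sketched as a double loop; the schematic example below samples epistemic interval parameters in an outer loop and the aleatory variable in an inner loop, for a toy response model y = a + b.

```python
# Sketch: double-loop sampling for mixed aleatory/epistemic uncertainty,
# bounding a response percentile over the epistemic intervals.
import numpy as np

rng = np.random.default_rng(10)
lo95, hi95 = np.inf, -np.inf
for _ in range(200):                          # outer: epistemic parameters
    mu = rng.uniform(-0.5, 0.5)               # poorly known mean (interval)
    b = rng.uniform(1.0, 2.0)                 # fixed-but-unknown quantity
    a = rng.normal(mu, 1.0, size=5000)        # inner: aleatory variable
    y = a + b                                 # toy system response model
    p95 = np.percentile(y, 95)
    lo95, hi95 = min(lo95, p95), max(hi95, p95)

print(f"95th-percentile response bounds: [{lo95:.2f}, {hi95:.2f}]")
```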

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 214
932 Clinical Impact of Delirium and Antipsychotic Therapy: 10-Year Experience from a Referral Coronary Care Unit

Authors: Niyada Naksuk, Thoetchai Peeraphatdit, Vitaly Herasevich, Peter A. Brady, Suraj Kapa, Samuel J. Asirvatham

Abstract:

Introduction: Little is known about the safety of antipsychotic therapy for delirium in the coronary care unit (CCU). Our aim was to examine the effect of delirium and antipsychotic therapy among CCU patients. Methods: Pre-study Confusion Assessment Method-Intensive Care Unit (CAM-ICU) criteria were implemented in screening consecutive patients admitted to Mayo Clinic, Rochester, USA, from 2004 through 2013. Death status was prospectively ascertained. Results: Of 11,079 study patients, the incidence of delirium was 8.3% (n=925). Delirium was associated with an increased risk of in-hospital mortality (adjusted OR 1.49; 95% CI, 1.08-2.08; P=.02) and one-year mortality among patients who survived CCU admission (adjusted HR 1.46; 95% CI, 1.12-1.87; P=.005). A total of 792 doses of haloperidol (median 5 [IQR 3-10] mg/day) or quetiapine (median 25 [IQR 13-50] mg/day) were given to 244 patients with delirium. The clinical characteristics of patients with delirium who did and did not receive antipsychotic therapy were not different (baseline corrected QT [QTc] interval 460±61 ms vs. 457±58 ms, respectively; P=.57). In comparison to baseline, the mean QTc intervals after the first and third doses of the antipsychotics were not significantly prolonged in the haloperidol group (448±56, 458±57, and 450±50 ms, respectively) or the quetiapine group (459±54, 467±68, and 462±46 ms, respectively) (P>.05 for all). Additionally, in-hospital mortality (adjusted OR 0.67; 95% CI, 0.42-1.04; P=.07), ventricular arrhythmia (adjusted OR 0.87; 95% CI, 0.17-3.62; P=.85), and one-year mortality among the hospital survivors (adjusted HR 0.86; 95% CI, 0.62-1.17; P=.34) were not different in patients with delirium irrespective of whether or not they received antipsychotics. Conclusions: In patients admitted to the CCU, delirium was associated with an increase in both in-hospital and one-year mortality. Low doses of haloperidol and quetiapine appeared to be safe, without an increased risk of sudden cardiac death, in-hospital mortality, or one-year mortality in carefully monitored patients.

Keywords: arrhythmias, haloperidol, mortality, qtc interval, quetiapine

Procedia PDF Downloads 349
931 Maximum Induced Subgraph of an Augmented Cube

Authors: Meng-Jou Chien, Jheng-Cheng Chen, Chang-Hsiung Tsai

Abstract:

Let maxζG(m) denote the maximum number of edges in a subgraph of graph G induced by m nodes. The n-dimensional augmented cube, denoted as AQn, a variation of the hypercube, possesses some properties superior to those of the hypercube. We study the cases when G is the augmented cube AQn.
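A brute-force sketch of maxζG(m) on a small augmented cube follows, using the standard recursive construction in which two vertices are adjacent when they differ in a single bit or in a complete suffix of bits (assumed here as the definition); exhaustive search is feasible only for small n, which is what motivates analytic results like the paper's.

```python
# Sketch: exhaustive maxζ_G(m) for the augmented cube AQ3 (8 nodes, 20 edges).
from itertools import combinations

def augmented_cube_edges(n):
    edges = set()
    for u in range(2 ** n):
        for i in range(n):
            edges.add(frozenset((u, u ^ (1 << i))))              # hypercube edge
            edges.add(frozenset((u, u ^ ((1 << (i + 1)) - 1))))  # suffix complement
    return edges

def max_induced_edges(n, m):
    edges = augmented_cube_edges(n)
    best = 0
    for sub in combinations(range(2 ** n), m):
        s = set(sub)
        best = max(best, sum(1 for e in edges if e <= s))
    return best

for m in range(1, 9):
    print(f"max induced edges in AQ3 on {m} nodes: {max_induced_edges(3, m)}")
```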

Keywords: interconnection network, augmented cube, induced subgraph, bisection width

Procedia PDF Downloads 371
930 Meta-Analysis of Particulate Matter Production in Developing and Developed Countries

Authors: Hafiz Mehtab Gull Nasir

Abstract:

Industrial development and urbanization have significant impacts on air emissions, and their relationship diverges at different stages of economic progress. The industrial revolution further propelled these activities as principal paths to economic and social transformation; nevertheless, those paths also promoted environmental degradation. As a result, both developed and developing countries underwent fast-paced development, during which developed countries implemented legislation to control environmental pollution, whereas developing countries took advantage of technology with little regard for the environment. In this study, a meta-analysis of particulate matter (PM10 and PM2.5) production in urbanized cities of first-, second-, and third-world countries is performed to assess air quality. The cities were selected based on ranked set sampling principles. For PM10, third-world countries showed the highest PM level (95% confidence interval 0.74-1.86), followed by second-world countries, where the situation was better managed, while first-world countries showed the lowest pollution (95% confidence interval 0.12-0.2). Similarly, the highest level of PM2.5 was produced by third-world countries, followed by the second- and first-world countries; PM2.5 levels did not differ significantly between second- and third-world countries, whereas first-world countries again showed the minimum PM load. Overall, the study revealed that pollution levels differ among countries: developed countries have devised better strategies for pollution control, while developing countries care least for their environmental resources. It is suggested that although industrialization and urbanization interfere directly with natural systems, the degradation of nature appears to be driven more by societal factors than by technology itself.
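
As a pointer to how pooled estimates and confidence intervals like those above are computed, the sketch below implements a DerSimonian-Laird random-effects meta-analysis. Whether the study used this particular estimator is not stated, and the effect sizes and standard errors are made-up placeholders, not the study's data.

```python
# Minimal sketch of a DerSimonian-Laird random-effects pooled estimate.
import numpy as np

y = np.array([0.9, 1.2, 1.5, 0.8])     # city-level effect sizes (hypothetical)
se = np.array([0.2, 0.25, 0.3, 0.15])  # their standard errors (hypothetical)

w = 1 / se**2                           # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)        # fixed-effect pooled estimate
q = np.sum(w * (y - y_fe)**2)           # Cochran's Q heterogeneity statistic
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance

w_re = 1 / (se**2 + tau2)               # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re
print(f"Pooled estimate {y_re:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```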

Keywords: meta-analysis, particulate matter, developing countries, urbanization

Procedia PDF Downloads 316
929 Identification of Workplace Hazards of Underground Coal Mines

Authors: Madiha Ijaz, Muhammad Akram, Sima Mir

Abstract:

Underground mining of coal is carried out manually in Pakistan. Exposure to ergonomic hazards (musculoskeletal disorders) is very common among the coal cutters of these mines: cutting coal in narrow spaces poses a great threat to both the upper and lower limbs of these workers. To observe the prevalence of such hazards, a thorough study was conducted on 600 workers from 30 mines (20 workers per mine) located in two districts of the province of Punjab, Pakistan. Rapid Upper Limb Assessment (RULA) and Rapid Entire Body Assessment (REBA) sheets were used for the study, along with the standard Nordic musculoskeletal disorders questionnaire. SPSS 25 software was used for the analysis of data on upper- and lower-limb disorders, and regression models were run for upper and lower back pain. According to the results, work stage (drilling & blasting, coal cutting, timbering & supporting, etc.), work experience, and the number of repetitions performed per minute were significant predictors of discomfort in the upper and lower limbs (p-values 0.00, 0.004, and 0.009, respectively). Age had p-values of 0.00 for upper-limb and 0.012 for lower-limb disorders. The task of coal cutting was strongly associated with pain in the upper back (odds ratio 13.21, 95% confidence interval (CI) 14.0-21.64) and lower back (odds ratio 3.7, 95% CI 1.3-4.2). As scored on the RULA and REBA sheets, every work stage was ranked at 7, the highest level of risk. Workers were young (mean age 28.7 years), with a mean BMI of 28.1 kg/m².
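
The odds ratios quoted above come from exposure-outcome comparisons; for illustration, the sketch below computes an odds ratio with a Woolf (log-based) 95% confidence interval from a 2x2 table. The counts are made up for the example and are not the study's data.

```python
# Minimal sketch of an odds ratio with a Woolf 95% confidence interval
# from a 2x2 exposure-outcome table; all counts are hypothetical.
import math

a, b = 60, 40   # back pain / no pain among coal cutters (hypothetical)
c, d = 30, 70   # back pain / no pain among other work stages (hypothetical)

or_ = (a * d) / (b * c)                          # odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```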

Keywords: workplace hazards, ergonomic disorders, limb disorders, MSDs

Procedia PDF Downloads 59
928 Temperature-Dependent Post-Mortem Changes in Human Cardiac Troponin-T (cTnT): An Approach in Determining Postmortem Interval

Authors: Sachil Kumar, Anoop Kumar Verma, Wahid Ali, Uma Shankar Singh

Abstract:

Globally, approximately 55.3 million people die each year. In India, there were 95 lakh annual deaths in 2013, and the number of deaths resulting from homicides, suicides, and unintentional injuries in the same period was about 5.7 lakh. The ever-increasing crime rate necessitates the development of methods for determining time since death: an erroneous time-of-death window can lead investigators down the wrong path or possibly focus a case on an innocent suspect. In this regard, a study was carried out analyzing the temperature-dependent postmortem degradation of cardiac troponin-T (cTnT) in the myocardium as a marker for time since death. Cardiac tissue samples were collected from (n=6) medico-legal autopsies (in the Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India) after informed consent from the relatives, and postmortem degradation was studied by incubating the cardiac tissue at room temperature (20±2 °C), 12 °C, 25 °C, and 37 °C for different time periods (~5, 26, 50, 84, 132, 157, 180, 205, and 230 hours). The cases included were subjects of road traffic accidents (RTA) without any prior history of disease who died in the hospital, so their exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. The data show a distinct temporal profile corresponding to the degradation of cTnT by proteases found in cardiac muscle: the disappearance of intact cTnT and the appearance of lower-molecular-weight bands are easily observed. The Western blot data clearly showed the intact protein at 42 kDa, two major fragments (27 kDa, 10 kDa), two additional minor fragments (32 kDa), and the formation of low-molecular-weight fragments as time increased. At 12 °C, the intensity of the intact cTnT band decreased steadily as compared to RT, 25 °C, and 37 °C. Overall, both PMI and temperature had a statistically significant effect; the greatest amount of protein breakdown was observed within the first 38 h and at the highest temperature, 37 °C. The combination of high temperature (37 °C) and long postmortem interval (105.15 h) had the most drastic effect on the breakdown of cTnT. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, it shows a pseudo-first-order relationship when plotted against the log of the time postmortem; these plots show a good correlation coefficient of r = 0.95 (p = 0.003) for the regression of the human heart across the different temperature conditions. The data presented demonstrate that this technique can provide an extended time range during which the postmortem interval can be more accurately estimated.
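
The pseudo-first-order relationship described above, percent intact cTnT against the log of postmortem time, amounts to a simple linear regression. The sketch below shows such a fit and how it could be inverted to estimate PMI from a new measurement; the data points are hypothetical placeholders, not the study's densitometry values.

```python
# Minimal sketch of the pseudo-first-order analysis: regress percent
# intact cTnT on log(time postmortem), then invert the fit to estimate
# PMI. All data points are illustrative placeholders.
import numpy as np
from scipy import stats

t = np.array([5, 26, 50, 84, 132, 157, 180, 205, 230])       # hours postmortem
pct_intact = np.array([92, 75, 61, 48, 35, 30, 26, 22, 19])  # hypothetical %

res = stats.linregress(np.log(t), pct_intact)
print(f"slope={res.slope:.2f}, r={res.rvalue:.3f}, p={res.pvalue:.4f}")

# Estimating PMI for a new measurement by inverting the fitted line:
pct_new = 40.0
pmi_est = np.exp((pct_new - res.intercept) / res.slope)
print(f"Estimated PMI for {pct_new}% intact cTnT: {pmi_est:.1f} h")
```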

Keywords: degradation, postmortem interval, proteolysis, temperature, troponin

Procedia PDF Downloads 358