18347 Combining in vitro Protein Expression with AlphaLISA Technology to Study Protein-Protein Interaction
Authors: Shayli Varasteh Moradi, Wayne A. Johnston, Dejan Gagoski, Kirill Alexandrov
Abstract:
The demand for rapid and more efficient techniques to identify protein-protein interactions, particularly in therapeutics and diagnostics development, is growing. The method described here is a rapid in vitro protein-protein interaction analysis approach based on AlphaLISA technology combined with the Leishmania tarentolae cell-free protein production (LTE) system. Cell-free protein synthesis allows the rapid production of recombinant proteins in a multiplexed format. Among available in vitro expression systems, LTE offers several advantages over other eukaryotic cell-free systems: it is based on a fast-growing, fermentable organism that is inexpensive to cultivate and to prepare lysate from. The high integrity of proteins produced in this system and the ability to co-express multiple proteins make it a desirable method for screening protein interactions. Following the translation of protein pairs in the LTE system, the physical interaction between the proteins of interest is analysed by the AlphaLISA assay. The assay is performed using the unpurified in vitro translation reaction and can therefore be readily multiplexed. This approach can be used in various research applications such as epitope mapping, antigen-antibody analysis and protein interaction network mapping. The intra-viral protein interaction network of Zika virus was studied using the developed technique: the viral proteins were co-expressed pair-wise in LTE, and all possible interactions among the viral proteins were tested using AlphaLISA. The assay resulted in the identification of 54 intra-viral protein-protein interactions, of which 19 binary interactions were found to be novel. The presented technique provides a powerful tool for rapid analysis of protein-protein interactions with high sensitivity and throughput.
Keywords: AlphaLISA technology, cell-free protein expression, epitope mapping, Leishmania tarentolae, protein-protein interaction
Procedia PDF Downloads 241
18346 Economics Analysis of Chinese Social Media Platform Sina Weibo and E-Commerce Platform Taobao
Authors: Xingyue Yang
Abstract:
This study focused on Chinese social media stars and the relationship between their level of fame on the social media platform Sina Weibo and their sales revenue on the e-commerce platform Taobao/Tmall.com. This was viewed from the perspective of Adler’s superstardom theory and Rosen and MacDonald’s theories examining the economics of celebrities who build their audience using digital, rather than traditional, platforms. Theory and empirical research support the assertion that stars of traditional media achieve popular success due to a combination of talent and market concentration, as well as a range of other factors. These factors are also generally considered relevant to the popularisation of social media stars. However, success across digital media platforms also involves other variables - for example, upload strategies and cross-platform promotions - which often have no direct corollary in traditional media. These factors were the focus of our study, which investigated the relationship between popularity, promotional strategy and sales revenue for 15 social media stars who specialised in culinary topics on the Chinese social media platform Sina Weibo. In 2019, these food bloggers made a total of 2076 Sina Weibo posts, and these were compiled alongside calculations made to determine each food blogger’s sales revenue on the e-commerce platforms Taobao/Tmall. Quantitative analysis was then performed on this data, which determined that certain upload strategies on Weibo - such as upload time, posting format and length of video - have an important impact on sales revenue on Taobao/Tmall.com.
Keywords: attention economics, digital media, network effect, social media stars
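The quantitative step described above - relating upload-strategy variables to Taobao/Tmall revenue - can be sketched as an ordinary least-squares regression. The feature set, coefficients, and data below are entirely synthetic stand-ins, not the study's actual dataset:

```python
import numpy as np

# Hypothetical features per observation: upload hour (0-23), video length
# (seconds), and posts per week; the "true" coefficients are invented so the
# fit can be checked against a known answer.
rng = np.random.default_rng(0)
X = rng.uniform([0, 30, 1], [23, 600, 14], size=(60, 3))
true_beta = np.array([120.0, 15.0, 900.0])
y = 5_000 + X @ true_beta + rng.normal(0, 500, 60)   # revenue with noise

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
```

The fitted `beta` then quantifies how each upload-strategy variable moves revenue, which is the kind of relationship the study reports for upload time, posting format, and video length.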
Procedia PDF Downloads 238
18345 Process Driven Architecture for the ‘Lessons Learnt’ Knowledge Sharing Framework: The Case of a ‘Lessons Learnt’ Framework for KOC
Authors: Rima Al-Awadhi, Abdul Jaleel Tharayil
Abstract:
On a regular basis, KOC engages in various types of projects. Due to the very nature and complexity involved, each project generates many ‘learnings’ that need to be factored in when drafting a new contract, so as to avoid repeating the same mistakes. Many a time, however, these learnings are localized and remain tacit, leading to scope re-work, longer cycle times, schedule overruns, adjustment orders and claims. These experiences are also not readily available to new employees, leading to a steep learning curve and a longer time to competency. This paper shares our experience in designing and implementing a process-driven architecture for the ‘lessons learnt’ knowledge sharing framework in KOC. It highlights the ‘lessons learnt’ sharing process adopted, its integration with the organizational processes, the governance framework, the challenges faced, and the learning from our experience in implementing the framework.
Keywords: lessons learnt, knowledge transfer, knowledge sharing, successful practices, lessons learnt workshop, governance framework
Procedia PDF Downloads 580
18344 Income-Consumption Relationships in Pakistan (1980-2011): A Cointegration Approach
Authors: Himayatullah Khan, Alena Fedorova
Abstract:
The present paper analyses the income-consumption relationships in Pakistan using annual time series data from 1980-81 to 2010-11. The paper uses the Augmented Dickey-Fuller (ADF) test to check for unit roots and stationarity in these two time series. The paper finds that the two series are nonstationary in levels but stationary at their first differences. The Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson test imply that the consumption and income series are cointegrated and that the long-run marginal propensity to consume (MPC) is 0.88, as given by the estimated (static) equilibrium relation. The paper also uses an error correction mechanism (ECM) to model the dynamic relationship; the purpose of the ECM is to indicate the speed of adjustment from the short-run to the long-run equilibrium state. The results show that the short-run MPC is 0.93 and highly significant. The coefficient of the Engle-Granger residuals is negative but insignificant; statistically, the equilibrium error term is zero, which suggests that consumption adjusts to changes in GDP within the same period. Short-run changes in GDP have a positive impact on short-run changes in consumption. The pair-wise Granger causality test shows that GDP and consumption Granger-cause each other.
Keywords: cointegrating regression, Augmented Dickey-Fuller test, Augmented Engle-Granger test, Granger causality, error correction mechanism
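As a hedged illustration of the two-step Engle-Granger procedure the paper applies (a cointegrating regression, then a unit-root test on its residuals), the sketch below simulates a cointegrated income-consumption pair. The data and the 0.88 slope are synthetic stand-ins, and a production analysis would compare the statistic against proper Engle-Granger critical values rather than a rule of thumb:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 500
income = np.cumsum(rng.normal(0.0, 1.0, T))            # I(1) random-walk "GDP"
consumption = 0.88 * income + rng.normal(0.0, 1.0, T)  # cointegrated, slope 0.88

# Step 1: static cointegrating regression of consumption on income (OLS).
X = np.column_stack([np.ones(T), income])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
mpc = beta[1]                       # long-run MPC estimate
resid = consumption - X @ beta     # equilibrium errors

# Step 2: Dickey-Fuller regression on the residuals (no constant):
# delta e_t = rho * e_{t-1} + u_t. A strongly negative t-statistic on rho
# rejects a unit root in the residuals, i.e. the series are cointegrated.
de, lag = np.diff(resid), resid[:-1]
rho = (lag @ de) / (lag @ lag)
se = np.sqrt(np.sum((de - rho * lag) ** 2) / (len(de) - 1) / (lag @ lag))
t_stat = rho / se
```

With cointegrated inputs, `mpc` recovers the long-run slope and `t_stat` is far below the Engle-Granger rejection thresholds.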
Procedia PDF Downloads 421
18343 Wind Speed Data Analysis in Colombia in 2013 and 2015
Authors: Harold P. Villota, Alejandro Osorio B.
Abstract:
Energy meteorology is a field for studying energy complementarity and the use of renewable sources in interconnected systems. To diversify Colombia's energy matrix with wind sources, the available wind databases must be well understood. However, the time series provided by 260 automatic weather stations contain missing and invalid records, so the purpose of this work is to fill the time series by selecting two years to characterize and impute, using them as a baseline to complete the data between 2005 and 2020.
Keywords: complementarity, wind speed, renewable energy, Colombia, characterization, imputation
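A minimal sketch of the characterize-and-impute step, under the assumption that gaps are filled from the mean diurnal (hour-of-day) profile of the characterized years; the station values below are synthetic, and a real workflow would add seasonal structure and quality flags:

```python
import numpy as np

def impute_with_diurnal_mean(speeds, hours):
    """Replace NaN wind speeds with the mean speed observed at the same
    hour of day across the rest of the series (simple climatological fill)."""
    speeds = speeds.copy()
    for h in range(24):
        mask = hours == h
        mean_h = np.nanmean(speeds[mask])   # hour-of-day climatology
        gap = mask & np.isnan(speeds)
        speeds[gap] = mean_h
    return speeds
```

For example, a missing 05:00 reading is replaced by the average of all other valid 05:00 readings in the characterized period.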
Procedia PDF Downloads 168
18342 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using Regression Analysis
Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn
Abstract:
Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains precious information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude measured through Shannon entropy could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before and after Ibutilide IEGMs that were recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the Probability Distribution Function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain, the PDF of amplitudes was fitted to a Gaussian distribution, while in the frequency domain, it was fitted to a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs have significantly narrower, short-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide exhibit significantly different properties, both in the time and frequency domains.
Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution, or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of the PDF (e.g., standard deviation), whereas this is difficult from the IEGM waveform itself.
Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics
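The distribution fits described above can be sketched with closed-form maximum-likelihood estimators: a Gaussian's mean and standard deviation for time-domain amplitudes, and a Rayleigh scale for frequency-domain magnitudes. The sample values below are synthetic, not patient IEGMs; the narrower "after" distributions mimic the reported drug effect:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical amplitude samples (time domain): wider before, narrower after.
amp_before = rng.normal(0.0, 0.40, 5000)
amp_after = rng.normal(0.0, 0.15, 5000)
# Hypothetical magnitude samples (frequency domain), Rayleigh-distributed.
mag_before = rng.rayleigh(0.40, 5000)
mag_after = rng.rayleigh(0.15, 5000)

def gaussian_mle(x):
    """Mean and standard deviation of a fitted Gaussian."""
    return x.mean(), x.std()

def rayleigh_mle(x):
    """Scale parameter of a fitted Rayleigh distribution."""
    return np.sqrt(np.mean(x ** 2) / 2.0)
```

Comparing `gaussian_mle(...)[1]` or `rayleigh_mle(...)` before and after administration is exactly the kind of PDF-statistic tracking the conclusion proposes.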
Procedia PDF Downloads 162
18341 Study on an Integrated Real-Time Sensor in Droplet-Based Microfluidics
Authors: Tien-Li Chang, Huang-Chi Huang, Zhao-Chi Chen, Wun-Yi Chen
Abstract:
Droplet-based microfluidics are used as micro-reactors for chemical and biological assays. Hence, the precise addition of reagents into the droplets is essential for this function in the scope of lab-on-a-chip applications. To obtain the characteristics of droplets (size, velocity, pressure, and frequency of production), this study describes an integrated on-chip method of real-time signal detection. By controlling and manipulating the fluids, the flow behavior can be obtained in droplet-based microfluidics. The detection method uses a type of infrared sensor. Through the passage of droplets in the microfluidic devices, the real-time velocity and pressure conditions are obtained from the sensors. Here, the microfluidic devices are fabricated from polydimethylsiloxane (PDMS). To measure the droplets, sensor signal acquisition and LabVIEW program control must be established in the microchannel devices. The devices can generate droplets of different sizes, with the oil-phase flow rate fixed at 30 μl/hr and the water-phase flow rate ranging from 20 μl/hr to 80 μl/hr. The experimental results demonstrate that the sensors are able to measure the time difference of droplets at different velocities, at voltages from 0 V to 2 V. A maximum droplet speed of 1.6 mm/s and the related flow behaviors were measured, which can be helpful in developing and integrating practical microfluidic applications.
Keywords: microfluidic, droplets, sensors, single detection
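The droplet velocity and production frequency described above can be sketched from a two-point sensor trace: detect rising edges on each sensor, then divide the (assumed) sensor spacing by the mean transit delay. The 0.5 mm spacing, threshold voltage, and pulse shapes below are hypothetical, not the paper's device parameters:

```python
import numpy as np

def droplet_stats(t, v1, v2, spacing_mm=0.5, thresh=1.0):
    """Rising-edge times on two sensors -> (velocity in mm/s, rate in Hz)."""
    def edges(sig):
        above = sig > thresh
        # indices where the signal crosses the threshold upward
        return t[np.flatnonzero(~above[:-1] & above[1:]) + 1]
    e1, e2 = edges(v1), edges(v2)
    n = min(len(e1), len(e2))
    velocity = spacing_mm / np.mean(e2[:n] - e1[:n])  # spacing / transit time
    rate = (len(e1) - 1) / (e1[-1] - e1[0])           # droplets per second
    return velocity, rate
```

Pairing the i-th edge on each sensor assumes the transit delay is shorter than the inter-droplet interval, which holds for the slow droplets considered here.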
Procedia PDF Downloads 497
18340 Data Quality Enhancement with String Length Distribution
Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda
Abstract:
Recently, the amount of collectable manufacturing data has been rapidly increasing. At the same time, mega recalls are becoming a serious social problem. Under such circumstances, there is a growing need to prevent mega recalls through defect analysis, such as root cause analysis and anomaly detection, utilizing manufacturing data. However, the time needed to classify strings in manufacturing data by traditional methods is too long to meet the requirements of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. This method learns character features, especially the string length distribution, from Product IDs and Machine IDs in the BOM and asset list. By applying the proposal to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, it can be estimated that the requirement of quick defect analysis can be fulfilled.
Keywords: string classification, data quality, feature selection, probability distribution, string length
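A minimal sketch of the SLDC idea: characterize each known string class (e.g., Product ID vs. Machine ID) by its length distribution, then assign new strings to the class whose distribution gives them the highest probability. The ID formats below are invented for illustration, not taken from the paper's data:

```python
import numpy as np

def length_hist(strings, max_len=32):
    """Smoothed probability distribution over string lengths for one class."""
    h = np.zeros(max_len)
    for s in strings:
        h[min(len(s), max_len - 1)] += 1
    return (h + 1e-6) / (h.sum() + 1e-6 * max_len)

def classify(s, class_hists):
    """Pick the class under which this string's length is most probable."""
    return max(class_hists, key=lambda c: class_hists[c][min(len(s), 31)])

# Hypothetical training strings: fixed-width product and machine IDs.
product_ids = ["PRD-0001-AX", "PRD-0002-BY", "PRD-0003-CZ"]
machine_ids = ["M01", "M02", "M17"]
hists = {"product": length_hist(product_ids), "machine": length_hist(machine_ids)}
```

Because only a per-length lookup is needed at classification time, this runs far faster than character-by-character pattern matching, which is the source of the reported speedup.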
Procedia PDF Downloads 321
18339 Comparative Study of Outcomes of Nonfixation of Mesh versus Fixation in Laparoscopic Total Extra Peritoneal (TEP) Repair of Inguinal Hernia: A Prospective Randomized Controlled Trial
Authors: Raman Sharma, S. K. Jain
Abstract:
Aims and Objectives: Fixation of the mesh during laparoscopic total extraperitoneal (TEP) repair of inguinal hernia is thought to be necessary to prevent recurrence. However, mesh fixation may increase surgical complications and postoperative pain. Our objective was to compare the outcomes of nonfixation with fixation of polypropylene mesh by metal tacks during TEP repair of inguinal hernia. Methods: Forty patients aged 18 to 72 years with inguinal hernia were included, who underwent laparoscopic TEP repair with (n=20) or without (n=20) fixation of the mesh. The outcomes were operative duration, postoperative pain score, cost, in-hospital stay, time to return to normal activity, and complications. Results: Patients in whom the mesh was not fixed had a shorter mean operating time (p < 0.05). We found no difference between groups in the postoperative pain score, incidence of recurrence, in-hospital stay, time to return to normal activity, or complications (p > 0.05). Moreover, a net cost saving was realized for each hernia repair performed without stapled mesh. Conclusions: TEP repair without mesh fixation resulted in a shorter operating time and lower operative cost, with no difference between groups in the postoperative pain score, incidence of recurrence, in-hospital stay, time to return to normal activity, or complications. All this makes TEP repair without mesh fixation a better choice for repair of uncomplicated inguinal hernia, especially in developing nations with scarce resources.
Keywords: postoperative pain score, inguinal hernia, nonfixation of mesh, total extra peritoneal (TEP)
Procedia PDF Downloads 348
18338 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics
Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez
Abstract:
In recent years, Mexico’s population has seen a rise in negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, generating an important impact on a person’s physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. Using different non-invasive physiological sensors such as EEG, together with luminosity and face recognition, we gather information on the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the specific groups of information that relate to the subject’s emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural network based control algorithm is fed the data obtained in order to close the feedback loop and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject’s emotional state. During the research, it was found that light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state: red illumination increased the impact of violent images, while green illumination, along with relaxing images, decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which type of images generated a greater impact in either gender.
The population sample was mainly constituted by college students, whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us a better insight into the possibilities of emotional domotics and the applications that can be created towards the improvement of performance in people’s lives. The objective of this research is to create a positive impact through the application of technology to everyday activities; nonetheless, an ethical problem arises, since this could also be applied to control a person’s emotions and shift their decision making.
Keywords: data analysis, emotional domotics, performance improvement, neural network
Procedia PDF Downloads 145
18337 Development of an Automatic Monitoring System Based on the Open Architecture Concept
Authors: Andrii Biloshchytskyi, Serik Omirbayev, Alexandr Neftissov, Sapar Toxanov, Svitlana Biloshchytska, Adil Faizullin
Abstract:
Kazakhstan has adopted a carbon neutrality strategy running until 2060. In accordance with this strategy, various tools must be introduced to maintain environmental safety. The use of IoT, in combination with the characteristics and requirements of Kazakhstan's environmental legislation, makes it possible to develop a modern environmental monitoring system. The article proposes a solution for developing an example of an automated system for the continuous collection of data on the concentration of pollutants in the atmosphere based on an open architecture. An Arduino-based device acts as the microcontroller. It should be noted that the transmission of measured values is carried out via an open wireless communication protocol. The architecture of the system, which was used to build a prototype based on sensors, an Arduino microcontroller, and a wireless data transmission module, is presented. The selection of elementary components may change depending on the requirements of the system; the introduction of new units is limited only by the number of ports. The openness of the solution allows the configuration to be changed depending on the conditions. The advantages of the solution are openness, low cost, versatility and mobility. However, the working processes of the proposed solution have not yet been compared with those of traditional systems.
Keywords: environmental monitoring, greenhouse gas emissions, environmental pollution, Industry 4.0, IoT, microcontroller, automated monitoring system
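The "open architecture" idea above - swappable sensing units behind a common interface, limited only by available ports - can be sketched as follows. The class names, the stubbed CO2 sensor, and the field layout are hypothetical illustrations, not the article's firmware:

```python
# Each sensing unit plugs into the station through one common read() method,
# so hardware can be exchanged without touching the collection loop.
class Sensor:
    def read(self) -> float:
        raise NotImplementedError

class CO2Sensor(Sensor):
    """Stub that returns a fixed concentration; a real unit would query
    hardware over a serial or I2C port."""
    def __init__(self, ppm: float):
        self._ppm = ppm
    def read(self) -> float:
        return self._ppm

class Station:
    def __init__(self):
        self.sensors = {}
    def attach(self, name: str, sensor: Sensor):
        # adding units is limited only by the number of available ports
        self.sensors[name] = sensor
    def poll(self) -> dict:
        """One collection cycle: read every attached sensor."""
        return {name: s.read() for name, s in self.sensors.items()}
```

The `poll()` payload would then be handed to the wireless transmission module, whatever open protocol it speaks.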
Procedia PDF Downloads 55
18336 Quality Control of 99mTc-Labeled Radiopharmaceuticals Using Chromatography Strips
Authors: Yasuyuki Takahashi, Akemi Yoshida, Hirotaka Shimada
Abstract:
99mTc-2-methoxy-isobutyl-isonitrile (MIBI) and 99mTc-mercaptoacetyl-glycylglycyl-glycine (MAG3) are heated to 368-372 K and labeled with 99mTc-pertechnetate. Quality control (QC) of 99mTc-labeled radiopharmaceuticals is performed at hospitals using liquid chromatography, which is difficult to carry out in general hospitals. We used chromatography strips to simplify QC and investigated the effects of the test procedures on quality control. The agent studied here is 99mTc-MAG3. The solvent was chloroform + acetone + tetrahydrofuran, and the gamma counter was an ARC-380CL. The varied conditions were the heating temperature (293, 313, 333, 353 and 372 K), the resting time after labeling (15 min at 293 K and 372 K, and 1 hour at 293 K), and the expiration year for use (2011, 2012, 2013, 2014 and 2015). The measurement time with the gamma counter was one minute. A nuclear medicine clinician judged the quality of the preparation when deciding the usability of the retested agent. Two people conducted the test procedure twice in order to compare reproducibility. The percentage of radiochemical purity (%RCP) was approximately 50% under insufficient heat treatment and improved as the temperature and heating time increased. Moreover, the %RCP improved with resting time, even at low temperatures. Furthermore, there was no deterioration with time after the expiration date. The objective of these tests was to determine soluble 99mTc impurities, including free 99mTc-pertechnetate and hydrolyzed-reduced 99mTc. We therefore attributed low purity to insufficient heating and to operational errors in the labeling. It is concluded that quality control is a necessary procedure in nuclear medicine to ensure safe scanning, and it is suggested that labeling must follow the product specifications.
Keywords: quality control, Tc-99m labeled radiopharmaceutical, chromatography strip, nuclear medicine
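The strip-based %RCP figure above reduces to simple gamma-count arithmetic once the strip is cut into a product region and an impurity (origin) region; the sketch below shows that calculation. The ≥90% release criterion is an assumed example for illustration, not a limit stated in the abstract:

```python
def percent_rcp(counts_product: float, counts_impurity: float) -> float:
    """Radiochemical purity: product-region counts as a share of all counts."""
    return 100.0 * counts_product / (counts_product + counts_impurity)

def passes_qc(pct_rcp: float, limit: float = 90.0) -> bool:
    """Hypothetical release rule: accept the preparation above `limit` %RCP."""
    return pct_rcp >= limit
```

Under insufficient heating, counts remain at the origin, which is how the reported ~50% %RCP values arise.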
Procedia PDF Downloads 325
18335 Adaptive Optimal Controller for Uncertain Inverted Pendulum System: A Dynamic Programming Approach for Continuous Time System
Authors: Dao Phuong Nam, Tran Van Tuyen, Do Trong Tan, Bui Minh Dinh, Nguyen Van Huong
Abstract:
In this paper, we investigate an adaptive optimal control law for continuous-time systems with input disturbances and unknown parameters. This paper extends previous works to obtain a robust control law for uncertain systems. Through theoretical analysis, an adaptive dynamic programming (ADP) based optimal control is proposed to stabilize the closed-loop system and to ensure the convergence properties of the proposed iterative algorithm. Moreover, the global asymptotic stability (GAS) of the closed-loop system is also analyzed. The theoretical analysis for continuous-time systems and the simulation results demonstrate the performance of the proposed algorithm for an inverted pendulum system.
Keywords: approximate/adaptive dynamic programming, ADP, adaptive optimal control law, input state stability, ISS, inverted pendulum
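The model-based core that continuous-time ADP approximates from data is policy iteration (Kleinman's algorithm): starting from a stabilizing gain, alternately evaluate the policy via a Lyapunov equation and improve it. The sketch below runs this iteration on a double integrator standing in for the linearized pendulum; the matrices are assumptions for illustration, and the paper's ADP scheme would perform these steps without knowing `A`:

```python
import numpy as np

# Double-integrator stand-in for the linearized pendulum (assumption).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def lyap(Ac, Qk):
    """Solve Ac' P + P Ac = -Qk via Kronecker vectorization."""
    n = Ac.shape[0]
    M = np.kron(np.eye(n), Ac.T) + np.kron(Ac.T, np.eye(n))
    return np.linalg.solve(M, -Qk.reshape(-1)).reshape(n, n)

K = np.array([[1.0, 2.0]])          # initial stabilizing gain
for _ in range(20):                 # policy iteration loop
    Ac = A - B @ K                  # closed-loop dynamics under current policy
    P = lyap(Ac, Q + K.T @ R @ K)   # policy evaluation (cost matrix)
    K = np.linalg.solve(R, B.T @ P) # policy improvement
```

At convergence, `P` satisfies the continuous-time algebraic Riccati equation and `K` is the optimal LQR gain, which for this system is analytically [1, sqrt(3)].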
Procedia PDF Downloads 197
18334 Urinary Neutrophil Gelatinase-Associated Lipocalin as a Diagnostic Biomarker for Lupus Nephritis
Authors: Lorena Gómez Escorcia, Gustavo Aroca Martínez, Jose Luiz Villarreal, Elkin Navarro Quiroz
Abstract:
Lupus nephritis (LN) is a high-cost disease occurring in about half of patients with systemic lupus erythematosus (SLE). Renal biopsy constitutes the only protocol that, to date, allows a correct diagnosis of the level of renal involvement in these patients. However, this procedure can have various adverse effects, such as kidney bleeding, muscle bleeding, infection and pain, among others. Therefore, the development of new diagnostic alternatives is required. Neutrophil gelatinase-associated lipocalin (NGAL) has been emerging as a novel biomarker of acute kidney injury. The aim of this study was to assess urinary NGAL levels as a marker of disease activity in patients with lupus nephritis. This work included 50 systemic lupus erythematosus (SLE) patients, 50 patients with active lupus nephritis (LN), and 50 controls without autoimmune or renal disease. NGAL in urine samples was measured by enzyme-linked immunosorbent assay (ELISA). The results revealed that patients with kidney damage had elevated urinary NGAL compared to lupus patients without kidney damage and to controls (p < 0.005); the mean uNGAL was 28.72 ± 4.53, 19.51 ± 4.72 and 8.91 ± 3.37, respectively. Measurement of urinary NGAL levels showed a very good diagnostic performance for discriminating patients with lupus nephritis from SLE patients without renal damage and from control individuals.
Keywords: lupus nephritis, biomarker, NGAL, urine samples
Procedia PDF Downloads 213
18333 Unsteady 3D Post-Stall Aerodynamics Accounting for Effective Loss in Camber Due to Flow Separation
Authors: Aritras Roy, Rinku Mukherjee
Abstract:
The current study couples a quasi-steady Vortex Lattice Method with a camber-correcting technique, ‘decambering’, for unsteady post-stall flow prediction. The wake is force-free and discrete, such that the wake lattices move with the free stream once shed from the wing. It is observed that, for some angles of attack, the time-averaged unsteady coefficient of lift sees a relative drop at post-stall angles of attack in comparison to its steady counterpart. Multiple solutions occur at post-stall, and three different algorithms for choosing solutions in these regimes show both unsteadiness and non-convergence of the iterations. The distribution of the coefficient of lift along the wing span also shows a sawtooth pattern. The distribution of vorticity changes both along the span and in the direction of the free stream as the wake develops over time, with a distinct roll-up that increases with time.
Keywords: post-stall, unsteady, wing, aerodynamics
Procedia PDF Downloads 373
18332 An Analysis of African Solutions to African Problems: Practical Effects of International Criminal Court Withdrawals in Favour of Regional Court Systems
Authors: Jeanne-Mari Retief
Abstract:
As of November 2016, three African states have withdrawn from the International Criminal Court (ICC), and more are expected to follow. The alleged abuse of universal jurisdiction and targeting of African states by the ICC motivated the withdrawals. These historical exits raise many questions, especially in regard to the adequate investigation and prosecution of international crimes in a continent with a history of impunity. Even though African courts exist and one more is proposed, many issues remain, i.e., adequate access to the courts, the extent of the courts’ jurisdiction, and proposed methods of effectively dealing with international crimes in Africa. This paper seeks to address the practical effects of the withdrawal from the ICC and the problems posed through utilizing regional courts. It will specifically look at the practical challenges existing courts face, the lack of access to the latter, issues concerning the proposed African Court for Justice and Human Rights, and the shocking promotion of impunity in Africa. These all have severe implications for African citizens and victims of the most heinous crimes. The mantra of African solutions to African problems places an important duty on states to ensure the actual provision of these solutions, which can only be achieved through a critical analysis of the questions above.
Keywords: ACJHR, Africa, impunity, justice, Malabo protocol
Procedia PDF Downloads 227
18331 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease
Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani
Abstract:
Image processing has become more sophisticated over time due to technological advances, especially in artificial intelligence (AI). Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. AI in medical image processing can help doctors diagnose diseases faster, with fewer mistakes, and with less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world. It affects millions of people every year, particularly in tropical and subtropical regions. Early detection of malaria is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria diagnosis through automated analysis of blood smear images. The scheme is based on a convolutional neural network (CNN). We developed a model that classifies infected and uninfected single red cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving 99% accuracy on the test set and 97% accuracy on new real images. This validates that the model performs well on new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction.
Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence
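The CNN building blocks behind such a classifier (convolution, ReLU, pooling, a dense softmax head) can be illustrated with a bare numpy forward pass. This is a sketch with random weights showing the data flow only, not the trained Kaggle model; image size and layer shapes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution (correlation) of one channel with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

def forward(img, kernel, W, b):
    """conv -> ReLU -> pool -> dense -> softmax over {infected, uninfected}."""
    feat = max_pool(np.maximum(conv2d(img, kernel), 0.0)).reshape(-1)
    logits = feat @ W + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

img = rng.random((16, 16))  # stand-in for a segmented single-cell image
probs = forward(img, rng.standard_normal((3, 3)),
                rng.standard_normal((49, 2)), np.zeros(2))
```

In the real system, the weights would be learned from the labeled cell images, and the class with the larger probability would be reported through the GUI.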
Procedia PDF Downloads 25
18330 Study on Compressive Strength and Setting Time of Fly Ash Concrete after Slump Recovery Using Superplasticizer
Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert
Abstract:
Fresh concrete that is bound to be rejected due to belated use, whether from a delayed construction process or unfavorable traffic delaying concrete delivery, can recover its slump and be used once again by introducing a second dose of superplasticizer (naphthalene-based, type F) into the system. Adding superplasticizer as a solution to recover otherwise unusable slump-loss concrete may, however, affect other concrete properties. Therefore, this paper observed the setting time and compressive strength of concrete after re-dosing with chemical admixture type F (superplasticizer, naphthalene-based) for slump recovery. The concrete used in this study was fly ash concrete with fly ash replacement of 0%, 30% and 50%, respectively. The concrete mix designed for the test specimens was prepared with a paste content (ratio of volume of cement to volume of voids in the aggregate) of 1.2 and 1.3, a water-to-binder ratio (w/b) in the range of 0.3 to 0.58, and an initial dose of superplasticizer (SP) ranging from 0.5 to 1.6%. The setting times of the concrete were tested both before and after re-dosing, with different amounts of the second dose and different times of dosing. The research concluded that the addition of a second dose of superplasticizer increases both initial and final setting times according to the dosage of the addition. For fly ash concrete, the prolongation effect is higher as the fly ash replacement increases, and can reach a maximum of about 4 hours. In the case of compressive strength, the re-dosed concrete shows strength fluctuation within an acceptable range of ±10%.
Keywords: compressive strength, fly ash concrete, second dose of superplasticizer, setting times
Procedia PDF Downloads 284
18329 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis
Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov
Abstract:
The problem of identifying local noise sources on a construction site using a sensor system is considered. Mathematical modeling of the signals detected at the sensors was carried out, taking into account signal decay and the signal delay time between source and detector. Recordings of noises produced by construction tools were used as the time dependence of the noise. Synthetic sensor data were constructed based on these recordings, applying a model of acoustic wave propagation from a point source in three-dimensional space. All sensors and sources are assumed to lie in the same plane. A source localization method is tested that is based on the signal time delay between two adjacent detectors, from which the direction to the source is plotted; the source position is then determined from the intersection of two such direction lines. The cases of one dominant source, and of two sources in the presence of several other sources of lower intensity, are considered. The number of detectors varies from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two-second duration is considered; the source is located for successive parts of the signal with a duration above 0.04 sec, and the final result is obtained by averaging.
Keywords: acoustic model, direction of arrival, inverse source problem, sound localization, urban noises
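The two-step scheme above can be sketched as: estimate inter-sensor time delays by cross-correlation, then pick the in-plane position whose predicted delay pattern best matches the measurements (a grid-search variant of the line-intersection construction). The sensor layout, the 343 m/s sound speed, and the Gaussian test pulse are assumptions for the illustration, not the paper's site data:

```python
import numpy as np

C, FS = 343.0, 10_000                  # speed of sound (m/s), sample rate (Hz)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
source = np.array([4.0, 3.0])          # ground truth to be recovered

# Synthesize one pulse arriving at each sensor with the true propagation delay.
t = np.arange(0, 0.1, 1 / FS)
dists = np.linalg.norm(sensors - source, axis=1)
signals = [np.exp(-((t - 0.03 - d / C) ** 2) / (2 * 2e-3 ** 2)) for d in dists]

def delay(a, ref):
    """Lag (seconds) of signal a behind ref, via peak of cross-correlation."""
    return (np.argmax(np.correlate(a, ref, "full")) - (len(ref) - 1)) / FS

meas = np.array([delay(s, signals[0]) for s in signals[1:]])

# Grid search: minimize mismatch between measured and predicted delays.
xs = ys = np.arange(0.0, 10.0, 0.1)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        pred = (d[1:] - d[0]) / C      # predicted delays relative to sensor 0
        err = np.sum((pred - meas) ** 2)
        if err < best_err:
            best, best_err = np.array([x, y]), err
```

With more detectors (the paper uses three to eight), the extra delay pairs over-determine the position and average out measurement noise, mirroring the reported averaging over signal segments.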
Procedia PDF Downloads 65
18328 A Case Study on Utility of 18FDG-PET/CT Scan in Identifying Active Extra Lymph Nodes and Staging of Breast Cancer
Authors: Farid Risheq, M. Zaid Alrisheq, Shuaa Al-Sadoon, Karim Al-Faqih, Mays Abdulazeez
Abstract:
Breast cancer is the most frequently diagnosed cancer worldwide and a common cause of death among women. Various conventional anatomical imaging tools are utilized for diagnosis, histological assessment and TNM (Tumor, Node, Metastases) staging of breast cancer. Sentinel lymph node biopsy is becoming an alternative to axillary lymph node dissection. Advances in 18-Fluoro-Deoxy-Glucose Positron Emission Tomography/Computed Tomography (18FDG-PET/CT) imaging have facilitated breast cancer diagnosis by exploiting the biological trapping of 18FDG inside lesion cells, expressed as the Standardized Uptake Value (SUVmax). Objective: To present the utility of 18FDG-PET/CT scans in detecting active extra lymph nodes and distant occult metastases for breast cancer staging. Subjects and Methods: Four female patients presented with TNM stages of breast cancer initially classified using conventional anatomical diagnostic techniques. 18FDG-PET/CT scans were performed one hour after intravenous injection of 300-370 MBq of 18FDG, with 7-8 bed positions of 130 s each. Transverse, sagittal, and coronal views, fused PET/CT and MIP modalities were reconstructed for each patient. Results: A total of twenty-four lesions in the breast, extended lesions in lung, liver and bone, and active extra lymph nodes were detected among the patients. The initial TNM stage was significantly changed after the 18FDG-PET/CT scan for each patient, as follows: Patient-1: Initial TNM-stage: T1N1M0-(stage I). Finding: Two lesions in right breast (3.2cm2, SUVmax=10.2), (1.8cm2, SUVmax=6.7), associated with metastases to two right axillary lymph nodes. Final TNM-stage: T1N2M0-(stage II). Patient-2: Initial TNM-stage: T2N2M0-(stage III). Finding: Right breast lesion (6.1cm2, SUVmax=15.2), associated with metastases to right internal mammary lymph node, two right axillary lymph nodes, and sclerotic lesions in right scapula. Final TNM-stage: T2N3M1-(stage IV). Patient-3: Initial TNM-stage: T2N0M1-(stage III).
Finding: Left breast lesion (11.1cm2, SUVmax=18.8), associated with metastases to two lymph nodes in left hilum, and three lesions in both lungs. Final TNM-stage: T2N2M1-(stage IV). Patient-4: Initial TNM-stage: T4N1M1-(stage III). Finding: Four lesions in upper outer quadrant area of right breast (largest: 12.7cm2, SUVmax=18.6), in addition to one lesion in left breast (4.8cm2, SUVmax=7.1), associated with metastases to multiple lesions in liver (largest: 11.4cm2, SUV=8.0), and two bony lytic lesions in the left scapula and the first cervical vertebra (C1). No evidence of regional or distant lymph node involvement. Final TNM-stage: T4N0M2-(stage IV). Conclusions: Our results demonstrated that 18FDG-PET/CT scans significantly changed the TNM stages of breast cancer patients. While the T factor was unchanged, the N and M factors showed significant variations. A single PET/CT session was effective in detecting active extra lymph nodes and distant occult metastases that were not identified by conventional diagnostic techniques, and might advantageously replace bone scans and contrast-enhanced CT of the chest, abdomen and pelvis. Applying the 18FDG-PET/CT scan early in the investigation might shorten diagnosis time, help in deciding an adequate treatment protocol, and improve patients' quality of life and survival. Trapping of 18FDG in malignant lesion cells after a PET/CT scan keeps the retention index (RI%) elevated for a considerable time, which might help localize the sentinel lymph node for biopsy using a hand-held gamma probe detector. Future work is required to demonstrate this utility. Keywords: axillary lymph nodes, breast cancer staging, fluorodeoxyglucose positron emission tomography/computed tomography, lymph nodes
Procedia PDF Downloads 316
18327 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea
Authors: K. S. Sreejith, C. Shaji
Abstract:
Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variations of surface waters are much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of Sea Surface Temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on ICOADS in situ observations. To capture the dominant SST oscillations and to understand inter-annual SST variations in various local regions of the Arabian Sea, wavelet analysis was performed on this long time-series SST dataset. This tool is advantageous over other signal-analysing tools such as Fourier analysis because it unfolds a time series (signal) in both the frequency and time domains. This technique makes it easier to determine dominant modes of variability and to explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate in most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990 and 1990-2005. SST oscillations with peaks having periods of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea. This indicates that inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events. Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis
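A wavelet power spectrum of the kind used here can be illustrated on a toy series. The sketch below is not the ERSST analysis itself: it builds a synthetic monthly series with a 60-month (pentadal) cycle and computes Morlet wavelet power by direct convolution, showing the power concentrating at the matching scale.

```python
import cmath, math

def morlet(t, scale, w0=6.0):
    """Morlet wavelet sample at time offset t for the given scale."""
    x = t / scale
    return cmath.exp(1j * w0 * x) * math.exp(-0.5 * x * x) / math.sqrt(scale)

def cwt_power(series, scales):
    """Wavelet power |W(scale, t)|^2 by direct (truncated) convolution."""
    n = len(series)
    power = []
    for s in scales:
        half = int(4 * s)   # truncate the wavelet at ~4 standard deviations
        row = []
        for t in range(n):
            acc = 0j
            for k in range(-half, half + 1):
                if 0 <= t + k < n:
                    acc += series[t + k] * morlet(k, s).conjugate()
            row.append(abs(acc) ** 2)
        power.append(row)
    return power

# Synthetic monthly "SST anomaly" with a 60-month cycle; for a Morlet
# wavelet with w0 = 6 the matching scale is roughly period / 1.03 ~ 58.
sst = [math.sin(2 * math.pi * t / 60.0) for t in range(360)]
scales = [6.0, 12.0, 58.0]
mean_power = [sum(row) / len(row) for row in cwt_power(sst, scales)]
```

The scale with the largest mean power picks out the pentadal oscillation; the time axis of the full power map then shows when that oscillation strengthens or weakens, which is how the warm/cold epochs above are read off.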
Procedia PDF Downloads 279
18326 Impacts of Climate Elements on the Annual Periodic Behavior of the Shallow Groundwater Level: Case Study from Central-Eastern Europe
Authors: Tamas Garamhegyi, Jozsef Kovacs, Rita Pongracz, Peter Tanos, Balazs Trasy, Norbert Magyar, Istvan G. Hatvani
Abstract:
Like most environmental processes, shallow groundwater fluctuation under natural circumstances behaves periodically. With the statistical tools at hand, it can easily be determined whether a period exists in the data. Thus, the question may be raised: does the estimated average period characterize the whole time interval, or not? This is especially important for such a complex phenomenon as shallow groundwater fluctuation, which is driven by numerous factors. Because of the continuous changes in the oscillating components of shallow groundwater time series, the most appropriate method for investigating their periodicity is wavelet spectrum analysis. The aims of the research were to investigate the periodic behavior of shallow groundwater time series in an agriculturally important and drought-sensitive region of Central-Eastern Europe and their relationship to the European pressure action centers. During the research, ~216 shallow groundwater observation wells located in the eastern part of the Great Hungarian Plain, with a temporal coverage of 50 years, were scanned for periodicity. By taking the full time interval as 100%, the presence of any period could be expressed as a percentage. With the complex hydrogeological/meteorological model developed in this study, non-periodic time intervals were found in the shallow groundwater levels. On the local scale, this phenomenon is linked to drought conditions, and on the regional scale to the maxima of the regional air pressure in the Gulf of Genoa. The study documented an important link between shallow groundwater levels and climate variables/indices, facilitating the necessary adaptation strategies on national and/or regional scales, which have to take into account the predictions of drought-related climatic conditions. Keywords: climate change, drought, groundwater periodicity, wavelet spectrum and coherence analyses
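The idea of expressing the presence of a period as a percentage of the full record can be mimicked with a simple sliding-window test. The sketch below is only a crude stand-in for the wavelet significance test actually used: it checks, window by window, whether a candidate period carries a large share of the variance, and reports the fraction of windows where it does.

```python
import math, random

def period_presence(series, period, window, threshold=0.5):
    """Fraction of non-overlapping windows in which the sinusoid at
    `period` explains at least `threshold` of the window variance."""
    hits = total = 0
    for start in range(0, len(series) - window + 1, window):
        seg = series[start:start + window]
        mean = sum(seg) / window
        seg = [v - mean for v in seg]
        var = sum(v * v for v in seg) / window
        total += 1
        if var == 0:
            continue
        c = sum(v * math.cos(2 * math.pi * i / period) for i, v in enumerate(seg))
        s = sum(v * math.sin(2 * math.pi * i / period) for i, v in enumerate(seg))
        amp2 = 2 * (c * c + s * s) / (window * window)  # ~A^2/2 for a pure sinusoid
        if amp2 / var >= threshold:
            hits += 1
    return hits / total

# A 240-point record: periodic (period 12) for the first 96 points, then
# pure noise; the period is "present" in 2 of the 5 windows of 48 points.
rng = random.Random(0)
series = [math.sin(2 * math.pi * t / 12.0) for t in range(96)]
series += [rng.gauss(0.0, 1.0) for _ in range(144)]
presence = period_presence(series, period=12, window=48)
```

Here the period is flagged present in 40% of the record, the same kind of statement the abstract makes about non-periodic intervals in the groundwater levels.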
Procedia PDF Downloads 389
18325 Adaption Model for Building Agile Pronunciation Dictionaries Using Phonemic Distance Measurements
Authors: Akella Amarendra Babu, Rama Devi Yellasiri, Natukula Sainath
Abstract:
While human beings can easily learn and adopt pronunciation variations, machines need training before being put into use. Humans also keep a minimal vocabulary, with its pronunciation variations stored at the front of memory for ready reference, while machines keep the entire pronunciation dictionary for ready reference. Supervised methods used for the preparation of pronunciation dictionaries take large amounts of manual effort, cost and time, and are not suitable for real-time use. This paper presents an unsupervised adaptation model for building agile and dynamic pronunciation dictionaries online. These methods mimic the human approach of learning new pronunciations in real time. A new algorithm for measuring sound distances, called Dynamic Phone Warping, is presented and tested. Performance of the system is measured using an adaptation model, and the precision metric is found to be better than 86 percent. Keywords: pronunciation variations, dynamic programming, machine learning, natural language processing
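The distance algorithm can be illustrated with a standard dynamic-programming alignment. The sketch below is only a simplified stand-in for Dynamic Phone Warping: it uses a binary substitution cost between phones, whereas the paper's algorithm derives graded costs from phonemic distance measurements.

```python
def phone_distance(p, q, sub_cost=None):
    """Dynamic-programming alignment cost between two phone sequences."""
    if sub_cost is None:
        sub_cost = lambda a, b: 0.0 if a == b else 1.0
    m, n = len(p), len(q)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1.0,                               # phone deletion
                d[i][j - 1] + 1.0,                               # phone insertion
                d[i - 1][j - 1] + sub_cost(p[i - 1], q[j - 1]),  # substitution
            )
    return d[m][n]

# Two hypothetical pronunciation variants in ARPAbet-like phones
variant_a = ["T", "AH", "M", "EY", "T", "OW"]
variant_b = ["T", "AH", "M", "AA", "T", "OW"]
dist = phone_distance(variant_a, variant_b)
```

A small distance suggests the two entries are variants of one word, so a newly heard pronunciation can be attached to the existing dictionary entry instead of creating a new one, which is the core of the online adaptation idea.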
Procedia PDF Downloads 182
18324 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches was evaluated. Overall, the random forest classifier achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than using features from any reduced set of sensors, and that high-level handpicked features improve classification accuracy in every sensor mode. Our approach is robust to both sensor dropout and occlusions. The single most important sensor feature for classifying engagement and distraction was shown to be eye gaze.
It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability, human observation or reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
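The leave-one-out evaluation protocol is easy to sketch. The example below is not the study's pipeline (which used a random forest over nine multimodal features); it substitutes a nearest-neighbour classifier on made-up two-feature "session" data purely to show how each sample is held out in turn.

```python
import math

def loo_accuracy(X, y, k=1):
    """Leave-one-out cross-validated accuracy of a k-nearest-neighbour
    classifier; each sample is predicted from all the others."""
    correct = 0
    for i in range(len(X)):
        neighbours = sorted(
            (math.dist(X[i], X[j]), y[j]) for j in range(len(X)) if j != i
        )
        votes = [label for _, label in neighbours[:k]]
        prediction = max(set(votes), key=votes.count)
        correct += prediction == y[i]
    return correct / len(X)

# Hypothetical sessions: (eye-gaze dispersion, EEG engagement index)
X = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25),
     (0.90, 0.80), (0.80, 0.90), (0.85, 0.75)]
y = ["engaged", "engaged", "engaged",
     "disengaged", "disengaged", "disengaged"]
accuracy = loo_accuracy(X, y)
```

With only 59 sessions per condition, leave-one-out is a natural choice: every session serves as a test case exactly once, maximising the use of scarce labelled data.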
Procedia PDF Downloads 97
18323 The Gaps of Environmental Criminal Liability in Armed Conflicts and Its Consequences: An Analysis under Stockholm, Geneva and Rome
Authors: Vivian Caroline Koerbel Dombrowski
Abstract:
Armed conflicts have always meant the ultimate expression of power and, at the same time, of the lack of understanding among nations. Cities were destroyed, people were killed, assets were devastated. But these are not the only losses of a war: environmental damage amounts to immeasurable losses in the short, medium and long term. And this is because no nation wants to bear that cost: they invest in military equipment, training and technical equipment, but the environmental account still finds gaps in international law. Given such a generalization in rights protection, many nations are in imminent danger in a conflict if water is used as a mass weapon, especially considering important rivers such as the Jordan, the Euphrates and the Nile. The three main international documents on the subject were analyzed: the Stockholm Convention (1972), Additional Protocol I to the Geneva Convention (1977) and the Rome Statute (1998). Some references were also researched in the doctrine, especially scientific articles, to substantiate with consistent data the extent of the damage, the historical factors and the decisions that have been successful. However, due to the lack of literature on this subject, the research tends to be exhaustive. From the study of the indicated material, it was noted that international law, both humanitarian and environmental, addresses environmental protection in war conflicts in some of its instruments, but these are generic and vague rules that neither define exactly what environmental damage is nor set standards to measure it. Taking into account the main conflicts of the twentieth century (World War II, the Vietnam War and the Gulf War), one must realize that the environmental consequences were of great magnitude: never-deactivated landmines, buried nuclear weapons, armaments and munitions destroyed in the soil, chemical weapons, not to mention the effects of some weapons when used (uranium, Agent Orange, etc.).
Extending the search to more recent conflicts such as Afghanistan, it is proven that the effects on the health of the civilian population were catastrophic: cancer, birth defects and deformities in newborns. There are few reports of nations that somehow repaired the damage caused to the environment as a result of conflict. In contemporary conflicts, many nations fear that water resources will be used as weapons of mass destruction, because once contaminated, directly or indirectly, they can become a means of disguised genocide, a side effect of a military objective. In conclusion, it appears that the main international treaties governing the subject mention concern for environmental protection, yet leave open the normative gaps that must be filled for there to be effective prevention of environmental damage in armed conflict and, should it occur, its repair. Still, it appears that there is no protection mechanism to safeguard natural resources and prevent them from becoming a weapon of mass destruction. Keywords: armed conflicts, criminal liability, environmental damages, humanitarian law, mass weapon
Procedia PDF Downloads 424
18322 Biosorption Kinetics, Isotherms, and Thermodynamic Studies of Copper (II) on Spirogyra sp.
Authors: Diwan Singh
Abstract:
The ability of non-living Spirogyra sp. biomass to biosorb copper(II) ions from aqueous solutions was explored. The effects of contact time, pH, initial copper ion concentration, biosorbent dosage and temperature were investigated in batch experiments. Both the Freundlich and Langmuir isotherms were found to fit the experimental data (R2 > 0.98). Qmax obtained from the Langmuir isotherm was 28.7 mg/g of biomass. The values of the Gibbs free energy change (ΔGº) and enthalpy change (ΔHº) suggest that the sorption is spontaneous and endothermic over 20ºC-40ºC. Keywords: biosorption, Spirogyra sp., contact time, pH, dose
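A Langmuir Qmax of the kind reported above is typically obtained from a linearised fit: plotting Ce/qe against Ce gives a line with slope 1/Qmax. The sketch below illustrates the arithmetic on synthetic data generated from the paper's Qmax = 28.7 mg/g; the KL value is a made-up illustration, not a figure from the abstract.

```python
def fit_langmuir(ce, qe):
    """Ordinary least squares on the linearised Langmuir isotherm
    Ce/qe = Ce/Qmax + 1/(KL*Qmax); returns (Qmax, KL)."""
    yv = [c / q for c, q in zip(ce, qe)]
    n = len(ce)
    mx = sum(ce) / n
    my = sum(yv) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(ce, yv))
             / sum((a - mx) ** 2 for a in ce))
    intercept = my - slope * mx
    return 1.0 / slope, slope / intercept

# Synthetic equilibrium data from Qmax = 28.7 mg/g and an assumed KL = 0.15 L/mg
QMAX, KL = 28.7, 0.15
ce = [5.0, 10.0, 20.0, 40.0, 80.0]          # equilibrium concentrations, mg/L
qe = [QMAX * KL * c / (1 + KL * c) for c in ce]  # uptake per Langmuir model
qmax_fit, kl_fit = fit_langmuir(ce, qe)
```

On noise-free data the fit recovers the generating parameters exactly; with real batch data the R2 of this line is what the abstract's R2 > 0.98 refers to.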
Procedia PDF Downloads 431
18321 On Four Models of a Three Server Queue with Optional Server Vacations
Authors: Kailash C. Madan
Abstract:
We study four models of a three-server queueing system with Bernoulli-schedule optional server vacations. Customers arriving at the system one by one in a Poisson process are provided identical exponential service by three parallel servers according to a first-come, first-served queue discipline. In model A, all three servers may be allowed a vacation at one time; in model B, at most two of the three servers may be allowed a vacation at one time; in model C, at most one server is allowed a vacation; and in model D, no server is allowed a vacation. We study the steady-state behavior of the four models and obtain steady-state probability generating functions for the queue size at a random point of time for all states of the system. In model D, a known result for a three-server queueing system without server vacations is derived. Keywords: a three server queue, Bernoulli schedule server vacations, queue size distribution at a random epoch, steady state
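Model D, with no vacations, reduces to the classical M/M/3 queue, whose steady-state queue-size distribution has a closed form. The sketch below computes it for illustrative arrival and service rates (λ = 2, μ = 1 are made-up values, not from the paper):

```python
import math

def mmc_steady_state(lam, mu, c, max_n):
    """Steady-state probabilities p_0..p_max_n of an M/M/c queue."""
    rho = lam / (c * mu)
    assert rho < 1, "queue must be stable"
    a = lam / mu                       # offered load in Erlangs
    p0 = 1.0 / (sum(a ** k / math.factorial(k) for k in range(c))
                + a ** c / (math.factorial(c) * (1 - rho)))
    return [p0 * a ** n / math.factorial(n) if n < c
            else p0 * a ** n / (math.factorial(c) * c ** (n - c))
            for n in range(max_n + 1)]

# Three servers, Poisson arrivals at rate 2, exponential service at rate 1
probs = mmc_steady_state(lam=2.0, mu=1.0, c=3, max_n=80)
```

The probability generating function of this distribution is the vacation-free special case against which the generating functions of models A-C can be checked.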
Procedia PDF Downloads 302
18320 Time Series Regression with Meta-Clusters
Authors: Monika Chuchro
Abstract:
This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. In this case, clustering was performed as a method to obtain subgroups of time series data with normal distribution from inflow data of a wastewater treatment plant, which are composed of several groups differing in mean value. Two simple algorithms, k-means and EM, were chosen as clustering methods. The Rand index was used to measure similarity. After simple meta-clustering, a regression model was fitted for each subgroup. The final model was the sum of the subgroup models. The quality of the obtained model was compared with a regression model built using the same explanatory variables but with no clustering of the data. Results were compared by the coefficient of determination (R2), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and comparison on a linear chart. Preliminary results allow us to foresee the potential of the presented technique. Keywords: clustering, data analysis, data mining, predictive models
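The gain from modelling each subgroup separately can be seen on a toy series. The sketch below is a simplified stand-in for the paper's procedure, using plain 1-D k-means and per-cluster constant models instead of EM and full regression, and made-up inflow values; it compares MAPE with and without clustering.

```python
def kmeans_1d(values, k=2, iters=50):
    """Lloyd's algorithm on scalar data; returns the cluster centres."""
    centres = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centres]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centres[i]))].append(v)
        centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]
    return centres

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Made-up inflow series drawn from two regimes with different means
inflow = [98.0, 102.0, 101.0, 99.0, 197.0, 203.0, 201.0, 199.0]
centres = kmeans_1d(inflow, k=2)
global_pred = [sum(inflow) / len(inflow)] * len(inflow)          # no clustering
cluster_pred = [min(centres, key=lambda c: abs(v - c)) for v in inflow]
```

Because the series mixes two regimes with different means, the per-cluster model tracks each regime and its MAPE is far below that of the single global model, which is the effect the abstract reports.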
Procedia PDF Downloads 469
18319 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. Currently available software applications are generally based on a one-dimensional assumption and target desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrated software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of large-scale time-lapse survey data from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others. Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 162
18318 Impact of Varying Malting and Fermentation Durations on Specific Chemical, Functional Properties, and Microstructural Behaviour of Pearl Millet and Sorghum Flour Using Response Surface Methodology
Authors: G. Olamiti; TK. Takalani; D. Beswa, AIO Jideani
Abstract:
The study investigated the effects of malting and fermentation times on selected chemical and functional properties and the microstructural behaviour of flours from the Agrigreen and Babala pearl millet cultivars and sorghum, using response surface methodology (RSM). A Central Composite Rotatable Design (CCRD) was applied to two independent variables, malting and fermentation times (h), at levels of 24, 48, and 72 h. The dependent parameters pH, titratable acidity (TTA), water absorption capacity (WAC), oil absorption capacity (OAC), bulk density (BD), dispersibility and microstructural behaviour of the flours all showed significant differences (p < 0.05) with malting and fermentation time. Babala flour exhibited the highest pH, 4.78, at 48 h malting and 81.9 h fermentation. Agrigreen flour showed the highest TTA, 0.159%, at 81.94 h malting and 48 h fermentation. WAC was also highest in malted and fermented Babala flour, at 2.37 ml g-1 for 81.94 h malting and 48 h fermentation. Sorghum flour exhibited the lowest OAC, 1.67 ml g-1, at 14 h malting and 48 h fermentation. Agrigreen flour recorded the lowest bulk density, 0.53 g ml-1, for 72 h malting and 24 h fermentation. Sorghum flour exhibited the highest dispersibility, 56.34%, after 24 h malting and 72 h fermentation. The response surface plots showed that increased malting and fermentation times influenced the dependent parameters. The microstructure of the malted and fermented pearl millet and sorghum flours showed isolated, oval, spherical, or polygonal to smooth surfaces. The optimal processing conditions were 32.24 h malting and 63.32 h fermentation for Agrigreen, 35.18 h and 34.58 h for Babala, and 36.75 h and 47.88 h for sorghum, with a high desirability of 1.00. Validation of the optimal malting and fermentation times on the dependent parameters confirmed the experimental values.
Food processing companies can use the study's findings to improve food processing and quality.Keywords: Pearl millet, malting, fermentation, microstructural behaviour
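The axial levels quoted above (81.94 h and ~14 h) fall out of the standard CCRD construction. The sketch below generates the coded and actual two-factor design, assuming a centre of 48 h and a step of 24 h for both variables, which is consistent with the 24/48/72 h levels in the abstract:

```python
import itertools, math

def ccrd_two_factor(centre, step, n_centre=1):
    """Coded and actual points of a two-factor Central Composite
    Rotatable Design; alpha = 2**(k/4) = sqrt(2) for k = 2 factors."""
    alpha = math.sqrt(2.0)
    coded = [pt for pt in itertools.product((-1.0, 1.0), repeat=2)]      # factorial
    coded += [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]  # axial
    coded += [(0.0, 0.0)] * n_centre                                     # centre
    return [(pt, tuple(c + x * s for x, c, s in zip(pt, centre, step)))
            for pt in coded]

# Malting and fermentation times: centre 48 h, step 24 h for each factor
design = ccrd_two_factor(centre=(48.0, 48.0), step=(24.0, 24.0))
actual_levels = sorted({round(v, 2) for _, a in design for v in a})
```

The actual levels come out as 14.06, 24, 48, 72 and 81.94 h, matching the run conditions cited in the abstract; fitting a second-order polynomial to the responses at these points and maximising desirability is what yields optima such as 32.24 h malting and 63.32 h fermentation.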
Procedia PDF Downloads 82