Search results for: perceptual linear prediction (PLP’s)
1161 Constructing a Semi-Supervised Model for Network Intrusion Detection
Authors: Tigabu Dagne Akal
Abstract:
While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or Intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Database process model, which starts from the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. The data were then pre-processed; the major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating labelled and unlabelled datasets, dimensionality reduction, size reduction, and data transformation such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used for testing. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and its default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset in classifying new instances as normal, DOS, U2R, R2L and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested for developing an applicable system in this area.
Keywords: intrusion detection, data mining, computer science
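As a minimal, hypothetical sketch of the classification experiment described above: the study used Weka's J48 (a C4.5 implementation), for which scikit-learn's CART decision tree serves here as a rough stand-in, and the random feature matrix is a placeholder for the preprocessed Lincoln Laboratory records (41 features per record is an assumption).

```python
# Hedged sketch: a J48-like decision tree and Naive Bayes, each scored with
# 10-fold cross-validation as in the study. X and y are synthetic
# placeholders for the 21,533 preprocessed intrusion records.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((21533, 41))            # 41 features per record is an assumption
y = rng.choice(["normal", "DOS", "U2R", "R2L", "probe"], size=21533)

for name, clf in [("J48-like tree", DecisionTreeClassifier()),
                  ("Naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.4f}")
```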
Procedia PDF Downloads 296
1160 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System
Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock
Abstract:
The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system, based on transcription-mediated amplification and real-time detection technologies. The assay is intended for monitoring HIV-1 viral load in plasma specimens and for the detection of HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens selected at random from routine testing at St Thomas’ Hospital, London were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx Assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of -0.097. The Aptima assay detected HIV-1 in more samples than the Roche assay. This was not due to a lack of specificity of the Aptima assay, which showed 99.83% specificity on testing plasma specimens from 600 HIV-1 negative individuals. To understand the reason for this higher detection rate, low-level panels made from the HIV-1 3rd International Standard (NIBSC 10/152) and clinical samples of various subtypes were tested side by side in both assays; the Aptima assay was more sensitive than the Roche assay. The good sensitivity, specificity and agreement with other commercial assays make the HIV-1 Quant Dx Assay appropriate for both viral load monitoring and detection of HIV-1 infections.
Keywords: HIV viral load, Aptima, Roche, Panther system
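The agreement statistics quoted above (slope 1.04, intercept -0.097) are typically obtained by an ordinary least-squares fit of paired log10 viral-load results. A minimal sketch, with synthetic data standing in for the 234 paired specimens and an assumed log10 copies/mL range:

```python
# Illustrative only: regress paired log10 viral loads from the two assays
# to recover slope/intercept agreement statistics. Arrays are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
roche_log10 = rng.uniform(1.5, 6.5, 234)                  # assumed dynamic range
aptima_log10 = 1.04 * roche_log10 - 0.097 + rng.normal(0, 0.15, 234)

fit = stats.linregress(roche_log10, aptima_log10)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.3f}, r={fit.rvalue:.3f}")
```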
Procedia PDF Downloads 375
1159 Determining the Factors Affecting Social Media Addiction (Virtual Tolerance, Virtual Communication), Phubbing, and Perception of Addiction in Nurses
Authors: Fatima Zehra Allahverdi, Nukhet Bayer
Abstract:
Objective: Three questions were formulated to examine stressful working units (intensive care units, emergency unit nurses) utilizing self-perception theory and social support theory. This study provides a distinctive contribution by inspecting a combination of variables regarding stressful working environments. Method: The descriptive research was conducted with the participation of 400 nurses working at Ankara City Hospital. The study used Multivariate Analysis of Variance (MANOVA), regression analysis, and a mediation model. Hypothesis one used MANOVA followed by a Scheffé post hoc test. Hypothesis two utilized regression analysis using a hierarchical linear regression model. Hypothesis three used a mediation model. Result: Findings supported the hypotheses that intensive care units have significantly high scores in virtual communication and virtual tolerance. The number of years on the job, virtual communication, virtual tolerance, and phubbing significantly predicted 51% of the variance in perception of addiction. Interestingly, the number of years on the job, while significant, was negatively related to perception of addiction. Conclusion: The reasoning behind these findings and the lack of significance in the emergency unit is discussed. Around 7% of the variance in phubbing was accounted for by working in intensive care units. The model accounted for 26.80% of the differences in the perception of addiction.
Keywords: phubbing, social media, working units, years on the job, stress
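A minimal sketch of the mediation logic implied by the results (working unit → phubbing → perception of addiction), using a simple Baron-and-Kenny-style pair of regressions on synthetic data; the variable coding and two-regression procedure are assumptions, not the authors' exact analysis:

```python
# Hedged sketch of a simple mediation model on fabricated data for 400 nurses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
icu = rng.integers(0, 2, 400)                       # 1 = intensive care unit
phubbing = 0.5 * icu + rng.normal(0, 1, 400)        # path a
addiction = 0.4 * phubbing + 0.2 * icu + rng.normal(0, 1, 400)  # paths b, c'

path_a = sm.OLS(phubbing, sm.add_constant(icu)).fit()
path_bc = sm.OLS(addiction,
                 sm.add_constant(np.column_stack([icu, phubbing]))).fit()
print("a =", round(path_a.params[1], 3),
      "b =", round(path_bc.params[2], 3),
      "indirect ab =", round(path_a.params[1] * path_bc.params[2], 3))
```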
Procedia PDF Downloads 53
1158 Optimization of Municipal Solid Waste Management in Peshawar Using Mathematical Modelling and GIS with Focus on Incineration
Authors: Usman Jilani, Ibad Khurram, Irshad Hussain
Abstract:
Environmentally sustainable waste management is a challenging task as it involves multiple and diverse economic, environmental, technical and regulatory issues. Municipal Solid Waste Management (MSWM) is more challenging in developing countries like Pakistan due to lack of awareness, technology and human resources, insufficient funding, and inefficient collection and transport mechanisms, resulting in the lack of a comprehensive waste management system. This work presents an overview of current MSWM practices in Peshawar, the provincial capital of Khyber Pakhtunkhwa, Pakistan, and proposes a better and sustainable integrated solid waste management system with an incineration (waste-to-energy) option. The diverted waste would generate revenue, minimize landfill requirements and reduce the negative impact on the environment. The proposed optimized solution, utilizing scientific techniques (mathematical modeling, optimization algorithms and GIS) as decision support tools, enhances technical and institutional efficiency, leading towards a more sustainable waste management system by incorporating: - improved collection mechanisms through optimized transportation/routing and, - resource recovery through incineration and selection of the most feasible sites for transfer stations, landfills and the incineration plant. The proposed methods shift the linear waste management system towards a cyclic system and can also be used as a decision support tool by the WSSP (Water and Sanitation Services Peshawar), the agency responsible for MSWM in Peshawar.
Keywords: municipal solid waste management, incineration, mathematical modeling, optimization, GIS, Peshawar
Procedia PDF Downloads 376
1157 The Study of the Correlation of Future-Oriented Thinking and Retirement Planning: The Analysis of Two Professions
Authors: Ya-Hui Lee, Ching-Yi Lu, Chien-Hung Hsieh
Abstract:
The purpose of this study is to explore the differences between state-owned-enterprise employees and civil servants regarding their future-oriented thinking and retirement planning. The researchers surveyed 687 middle-aged and older adults (345 state-owned-enterprise employees and 342 civil servants) to understand the relationship between, and the predictive power of, future-oriented thinking and retirement planning. The findings of this study are: 1. There are significant differences between these two professions regarding future-oriented thinking but not retirement planning; the future-oriented thinking of civil servants is overall higher than that of the state-owned-enterprise employees. 2. There are significant differences in both future-oriented thinking and retirement planning among civil servants of different ages: scores at ages 55 and above are significantly higher than at ages 45 or under. For the state-owned-enterprise employees, however, significance was found only in retirement planning, not in future-oriented thinking; retirement planning is higher at ages 55 or above than at other ages. 3. With regard to education, there is no correlation with future-oriented thinking or retirement planning for civil servants. For state-owned-enterprise employees, however, level of education directly affects future-oriented thinking: those with a master's degree or above have greater future-oriented thinking than those with other educational degrees, while for retirement planning there is no correlation. 4. Self-assessment of economic status significantly affects the future-oriented thinking and retirement planning of both civil servants and state-owned-enterprise employees: those who assess their economic status as more affluent are more inclined towards future-oriented thinking and retirement planning. 5. For civil servants, there are significant differences between monthly income and retirement planning, but none with future-oriented thinking. For state-owned-enterprise employees, there are significant differences between monthly income and both retirement planning and future-oriented thinking: employees with higher monthly incomes (1,960 euros and above) show more significant future-oriented thinking and retirement planning than those with lower monthly incomes (1,469 euros and below). 6. For middle-aged and older adults of both professions, future-oriented thinking and retirement planning are positively correlated, and stepwise multiple regression analysis indicates that future-oriented thinking positively predicts retirement planning. The authors present the findings of this study as references for state-owned enterprises, public authorities, and older adult educational program design in Taiwan.
Keywords: state-owned-enterprise employees, civil servants, future-oriented thinking, retirement planning
Procedia PDF Downloads 366
1156 Downregulation of Epidermal Growth Factor Receptor in Advanced Stage Laryngeal Squamous Cell Carcinoma
Authors: Sarocha Vivatvakin, Thanaporn Ratchataswan, Thiratest Leesutipornchai, Komkrit Ruangritchankul, Somboon Keelawat, Virachai Kerekhanjanarong, Patnarin Mahattanasakul, Saknan Bongsebandhu-Phubhakdi
Abstract:
In this era of globalization, much attention has been drawn to various molecular biomarkers that may have the potential to predict the progression of cancer. Epidermal growth factor receptor (EGFR) is the classic member of the ErbB family of membrane-associated intrinsic tyrosine kinase receptors. EGFR expression is found in several organs throughout the body, as its roles involve the regulation of cell proliferation, survival, and differentiation under normal physiologic conditions. However, anomalous expression, whether over- or under-expression, is believed to be the underlying mechanism of pathologic conditions, including carcinogenesis. Even though numerous discussions regarding EGFR as a prognostic tool in head and neck cancer have been published, consensus has not yet been reached. The aims of the present study are to assess the correlation between the level of EGFR expression and demographic data as well as clinicopathological features, and to evaluate the ability of EGFR to serve as a reliable prognostic marker. A further aim is to investigate the probable pathophysiology underlying the findings. This retrospective study included 30 squamous cell laryngeal carcinoma patients treated at King Chulalongkorn Memorial Hospital from January 1, 2000, to December 31, 2004. EGFR expression was observed to be significantly downregulated with the progression of laryngeal cancer stage (one-way ANOVA, p = 0.001). Statistically significantly lower EGFR expression was recorded in the late stage of the disease compared to the early stage (unpaired t-test, p = 0.041). EGFR overexpression also showed a tendency towards increased recurrence of cancer (unpaired t-test, p = 0.128). A significant downregulation of EGFR expression was documented in advanced-stage laryngeal cancer. The results indicate that EGFR level correlates with prognosis in terms of stage progression; thus, EGFR expression might be used as a biomarker for prognostic prediction in laryngeal squamous cell carcinoma.
Keywords: downregulation, epidermal growth factor receptor, immunohistochemistry, laryngeal squamous cell carcinoma
Procedia PDF Downloads 111
1155 Simplified Stress Gradient Method for Stress-Intensity Factor Determination
Authors: Jeries J. Abou-Hanna
Abstract:
Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to producing overly conservative results, numerical methods that require extensive computational effort or copious user parameters hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the utilization of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires the use of a critical volume in which the crack exists. In order to understand the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and results were compared to exact analytical methods. The results indicated that the “exponential” weighting function was superior to the “absolute” weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has shown to be a worthwhile consideration.
Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient
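As a rough illustration of the idea, the sketch below integrates an assumed notch stress profile against the two candidate weighting functions over a critical distance; the profile, the weighting forms and the normalization are placeholders, not the paper's actual formulation:

```python
# Hedged numerical sketch: estimate a stress-intensity-like quantity by
# weighting the stress gradient ahead of the notch, comparing an
# "exponential" and an "absolute" (linear) weighting function.
import numpy as np

a_c = 0.5e-3                              # assumed critical distance [m]
x = np.linspace(1e-6, a_c, 2000)          # distance from the notch root
dx = x[1] - x[0]
sigma = 300e6 / np.sqrt(x / a_c)          # illustrative decaying stress profile [Pa]

w_exp = np.exp(-x / a_c)                  # "exponential" weighting (reported superior)
w_abs = 1.0 - x / a_c                     # "absolute" (linear) weighting, for comparison

for name, w in (("exponential", w_exp), ("absolute", w_abs)):
    K = np.sum(sigma * w) * dx / np.sqrt(a_c)   # crude weighted estimate [Pa*m^0.5]
    print(f"{name} weighting: K ~ {K / 1e6:.1f} MPa*m^0.5")
```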
Procedia PDF Downloads 135
1154 Optimization Technique for the Contractor’s Portfolio in the Bidding Process
Authors: Taha Anjamrooz, Sareh Rajabi, Salwa Bheiry
Abstract:
Selecting among available projects during the bidding process is one of the essential areas for a contractor to concentrate on. It is important for the contractor to choose the right projects within its portfolio during the tendering stage based on certain criteria. It should align the bidding process with its organization strategies and goals, as a screening process, to have the right portfolio pool to start with. It should then set a proper framework and use a suitable technique to optimize its selection process, so as to concentrate its efforts during the tender stage on the goals of success and winning. In this research paper, a two-step framework is proposed to increase the efficiency of the contractor’s bidding process and the chance of winning new project awards. In this framework, all projects initially pass through a first-stage screening process, in which the portfolio basket is evaluated and adjusted in accordance with the organization strategies, producing a reduced portfolio pool that is in line with the organization's activities. In the second stage, the contractor uses linear programming to optimize the portfolio pool based on available resources such as manpower, light equipment, heavy equipment, financial capability, return on investment, and the success rate of winning bids. This optimization model assists the contractor in utilizing its internal resources to the maximum and increases its chance of winning new projects, considering past experience with clients, the established relationship between the two parties, and the complexity of executing the projects. The objective of this research is to increase the contractor's winning chance in the bidding process based on the success rate and expected return on investment.
Keywords: bidding process, internal resources, optimization, contracting portfolio management
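The second-stage optimization can be illustrated with a toy binary selection model: maximize success-rate-weighted return subject to manpower, equipment and capital limits. All numbers are invented, and the paper's actual criteria and formulation may differ:

```python
# Hedged sketch: choose which shortlisted tenders to pursue (0/1 decisions)
# to maximize expected return under resource constraints.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

ret = np.array([120, 80, 150, 60])        # expected return (success-rate weighted)
manpower = np.array([30, 20, 45, 10])     # crews required per project
equipment = np.array([5, 3, 8, 2])        # heavy-equipment units required
capital = np.array([200, 120, 260, 90])   # working capital tied up

A = np.vstack([manpower, equipment, capital])
limits = np.array([70, 10, 450])          # available resources

res = milp(c=-ret,                        # milp minimizes, so negate returns
           constraints=LinearConstraint(A, ub=limits),
           integrality=np.ones(4),        # binary pursue/skip decisions
           bounds=Bounds(0, 1))
print("pursue projects:", np.flatnonzero(res.x > 0.5),
      "expected return:", -res.fun)
```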
Procedia PDF Downloads 142
1153 The Influence of Microscopic Features on the Self-Cleaning Ability of Developed 3D Printed Fabric-Like Structures Using Different Printing Parameters
Authors: Ayat Adnan Atwah, Muhammad A. Khan
Abstract:
Self-cleaning surfaces are receiving significant attention in industrial fields. For textile fabrics in particular, self-cleaning surfaces are usually created by manipulating the surface features with the help of coatings and nanoparticles, which are costly and far more complicated. However, controlling the fabrication parameters of textile fabrics at the microscopic level to explore their potential for self-cleaning has not been addressed. This study aimed to establish self-cleaning textile fabrics by controlling the fabrication parameters of the fabric at the microscopic level. Therefore, 3D-printed textile fabrics were fabricated using the low-cost fused filament fabrication (FFF) technique. The printing parameters orientation angle (O), layer height (LH), and extruder width (EW) were used to control the microscopic features of the printed fabrics. Combinations of the three printing parameters were created to find the best self-cleaning textile fabric surface: LH values of 0.15, 0.13, and 0.10 mm and EW values of 0.5, 0.4, and 0.3 mm, along with two different O values of 45° and 90°. Three different thermoplastic flexible filament materials were used: TPU 98A, TPE felaflex, and TPC flex45. The printing parameters were optimized to obtain the optimum self-cleaning ability of the printed specimens. Furthermore, the impact of these characteristics on mechanical strength at the fabric-woven structure level was investigated. The study revealed that the printing parameters significantly affect the self-cleaning properties after adjusting the selected combination of layer height, extruder width, and printing orientation. A linear regression model was developed to demonstrate the association between the 3D printing parameters (layer height, extruder width, and orientation) and self-cleaning performance. According to the experimental results, TPE felaflex has a better self-cleaning ability than the other two materials.
Keywords: 3D printing, self-cleaning fabric, microscopic features, printing parameters, fabrication
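A minimal sketch of the kind of regression model described, fitting a self-cleaning response against the three factors over the stated factor levels; the response values are fabricated, so only the structure of the model is illustrative:

```python
# Hedged sketch: linear regression of an assumed self-cleaning response
# (e.g., contact angle) on layer height, extruder width and orientation.
import numpy as np
from itertools import product
from sklearn.linear_model import LinearRegression

levels = list(product([0.15, 0.13, 0.10],   # layer height LH [mm]
                      [0.5, 0.4, 0.3],      # extruder width EW [mm]
                      [45, 90]))            # orientation O [deg]
X = np.array(levels)
rng = np.random.default_rng(3)
y = 100 - 80 * X[:, 0] + 20 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, len(X))

model = LinearRegression().fit(X, y)
print("coefficients (LH, EW, O):", model.coef_.round(2),
      "R^2 =", round(model.score(X, y), 3))
```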
Procedia PDF Downloads 90
1152 Maximizing Profit Using Optimal Control by Exploiting the Flexibility in Thermal Power Plants
Authors: Daud Mustafa Minhas, Raja Rehan Khalid, Georg Frey
Abstract:
Next-generation power systems are equipped with abundantly available, free renewable energy resources (RES). During periods of low-cost RES operation, the price of electricity drops significantly and sometimes becomes negative. It is therefore tempting not to operate traditional power plants (e.g. coal power plants) at such times in order to avoid losses. In fact, this is not always a cost-effective solution, because these power plants incur shutdown and startup costs. Moreover, they require a certain time to shut down and also need a long enough pause before starting up again, increasing inefficiency in the whole power network. Hence, there is always a trade-off between avoiding negative electricity prices and the startup costs of power plants. To exploit this trade-off and to increase the profit of a power plant, two main contributions are made: 1) introducing retrofit technology for a state-of-the-art coal power plant; 2) proposing an optimal control strategy for a power plant by exploiting different flexibility features. These flexibility features include improving the ramp rate of the power plant, reducing startup time, and lowering minimum load. The control strategy is solved as a mixed-integer linear program (MILP), ensuring an optimal solution to the profit-maximization problem. Extensive comparisons are made between pre- and post-retrofit coal power plants with the same efficiencies under different electricity price scenarios. The study concludes that if the power plant must remain in the market (providing services), more flexibility translates into a direct economic advantage for the plant operator.
Keywords: discrete optimization, power plant flexibility, profit maximization, unit commitment model
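A miniature version of such a MILP, for a single unit facing an hourly price series that dips negative, can be sketched as follows; the startup cost, minimum load and ramp limit stand in for the flexibility features, and all numbers are illustrative assumptions:

```python
# Hedged sketch: toy unit commitment for one coal unit over 8 hours.
import pulp

price = [40, 35, -5, -10, 20, 55, 60, 45]     # electricity price, EUR/MWh
T = range(len(price))
p_min, p_max, ramp = 150, 400, 200            # MW; ramp limit per hour
c_fuel, c_start = 25, 3000                    # EUR/MWh fuel cost, EUR per startup

m = pulp.LpProblem("plant_profit", pulp.LpMaximize)
on = pulp.LpVariable.dicts("on", T, cat="Binary")
start = pulp.LpVariable.dicts("start", T, cat="Binary")
p = pulp.LpVariable.dicts("p", T, lowBound=0)

# profit = energy margin minus startup costs
m += pulp.lpSum((price[t] - c_fuel) * p[t] - c_start * start[t] for t in T)
for t in T:
    m += p[t] <= p_max * on[t]                # capacity when committed
    m += p[t] >= p_min * on[t]                # minimum stable load
    if t == 0:
        m += start[0] >= on[0]                # plant assumed off before hour 0
    else:
        # simplification: ramp limits also apply across start-up/shut-down
        m += p[t] - p[t - 1] <= ramp
        m += p[t - 1] - p[t] <= ramp
        m += start[t] >= on[t] - on[t - 1]    # flag startups

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, int(pulp.value(on[t])), round(pulp.value(p[t]))) for t in T])
```

The solver weighs running through negative-price hours at minimum load against shutting down and paying the startup cost later, which is exactly the trade-off described above.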
Procedia PDF Downloads 143
1151 Fast Robust Switching Control Scheme for PWR-Type Nuclear Power Plants
Authors: Piyush V. Surjagade, Jiamei Deng, Paul Doney, S. R. Shimjith, A. John Arul
Abstract:
In sophisticated and complex systems such as nuclear power plants, maintaining the system's stability in the presence of uncertainties and disturbances and obtaining a fast dynamic response are the most challenging problems. Thus, to ensure the satisfactory and safe operation of nuclear power plants, this work proposes a new fast, robust, optimal switching control strategy for pressurized water reactor-type nuclear power plants. The proposed control strategy guarantees a substantial degree of robustness, a fast dynamic response over the entire operational envelope, and optimal performance during nominal operation of the plant. To improve robustness, obtain a fast dynamic response, and achieve optimality, a bank of controllers is designed. Various controllers, such as a baseline proportional-integral-derivative (PID) controller, an optimal linear quadratic Gaussian (LQG) controller, and a robust adaptive L1 controller, are designed to perform distinct tasks in specific situations. At any instant of time, the most suitable controller is selected from the bank by a switching logic unit that designates the controller by monitoring the health of the nuclear power plant or transients. The proposed switching control strategy optimizes the overall performance and increases operational safety and efficiency. Simulation studies performed under various uncertainties and disturbances demonstrate the applicability and effectiveness of the proposed switching control strategy over conventional control techniques.
Keywords: switching control, robust control, optimal control, nuclear power control
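The switching logic unit can be sketched, very schematically, as a supervisory function that inspects monitored signals and returns the active controller; the thresholds and signals below are placeholders, not the paper's design:

```python
# Hedged sketch of a supervisory switching rule over a bank of controllers.
def select_controller(error, error_rate, uncertainty_estimate):
    """Return the name of the controller the switching unit activates."""
    if uncertainty_estimate > 0.3:          # large model mismatch -> robust adaptive
        return "L1_adaptive"
    if abs(error) > 0.1 or abs(error_rate) > 0.05:  # transient -> fast optimal
        return "LQG"
    return "PID"                            # nominal steady operation -> baseline

# Example: a transient with modest uncertainty hands control to the LQG member.
print(select_controller(error=0.2, error_rate=0.01, uncertainty_estimate=0.1))
```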
Procedia PDF Downloads 134
1150 Predictive Relationship between Motivation Strategies and Musical Creativity of Secondary School Music Students
Authors: Lucy Lugo Mawang
Abstract:
Educational psychologists have highlighted the significance of creativity in education. Likewise, a fundamental objective of music education concerns the development of students' musical creativity potential. The purpose of this study was to determine the relationship between motivation strategies and musical creativity, and to establish a prediction equation for musical creativity. The study used purposive sampling and a census to select 201 fourth-form music students (139 females / 62 males), mainly from public secondary schools in Kenya. The mean age of participants was 17.24 years (SD = .78). Framed upon self-determination theory and the dichotomous model of achievement motivation, the study adopted an ex post facto research design. A self-report measure, the Achievement Goal Questionnaire-Revised (AGQ-R), was used to collect data for the independent variable. Musical creativity was based on a creative music composition task and measured by the Consensual Musical Creativity Assessment Scale (CMCAS). Data were collected in two separate sessions within an interval of one month: the questionnaire was administered in the first session, lasting approximately 20 minutes, and the second session was for notation of participants' creative compositions. The results indicated a positive correlation r(199) = .39, p ˂ .01 between musical creativity and intrinsic music motivation. Conversely, a negative correlation r(199) = -.19, p < .01 was observed between musical creativity and extrinsic music motivation. The equation for predicting musical creativity from music motivation strategies was significant, F(2, 198) = 20.8, p < .01, with R² = .17; motivation strategies accounted for approximately 17% of the variance in participants' musical creativity. Intrinsic music motivation had the highest significant predictive value (β = .38, p ˂ .01) on musical creativity. In the exploratory analysis, a significant mean difference t(118) = 4.59, p ˂ .01 in musical creativity between intrinsically and extrinsically motivated participants was observed, in favour of the intrinsically motivated. Further, a significant gender difference t(93.47) = 4.31, p ˂ .01 in musical creativity was observed, with male participants scoring higher than females. However, there was no significant difference in participants' musical creativity based on age. The study recommends that music educators strive to enhance intrinsic music motivation among students. Specifically, schools should create conducive environments and provide interventions for the development of intrinsic music motivation, since it is the motivation strategy most facilitative of musical creativity.
Keywords: extrinsic music motivation, intrinsic music motivation, musical creativity, music composition
Procedia PDF Downloads 154
1149 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters, such as rupture velocity and slip rate function, that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under a heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modelling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the source scaling relations vs. seismic moment Mo for the modelled rupture area S, as well as the average slip Dave and the slip asperity area Sa, with similar scaling relations from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGV) agree well with the GMPE values. We also obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modelling. The main findings are: (1) high slip-rate areas coincide with, or are located on an outer edge of, the large slip areas; (2) ruptures have a tendency to initiate in small Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
Procedia PDF Downloads 144
1148 The Impact of the Board of Directors’ Characteristics on Tax Aggressiveness in USA Companies
Authors: Jihen Ayadi Sellami
Abstract:
The rapid evolution of the global financial landscape has led to increased attention to corporate tax policies and the need to understand the factors that influence corporate tax behavior. In order to mitigate any residual loss for shareholders resulting from tax aggressiveness and to resolve the agency problem, appropriate systems that separate the function of management from that of control are needed. In this context of growing concern to limit aggressive corporate taxation practices through governance, this study examines the influence of six key characteristics of the board of directors (board size, diligence, CEO duality, the presence of audit committees, gender diversity, and director independence), as a governance mechanism, on the tax decisions of non-financial corporations in the United States. Using a sample of 90 non-financial US firms from the S&P 500 over the four-year period from 2014 to 2017, the results, based on a multivariate linear regression, highlight significant associations between these characteristics and corporate tax policy. Notably, larger boards, gender diversity, diligence, and increased director independence appear to play an important role in reducing aggressive taxation. Duality, by contrast, has a positive and significant correlation with tax aggressiveness, which may be explained by managers exploiting their combined position within the company. These findings contribute to a deeper understanding of how board characteristics can influence corporate tax management, providing avenues for more effective corporate governance and more responsible tax decision-making.
Keywords: tax aggressiveness, board of directors, board size, CEO duality, audit committees, gender diversity, director independence, diligence, corporate governance, United States
Procedia PDF Downloads 61
1147 Use of a Chagas Urine Nanoparticle Test (Chunap) to Correlate with Parasitemia Levels in T. cruzi/HIV Co-Infected Patients
Authors: Yagahira E. Castro-Sesquen, Robert H. Gilman, Carolina Mejia, Daniel E. Clark, Jeong Choi, Melissa J. Reimer-Mcatee, Rocio Castro, Jorge Flores, Edward Valencia-Ayala, Faustino Torrico, Ricardo Castillo-Neyra, Lance Liotta, Caryn Bern, Alessandra Luchini
Abstract:
Early diagnosis of reactivation of Chagas disease in HIV patients could be lifesaving; however, in Latin America the diagnosis is performed by detection of parasitemia by microscopy, which lacks sensitivity. The objective was to evaluate whether levels of T. cruzi antigens in urine, determined by Chunap (Chagas urine nanoparticle test), are correlated with parasitemia levels in T. cruzi/HIV co-infected patients. T. cruzi antigens in the urine of HIV patients (N=55: 31 T. cruzi infected and 24 T. cruzi serology negative) were concentrated using hydrogel particles and quantified by Western blot and a calibration curve. The percentage of Chagas-positive patients determined by Chunap compared to blood microscopy, qPCR, and ELISA was 100% (6/6), 95% (18/19) and 74% (23/31), respectively. Chunap specificity was 91.7%. Linear regression analysis demonstrated a direct relationship between parasitemia levels (determined by qPCR) and urine T. cruzi antigen concentrations (p < 0.001). A cut-off of > 105 pg was chosen to identify patients with reactivation of Chagas disease (6/6). Urine antigen concentration was significantly higher among patients with CD4+ lymphocyte counts below 200/mL (p = 0.045). Chunap shows potential for early detection of reactivation and, with appropriate adaptation, can be used for monitoring Chagas disease status in T. cruzi/HIV co-infected patients.
Keywords: antigenuria, Chagas disease, Chunap, nanoparticles, parasitemia, poly N-isopropylacrylamide (NIPAm)/trypan blue particles (polyNIPAm/TB), reactivation of Chagas disease
Procedia PDF Downloads 377
1146 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle
Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad
Abstract:
Islamic culture was responsible for a plethora of developments in astronomy and science in the medieval era, and likewise in geometry. Geometric patterns are prominent in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square and the hexagon. Each of these geometric shapes has its own specific structure originating from its essential nature. Even though the geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced, and arranged in intricate combinations. In order to explain the principles of geometrical interaction between the square and the equal triangle, the first definition step illustrates all types of their linear forces individually, and the second step illustrates the linear forces between them. In this analysis, angles are created from the intersections of their directions; all angles are categorized into groups, and the mathematical expressions among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, the evaluation results obtained from a small portion are attributable to a large-scale domain, while the development of infinitely repeating patterns can represent unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and mathematical relationships obtained from this analysis are applicable to designing architectural layers and developing the plan design.
Keywords: angle, equal triangle, square, structural hierarchy
Procedia PDF Downloads 195
1145 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria
Authors: R. Santos Alimi
Abstract:
Using time-series techniques, this study empirically tested the validity of existing theory stipulating a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. The study employed three estimation equations. First, for the size of government, two measures were considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP (without the government expenditure component) as a variant measure of economic growth, alongside real total GDP, in estimating the optimal level of government expenditure. The study is based on annual Nigerian country-level data for the period 1970 to 2012. Estimation results show that an inverted U-shaped curve exists for both measures of government size, with estimated optimum shares of 19.81% and 10.98%, respectively. Finally, with the adoption of real GDP (without the government expenditure component), the optimum government size was found to be 12.58% of GDP. Our analysis shows that the actual share of government spending on average (2000-2012) is about 13.4%. This study adds to the literature confirming that an optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point, economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
Keywords: public expenditure, economic growth, optimum level, fully modified OLS
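The optimum share in this literature is typically recovered as the vertex of a fitted quadratic (the inverted U). A compact sketch with synthetic data; the study's own estimates (19.81%, 10.98%, 12.58%) come from the Nigerian series, not from this toy fit:

```python
# Hedged sketch: fit growth on government share and its square, then take
# the vertex g* = -b1 / (2*b2) as the estimated optimum.
import numpy as np

rng = np.random.default_rng(4)
g = rng.uniform(5, 35, 43)                           # government share of GDP, %
growth = 2 + 0.50 * g - 0.0125 * g**2 + rng.normal(0, 0.3, 43)

b2, b1, b0 = np.polyfit(g, growth, 2)                # quadratic fit, highest degree first
print(f"estimated optimum share: {-b1 / (2 * b2):.2f}% of GDP")
```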
Procedia PDF Downloads 420
1144 Electrochemical Detection of the Chemotherapy Agent Methotrexate in vitro from Physiological Fluids Using Functionalized Carbon Nanotube Paste Electrodes
Authors: Shekher Kummari, V. Sunil Kumar, K. Vengatajalabathy Gobi
Abstract:
A simple, cost-effective, reusable and reagent-free electrochemical biosensor based on a functionalized multiwall carbon nanotube paste electrode (f-CNTPE) is developed for the sensitive and selective determination of the important chemotherapeutic drug methotrexate (MTX), which is widely used in the treatment of various cancers and autoimmune diseases. The electrochemical response of the fabricated electrode towards MTX is examined by cyclic voltammetry (CV), differential pulse voltammetry (DPV) and square wave voltammetry (SWV). CV studies showed that the f-CNTPE system exhibits excellent electrocatalytic activity towards the oxidation of MTX in phosphate buffer (0.2 M) compared with a conventional carbon paste electrode (CPE): the oxidation peak current is enhanced by nearly two times in magnitude. Applying the DPV method under optimized conditions, a linear calibration plot is achieved over a wide concentration range from 4.0×10⁻⁷ M to 5.5×10⁻⁶ M, with a detection limit of 1.6×10⁻⁷ M. Further, applying the SWV method, a parabolic calibration plot was achieved starting from a concentration as low as 1.0×10⁻⁸ M; the sensor could detect as little as 2.9×10⁻⁹ M MTX in 10 s, and 10 nM was detected in steady-state current-time analysis. The f-CNTPE shows very good selectivity towards the specific recognition of MTX in the presence of important biological interferents. The electrochemical biosensor detects MTX in vitro directly in a pharmaceutical sample, undiluted urine and human blood serum samples at a concentration of 5.0×10⁻⁷ M with good recovery limits.
Keywords: amperometry, electrochemical detection, human blood serum, methotrexate, MWCNT, SWV
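An illustrative version of the DPV calibration workflow: fit peak current against concentration over the reported linear range and estimate a detection limit as 3.3·σ/slope. The current values and sensitivity are invented; only the concentration range is taken from the text:

```python
# Hedged sketch: linear calibration fit and detection-limit estimate.
import numpy as np
from scipy import stats

conc = np.linspace(4.0e-7, 5.5e-6, 8)                     # mol/L, reported linear range
rng = np.random.default_rng(5)
current = 2.1e6 * conc + 0.05 + rng.normal(0, 0.02, 8)    # peak current, uA (assumed sensitivity)

fit = stats.linregress(conc, current)
sigma = np.std(current - (fit.slope * conc + fit.intercept), ddof=2)
print(f"slope = {fit.slope:.3e} uA/M, LOD ~ {3.3 * sigma / fit.slope:.2e} M")
```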
Procedia PDF Downloads 309
1143 Comparison of Mini-BESTest versus Berg Balance Scale to Evaluate Balance Disorders in Parkinson's Disease
Authors: R. Harihara Prakash, Shweta R. Parikh, Sangna S. Sheth
Abstract:
The purpose of this study was to explore the usefulness of the Mini-BESTest compared to the Berg Balance Scale in evaluating balance in people with Parkinson's Disease (PD) of varying severity. Evaluations were done to obtain (1) the distribution of patients' scores, to look for ceiling effects, (2) concurrent validity with severity of disease, and (3) the sensitivity and specificity of separating people with or without postural response deficits. Methods and Material: Seventy-seven (77) people with Parkinson's Disease were tested for balance deficits using the Berg Balance Scale and Mini-BESTest. The Unified Parkinson's Disease Rating Scale (UPDRS) III and the Hoehn & Yahr (H&Y) disease severity scales were used for classification. Materials used in this study were a case record sheet, a chair without arm rests or wheels, an incline ramp, a stopwatch, a box, and a 3-meter distance measured out from the chair and marked on the floor with tape. Statistical analysis used: multiple linear regression of the UPDRS jointly on the two scores for the Berg and Mini-BESTest; receiver operating characteristic (ROC) curves for classifying people into two groups based on a threshold for the H&Y score, to discriminate between mild PD and more severe PD; and correlation coefficients to assess the relationship between the two variables. Results: The Mini-BESTest is highly correlated with the Berg (r = 0.732, P < 0.001), but avoids the ceiling compression effect of the Berg for mild PD (skewness −0.714 Berg, −0.512 Mini-BESTest). Consequently, the Mini-BESTest is more effective than the Berg for predicting UPDRS motor score (P < 0.001 Mini-BESTest versus P = 0.72 Berg), and for discriminating between those with and without postural response deficits as measured by the H&Y (ROC analysis).
Keywords: balance, Berg Balance Scale, Mini-BESTest, Parkinson's disease
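The ROC comparison can be sketched as scoring each scale's ability to separate mild from more severe PD; the scores below are simulated stand-ins constructed to mimic the reported ceiling effect, not the study's data:

```python
# Hedged sketch: compare discriminative ability of two balance scales by AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
severe = rng.integers(0, 2, 77)                      # 1 = postural response deficit (H&Y-based)
mini_best = 24 - 6 * severe + rng.normal(0, 2, 77)   # lower score when severe
berg = 52 - 3 * severe + rng.normal(0, 2, 77)        # compressed near the ceiling

# Lower balance scores indicate impairment, so negate them for the AUC.
print("Mini-BESTest AUC:", round(roc_auc_score(severe, -mini_best), 2))
print("Berg AUC:", round(roc_auc_score(severe, -berg), 2))
```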
Procedia PDF Downloads 394
1142 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products
Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad
Abstract:
A method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe₃O₄/Al₂O₃ NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on complexation of Fe(II) with 1,10-phenanthroline (OP), a complexing reagent for Fe(II), immobilized on the modified Fe₃O₄/Al₂O₃ NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. The adsorbents were then easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being reduced to Fe(II) by adding a proper reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplifies the operation procedure and reduces the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 was obtained for Fe(II). The detection limit and linear range of this method for iron were 1.0 ng.mL⁻¹ and 9.0-175 ng.mL⁻¹, respectively. The relative standard deviation for five replicate determinations of 30.00 ng.mL⁻¹ Fe²⁺ was 2.3%.
Keywords: alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample
Procedia PDF Downloads 292
1141 Magnetohydrodynamic Flow of Viscoelastic Nanofluid and Heat Transfer over a Stretching Surface with Non-Uniform Heat Source/Sink and Non-Linear Radiation
Authors: Md. S. Ansari, S. S. Motsa
Abstract:
In this paper, an analysis is made of the flow of a non-Newtonian viscoelastic nanofluid over a linearly stretching sheet under the influence of a uniform magnetic field. Heat transfer characteristics are analyzed taking into account the effects of nonlinear radiation and a non-uniform heat source/sink. The transport equations contain the simultaneous effects of Brownian motion and thermophoretic diffusion of nanoparticles. The relevant partial differential equations are non-dimensionalized and transformed into ordinary differential equations using appropriate similarity transformations. The transformed, highly nonlinear ordinary differential equations are solved by the spectral local linearisation method. The numerical convergence, error and stability analyses of the iteration schemes are presented. The effects of different controlling parameters, namely radiation, space- and temperature-dependent heat source/sink, Brownian motion, thermophoresis, the viscoelastic parameter, the Lewis number and the magnetic force parameter, on the flow field, heat transfer characteristics and nanoparticle concentration are examined. The present investigation has many industrial and engineering applications in the fields of coatings and suspensions, cooling of metallic plates, oils and grease, paper production, coal-water or coal-oil slurries, heat exchanger technology, and materials processing.
Keywords: magnetic field, nonlinear radiation, non-uniform heat source/sink, similar solution, spectral local linearisation method, Rosseland diffusion approximation
Procedia PDF Downloads 372
1140 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication
Authors: Vedant Janapaty
Abstract:
Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Also known as the “kidneys of our planet”, they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and loss of native species are ailing our wetlands. There is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project attempts to use satellite data and correlate derived metrics with in situ observations collected at five estuaries. Satellite images were processed in Python to calculate seven spectral indices (SIs), and average SI values were computed per month for 23 years. Publicly available data from six sites at ELK were used to obtain ten in situ parameters (OPs), whose average values were likewise computed per month for 23 years. Linear correlations between the seven SIs and ten OPs were found to be inadequate (correlation = 1 to 64%). Fourier transform analysis was then performed on the seven SIs; dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the ten OPs. Better correlations were observed between SIs and OPs at certain time delays (0-, 3-, 4-, and 6-month delays), and ML was performed again, improving the R² values for the OPs to the range of 0.2 to 0.93. This approach can be used to obtain periodic analyses of overall wetland health from satellite indices. It shows that remote sensing can be used to develop correlations with critical eutrophication parameters measured in situ, and can be used by practitioners to easily monitor wetland health.
Keywords: estuary, remote sensing, machine learning, Fourier transform
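A condensed sketch of the described pipeline, which the abstract notes was implemented in Python: FFT a monthly index series, keep dominant frequencies and amplitudes as features, and fit an ML regressor. The series, feature layout and the choice of random forest are assumptions, not the project's exact code:

```python
# Hedged sketch: FFT feature extraction from a monthly spectral-index series,
# then regression against an in-situ water quality parameter.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
months = 23 * 12
si = np.sin(2 * np.pi * np.arange(months) / 12) + 0.1 * rng.normal(size=months)

spectrum = np.fft.rfft(si)
amps = np.abs(spectrum)
top = np.argsort(amps)[-5:]                       # five dominant frequencies
features = np.concatenate([np.fft.rfftfreq(months)[top], amps[top]])

# In practice there is one feature vector per (site, index) series; here the
# vector is duplicated with noise just to have something to fit.
X = features + 0.01 * rng.normal(size=(30, features.size))
y = rng.normal(size=30)                           # stand-in for an observed parameter
model = RandomForestRegressor(n_estimators=100).fit(X, y)
print("training R^2:", round(model.score(X, y), 2))
```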
Procedia PDF Downloads 104
1139 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia
Authors: Zeinu Ahmed Rabba, Derek D Stretch
Abstract:
Remote sensing contributes valuable information to streamflow estimates. Usually, streamflow is measured directly at ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits water resources management and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means to acquire such information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical measures (Bias, R², NS and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the bias-unadjusted ones. The SBPPs without bias adjustment tend to overestimate (high Bias and high RMSE) extreme precipitation events and the corresponding simulated streamflow outputs, particularly during wet months (June-September), and to underestimate streamflow over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the simulated streamflow using gauged rainfall is superior to that obtained from remotely sensed rainfall products, including the bias-adjusted ones.
Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase
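The four evaluation statistics named above can be written out directly; one common set of definitions is sketched below (the exact "Bias" formula used in the study may differ), with placeholder arrays standing in for the gauge record and the PyTOPKAPI output:

```python
# Hedged sketch: Bias, R^2, Nash-Sutcliffe (NS) and RMSE for paired daily
# observed and simulated streamflow series.
import numpy as np

def evaluate(obs, sim):
    bias = np.sum(sim - obs) / np.sum(obs)                  # relative bias
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2                   # coefficient of determination
    ns = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return bias, r2, ns, rmse

rng = np.random.default_rng(8)
obs = np.abs(rng.gamma(2.0, 10.0, 3650))          # ten years of daily flow, m^3/s
sim = obs * 1.05 + rng.normal(0, 2, 3650)         # a slightly biased simulation
print("Bias=%.3f  R2=%.3f  NS=%.3f  RMSE=%.2f" % evaluate(obs, sim))
```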
Procedia PDF Downloads 286
1138 Architecture - Performance Relationship in GPU Computing - Composite Process Flow Modeling and Simulations
Authors: Ram Mohan, Richard Haney, Ajit Kelkar
Abstract:
Current developments in computing have shown the advantage of using one or more Graphics Processing Units (GPUs) to boost the performance of many computationally intensive applications, but there are still limits to these GPU-enhanced systems. The major factors that limit GPU performance for High Performance Computing (HPC) can be categorized as hardware- or software-oriented in nature. Understanding how these factors affect performance is essential to developing efficient and robust application codes that employ one or more GPU devices as powerful co-processors for HPC computational modeling. This research and technical presentation focuses on the analysis and understanding of the intrinsic interrelationship of both hardware and software categories on computational performance for single and multiple GPU-enhanced systems, using a computationally intensive application that is representative of a large portion of the challenges confronting modern HPC. The representative application uses unstructured finite element computations for transient composite resin infusion process flow modeling as its computational core; its characteristics and results reflect many other HPC applications via the sparse matrix system used to solve the linear system of equations. This work describes these various software and hardware factors and how they interact to affect the performance of computationally intensive applications, enabling more efficient development and porting of High Performance Computing applications, including current, legacy, and future large-scale computational modeling applications in various engineering and scientific disciplines.
Keywords: graphical processing unit, software development and engineering, performance analysis, system architecture and software performance
Procedia PDF Downloads 363
1137 Study on Control Techniques for Adaptive Impact Mitigation
Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty
Abstract:
Progress in the fields of sensors, electronics and computing results in ever more frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezo-electric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion used for the evacuation of people from heights during a fire. For both systems it is possible to ensure adaptive performance, but the realization of the system's adaptation is completely different. The reason for this is the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with minimal changes of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of square centimetres. Because of these facts, the two shock-absorbing systems are controlled using completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided by a prediction of the entire impact mitigation process. Similarities between the two approaches, including the applied models, algorithms and equipment, are discussed. The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber
Procedia PDF Downloads 91
1136 Impure CO₂ Solubility Trapping in Deep Saline Aquifers: Role of Operating Conditions
Authors: Seyed Mostafa Jafari Raad, Hassan Hassanzadeh
Abstract:
Injection of impurities along with CO₂ into saline aquifers provides an exceptional prospect for low-cost carbon capture and storage technologies and can potentially accelerate the large-scale implementation of geological storage of CO₂. We have conducted linear stability analyses and numerical simulations to investigate the effects of permitted impurities in CO₂ streams on the onset of natural convection and the dynamics of subsequent convective mixing. We have shown that the rate of dissolution of an impure CO₂ stream containing H₂S depends strongly on operating conditions such as temperature, pressure, and the composition of the impurity. Contrary to the findings of previous studies, our results show that an impurity such as H₂S can potentially reduce the onset time of natural convection and accelerate the subsequent convective mixing. At later times, however, the rate of convective dissolution is adversely affected by the impurities. Therefore, the injection of an impure CO₂ stream can be engineered to improve the rate of dissolution of CO₂, which leads to higher storage security and efficiency. Accordingly, we have identified the most favorable CO₂ stream compositions based on the geophysical properties of target aquifers. Information related to the onset of natural convection, such as the scaling relations and the most favorable operating conditions for CO₂ storage developed in this study, is important for the proper design, site screening, characterization and safety of geological storage. This information can be used either to identify future geological candidates for acid gas disposal or to review the operating conditions of currently licensed injection sites.
Keywords: CO₂ storage, solubility trapping, convective dissolution, storage efficiency
Procedia PDF Downloads 206
1135 Investigation of the Functional Impact of Amblyopia on Visual Skills in Children
Authors: Chinmay V. Deshpande
Abstract:
Purpose: To assess the efficiency of visual functions and visual skills in strabismic and anisometropic amblyopes, and to assess visual acuity and contrast sensitivity in anisometropic amblyopes with spectacles and contact lenses. Method: In a prospective clinical study, 32 children aged 5 to 15 years presenting with amblyopia in the pediatric department of Shri Ganapati Netralaya, Jalna, India, were assessed over a period of three and a half months. Visual acuity was measured with Snellen and Bailey-Lovie logMAR charts, whereas contrast sensitivity was measured with the Pelli-Robson chart, with spectacles and contact lenses. Saccadic movements were assessed with the SCCO scoring criteria, and accommodative facility was checked with ±1.50 DS flippers. Stereopsis was assessed with the TNO test. Results: Using the Wilcoxon signed-rank test with p-value < 0.05 (< 0.001), the mean linear visual acuity was 0.29 (≈ 6/21) and the mean single-optotype visual acuity was 0.36 (≈ 6/18). Mean visual acuity of 0.27 (≈ 6/21) with spectacles improved to 0.33 (≈ 6/18) with contact lenses in amblyopic eyes. The mean logMAR visual acuities with spectacles and contact lenses were 0.602 (≈ 6/24) and 0.531 (≈ 6/21), respectively. Among the 20 amblyopic eyes assessed for contrast, the mean contrast threshold improved from 0.27 with spectacles to 0.19 with contact lenses in 9 patients. The mean accommodative facility was 5.31 (±2.37). Twenty-four subjects (75%) revealed marked saccadic defects on the test applied, and 78% of subjects did not show even gross stereoscopic ability on the TNO test. Conclusion: This study supports previously reported findings on amblyopia and its associated deficits in visual skills. In addition, anisometropic amblyopia can be managed better with contact lenses.
Keywords: strabismus, anisometropia, amblyopia, contrast sensitivity, saccades, stereopsis
Procedia PDF Downloads 421
1134 Comprehensive Validation of High-Performance Liquid Chromatography-Diode Array Detection (HPLC-DAD) for Quantitative Assessment of Caffeic Acid in Phenolic Extracts from Olive Mill Wastewater
Authors: Layla El Gaini, Majdouline Belaqziz, Meriem Outaki, Mariam Minhaj
Abstract:
In this study, we introduce and validate a high-performance liquid chromatography method with diode-array detection (HPLC-DAD) specifically designed for the accurate quantification of caffeic acid in phenolic extracts obtained from olive mill wastewater. Separation of caffeic acid was effectively achieved using an Acclaim Polar Advantage column (5 µm, 250 x 4.6 mm). A meticulous multi-step gradient mobile phase, comprising water acidified with phosphoric acid (pH 2.3) and acetonitrile, was employed to ensure optimal separation. Diode-array detection was conducted within the UV-VIS spectrum, spanning a range of 200-800 nm, which facilitated precise analytical results. The method underwent comprehensive validation, addressing several essential analytical parameters including specificity, repeatability, linearity, the limits of detection and quantification, and measurement uncertainty. The generated linear standard curves displayed high correlation coefficients, underscoring the method's efficacy and consistency. This validated approach is not only robust but also demonstrates exceptional reliability for the focused analysis of caffeic acid within the intricate matrices of wastewater, thus offering significant potential for applications in environmental and analytical chemistry.
Keywords: high-performance liquid chromatography (HPLC-DAD), caffeic acid analysis, olive mill wastewater phenolics, analytical method validation
Procedia PDF Downloads 70
1133 Arterial Compliance Measurement Using Split Cylinder Sensor/Actuator
Authors: Swati Swati, Yuhang Chen, Robert Reuben
Abstract:
Coronary stents are tube-shaped devices placed in coronary arteries to keep the arteries open in the treatment of coronary artery disease, and they are routinely deployed to clear atheromatous plaque. The stent essentially applies an internal pressure to the artery because its structure is cylindrically symmetrical, and this may introduce some abnormalities in the final arterial shape. The goal of the project is to develop segmented circumferential arterial compliance measuring devices which can (eventually) be deployed in vivo. The segmentation of the device will allow the mechanical asymmetry of any stenosis to be assessed. The purpose will be to assess the quality of arterial tissue for applications in tailored stents and in the assessment of aortic aneurysm. Arterial distensibility measurement is of utmost importance for diagnosing cardiovascular diseases and for the prediction of future cardiac events or coronary artery disease. In order to arrive at some generic outcomes, a preliminary experimental set-up has been devised to establish the measurement principles for the device at macro-scale. The measurement methodology consists of a strain gauge system monitored by LabVIEW software in real time. This virtual instrument employs a balloon within a gelatine model contained in a split cylinder with strain gauges fixed on it. The instrument allows automated measurement of the effect of air pressure on the gelatine and measurement of strain with respect to time and pressure during inflation. A simple creep compliance model has been applied to the results to extract some measures of arterial compliance. The results obtained from the experiments have been used to study the effect of air pressure on strain at varying time intervals. The results clearly demonstrate that as arterial volume decreases and arterial pressure increases, arterial strain increases, thereby decreasing the arterial compliance. The measurement system could lead to the development of portable, inexpensive and small equipment and could prove to be an efficient automated compliance measurement device.
Keywords: arterial compliance, atheromatous plaque, mechanical symmetry, strain measurement
Procedia PDF Downloads 279
1132 Modelling and Simulation Efforts in Scale-Up and Characterization of Semi-Solid Dosage Forms
Authors: Saurav S. Rath, Birendra K. David
Abstract:
The generic pharmaceutical industry has to operate under strict timelines for product development and scale-up from lab to plant. Hence, detailed product and process understanding and the implementation of appropriate mechanistic modelling and Quality-by-Design (QbD) approaches are imperative in the product life cycle. This work provides example cases of such efforts for topical dosage products. Topical products are typically emulsions, gels, thick suspensions or even simple solutions. The efficacy of such products is determined by characteristics like rheology and morphology. Defining, and scaling up, the right manufacturing process with a given set of ingredients to achieve the right product characteristics presents a challenge to the process engineer. For example, the non-Newtonian rheology varies not only with CPPs and CMAs but is also an implicit function of globule size (a CQA). Hence, this calls for various mechanistic models to help predict product behaviour. This paper focuses on such models obtained from computational fluid dynamics (CFD) coupled with population balance modelling (PBM) and constitutive models (like shear and energy density). For the special case of high shear homogenisers (HSHs) used in the manufacture of thick emulsions/gels, this work presents findings on (i) a scale-up algorithm for the HSH using shear strain, a novel scale-up parameter for estimating mixing parameters, (ii) the non-linear relationship between viscosity and the shear imparted into the system, and (iii) the effect of hold time on product rheology. Specific examples of how this approach enabled scale-up across the 1 L, 10 L, 200 L, 500 L and 1000 L scales will be discussed.
Keywords: computational fluid dynamics, morphology, quality-by-design, rheology
Procedia PDF Downloads 269