Search results for: leadership models
1774 3D Medical Printing the Key Component in Future of Medical Applications
Authors: Zahra Asgharpour, Eric Renteria, Sebastian De Boodt
Abstract:
There is a growing trend towards the personalization of medical care, as evidenced by the emphasis on outcomes-based medicine, the latest developments in CT and MR imaging, and personalized treatment in a variety of surgical disciplines. 3D printing has been applied in the medical field since 2000, with the first applications in dental implants and custom prosthetics. According to recent publications, 3D printing in the medical field has been used in a wide range of applications that can be organized into several categories, including implants, prosthetics, anatomical models, and tissue bioprinting. Some of these categories are still at the proof-of-concept stage, while others, such as the design and manufacturing of customized implants and prostheses, are in the application phase. In this category, 3D printing has been successfully used in the health care sector to make both standard and complex implants within a reasonable amount of time. This study describes clinical applications of 3D printing in the design and manufacturing of a patient-specific hip implant. For patients with complex bone geometries or undergoing complex hip-replacement revisions, traditional surgical methods are not efficient, and patient-specific approaches are required. There are major advantages in using this technology for medical applications; however, for it to be widely accepted in the medical device industry, greater acceptance from medical device regulatory bodies is needed. This challenge is being addressed and will ultimately help the technology establish itself as an accepted manufacturing method for the medical device industry on an international scale.
The discussion will conclude with some examples describing the future directions of 3D medical printing.
Keywords: CT/MRI, image processing, 3D printing, medical devices, patient specific implants
Procedia PDF Downloads 302
1773 The Application of Lesson Study Model in Writing Review Text in Junior High School
Authors: Sulastriningsih Djumingin
Abstract:
This study has three objectives. First, it describes the ability of second-grade students at SMPN 18 Makassar to write review text without applying the Lesson Study model. Second, it describes their ability to write review text when the Lesson Study model is applied. Third, it tests the effectiveness of the Lesson Study model for writing review text at SMPN 18 Makassar. The research used a true experimental, posttest-only control group design involving two groups: one control class and one experimental class. The research population comprised all 250 second-grade students at SMPN 18 Makassar, distributed across eight classes, and the sample was selected by purposive sampling. The control class was VIII2, consisting of 30 students, and the experimental class was VIII8, also consisting of 30 students. The research instruments were observations and tests. The collected data were analyzed using descriptive statistics and inferential statistics (t-tests) processed with SPSS 21 for Windows. The results show that: (1) of the 30 students in the control class, only 14 (47%) scored more than 7.5, categorized as inadequate; (2) in the experimental class, 26 students (87%) reached a score of 7.5, categorized as adequate; (3) the Lesson Study model is effective for teaching the writing of review text. The comparison between the control and experimental classes shows that the computed t-value exceeds the critical t-value (2.411 > 1.667), so the alternative hypothesis (H1) proposed by the researcher is accepted.
Keywords: application, lesson study, review text, writing
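The decision rule used in this abstract (reject H0 when the computed t exceeds the table value, here 2.411 > 1.667) can be sketched with a pooled two-sample t statistic. This is an illustrative implementation only; the sample values below are hypothetical, not the study's data.

```python
import math

def pooled_t_statistic(a, b):
    """Two-sample t statistic with pooled variance (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def reject_null(t_value, t_critical):
    """The abstract's decision rule: reject H0 when t-count > t-table."""
    return t_value > t_critical
```

With the reported values, `reject_null(2.411, 1.667)` returns True, matching the stated acceptance of H1.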
Procedia PDF Downloads 204
1772 Spectroscopic Relation between Open Cluster and Globular Cluster
Authors: Robin Singh, Mayank Nautiyal, Priyank Jain, Vatasta Koul, Vaibhav Sharma
Abstract:
The curiosity to investigate space and its mysteries has always been a driving force of human enquiry, and the urge to uncover the secrets of stars and their unusual behaviour has long been an igniter of stellar research. Just as humankind lives in communities and states, stars live in colonies named 'clusters'. Clusters are divided into two types: open clusters and globular clusters. An open cluster is a group of up to a few thousand stars formed from the same giant molecular cloud, mostly containing metal-rich Population I stars, whereas a globular cluster is a roughly spherical collection of more than thirty thousand stars that orbits a galactic centre and consists mainly of old, metal-poor Population II stars. This paper presents a spectroscopic investigation of the globular clusters M92 and NGC 419 and the open clusters M34 and IC 2391 in different colour bands, using software such as the VIREO virtual observatory, Aladin, CMUNIWIN, and MS-Excel. The resulting Hertzsprung-Russell (HR) diagrams are assessed against classical cosmological models such as the Einstein model, the De Sitter model, and the Planck survey for a better age estimation of the respective clusters. Colour-magnitude diagrams of these clusters were obtained by photometric analysis in the g and r bands and transformed into the BV system, which reveals the nature of the stars present in the individual clusters.
Keywords: color magnitude diagram, globular clusters, open clusters, Einstein model
Procedia PDF Downloads 228
1771 Optical Board as an Artificial Technology for a Peer Teaching Class in a Nigerian University
Authors: Azidah Abu Ziden, Adu Ifedayo Emmanuel
Abstract:
This study investigated the optical board as an artificial technology for peer teaching in a Nigerian university. A design and development research (DDR) design was adopted, entailing the planning and testing of the instructional design models used to produce the optical board. The research population comprised twenty-five (25) peer-teaching students at a Nigerian university from theatre arts, religion, and language education-related disciplines. Using a random sampling technique, eight (8) students were selected to work on the optical board. The study also introduced a research instrument titled 'lecturer assessment rubric', containing a 30-mark metric for evaluating students' teaching with the optical board. The study found that the optical board affords students the acquisition of self-employment skills through their exposure to the peer teaching course, a teacher-training module in Nigerian universities. It is evident in this study that students were able to coordinate their design and effectively develop the optical board without the lecturer's interference. This achievement shows that the Nigerian university curriculum has been designed with content meant to spur students to create jobs after graduation, and that effective implementation of the readily available curriculum is enough to imbue students with the needed entrepreneurial skills. It is recommended that the Federal Government of Nigeria (FGN) discourage the poor implementation of the Nigerian university curriculum and invest in improving the existing curriculum rather than adopting a nominally new curriculum for an unchanged teaching and learning process.
Keywords: optical board, artificial technology, peer teaching, educational technology, Nigeria, Malaysia, university, glass, wood, electrical, improvisation
Procedia PDF Downloads 69
1770 Removal of Metal Ions (II) Using a Synthetic Bis(2-Pyridylmethyl)Amino-Chloroacetyl Chloride-Ethylenediamine-Grafted Graphene Oxide Sheets
Authors: Laroussi Chaabane, Emmanuel Beyou, Amel El Ghali, Mohammed Hassen V. Baouab
Abstract:
Graphene oxide sheets were functionalized with ethylenediamine (EDA), followed by grafting of the bis(2-pyridylmethyl)amino group (BPED) onto the activated sheets in the presence of chloroacetyl chloride (CAC), producing the material [(GO-EDA-CAC)-BPED]. The physico-chemical properties of the [(GO-EDA-CAC)-BPED] composites were investigated by Fourier transform infrared spectroscopy (FT-IR), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), and thermogravimetric analysis (TGA). [(GO-EDA-CAC)-BPED] was then used for removing M(II) ions (M = Cu, Ni, Co) from aqueous solutions in a batch process, and the effects of pH, contact time, and temperature were investigated. The [(GO-EDA-CAC)-BPED] adsorbent exhibited remarkable performance in capturing heavy metal ions from water: the maximum adsorption capacities for Cu(II), Ni(II), and Co(II) at pH 7 were 3.05, 3.25, and 3.05 mmol·g⁻¹, respectively. To examine the underlying mechanism of the adsorption process, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were fitted to the experimental kinetic data. The pseudo-second-order equation was found appropriate to describe the adsorption of all three metal ions by [(GO-EDA-CAC)-BPED]. Adsorption data were further analyzed using the Langmuir, Freundlich, and Jossens isotherm approaches. Additionally, the adsorption properties of [(GO-EDA-CAC)-BPED], its reusability (more than 10 cycles), and its durability in aqueous solutions open the path to the removal of Cu(II), Ni(II), and Co(II) ions from water. Based on these results, we conclude that [(GO-EDA-CAC)-BPED] can be an effective and promising adsorbent for removing metal ions from aqueous solution.
Keywords: graphene oxide, bis(2-pyridylmethyl)amino, adsorption kinetics, isotherms
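The pseudo-second-order fit mentioned above is commonly carried out on the linearized form t/qt = 1/(k2·qe²) + t/qe. The sketch below is illustrative only, using synthetic data rather than the study's measurements, and recovers qe and k2 by least squares.

```python
def fit_pseudo_second_order(times, q):
    """Least-squares fit of the linearized pseudo-second-order model:
    t/q_t = 1/(k2*qe^2) + t/qe, regressing y = t/q_t on x = t."""
    ys = [t / qt for t, qt in zip(times, q)]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    intercept = my - slope * mx
    qe = 1.0 / slope              # equilibrium capacity, e.g. mmol/g
    k2 = slope ** 2 / intercept   # rate constant, since intercept = 1/(k2*qe^2)
    return qe, k2
```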
Procedia PDF Downloads 135
1769 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function
Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos
Abstract:
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution can assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. We start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. We then develop the statistical inference of this model using the maximum likelihood methodology. Finally, we use simulated data to analyse the computational problems associated with the parameters, an issue of great importance for applications to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects a pragmatic decision on the part of the modeler: given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trends functions, bi-parameters Weibull density function
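For reference, the two-parameter Weibull density underlying the model, together with its log-likelihood and the closed-form scale MLE for a fixed shape, can be sketched as follows. This illustrates the distribution only, not the diffusion-process inference itself.

```python
import math

def weibull_pdf(x, shape, scale):
    """Two-parameter Weibull density: f(x) = (k/l)*(x/l)^(k-1)*exp(-(x/l)^k), x > 0."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-(z ** shape))

def log_likelihood(data, shape, scale):
    """Log-likelihood of an i.i.d. sample under the Weibull(shape, scale) density."""
    return sum(math.log(weibull_pdf(x, shape, scale)) for x in data)

def mle_scale(data, shape):
    """Closed-form scale MLE when the shape is held fixed: ((1/n)*sum x_i^k)^(1/k)."""
    return (sum(x ** shape for x in data) / len(data)) ** (1.0 / shape)
```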
Procedia PDF Downloads 310
1768 The Correlation between Three-Dimensional Implant Positions and Esthetic Outcomes of Single-Tooth Implant Restoration
Authors: Pongsakorn Komutpol, Pravej Serichetaphongse, Soontra Panmekiate, Atiphan Pimkhaokham
Abstract:
Statement of Problem: Important parameters in the esthetic assessment of anterior maxillary implants include the pink esthetics of the gingiva and the white esthetics of the restoration, while the three-dimensional (3D) implant position has recently been recognized as a key to successful implant treatment. However, to our knowledge, no publication has demonstrated the relation between esthetic outcome and 3D implant position. Objectives: To investigate the correlation between the positional accuracy of single-tooth implant restorations (STIR) in all three dimensions and their esthetic outcomes. Materials and Methods: Data from 17 patients who had an STIR at a central incisor with a pristine contralateral tooth were included in this study. Intraoral photographs, dental models, and cone beam computed tomography (CBCT) images were retrieved. The esthetic outcome was assessed according to the pink esthetic score and white esthetic score (PES/WES), while the correctness of the implant position in each dimension (mesiodistal, labiolingual, apicocoronal) was evaluated and classified as 'right' or 'wrong' according to the ITI consensus conference, by one investigator using the CBCT data. The difference in mean score between right and wrong positions in all dimensions was analyzed by the Mann-Whitney U test, with 0.05 as the significance level. Results: The average PES/WES score was 15.88 ± 1.65, which is considered clinically acceptable. The average PES/WES scores for implants with 1, 2, and 3 correct dimensions were 16.71, 15.75, and 15.17, respectively. No implant was placed wrongly in all three dimensions. A statistically significant difference in PES/WES score was found between implants placed correctly in 3 dimensions and in 1 dimension (p = 0.041). Conclusion: This study supports the principle of 3D implant positioning: the more accurately the implant was placed, the better the esthetic outcome.
Keywords: accuracy, dental implant, esthetic, 3D implant position
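The Mann-Whitney U statistic used for the group comparison above can be sketched as a count over pairs. This is illustrative only; the values below are hypothetical, not the study's PES/WES data.

```python
def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b: the number of pairs (x, y)
    with x from a and y from b such that x > y, counting ties as 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```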
Procedia PDF Downloads 184
1767 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential
Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag
Abstract:
Renewable energy sources depend on climatic variability, so adequate energy planning requires observations of the meteorological variables, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations forming long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models, combining data collected at surface stations, ocean buoys, satellites, and radiosondes, and allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among them is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), with a spatial resolution of 0.5° x 0.5°. To overcome the difficulties noted above, this study evaluates the performance of solar radiation estimation from alternative databases, such as reanalysis data and meteorological satellite data, which can satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well relative to the observed data, with a coefficient of determination around 0.90. It is therefore concluded that these data have the potential to be used as an alternative source at locations lacking stations or long series of solar radiation observations, which is important for the evaluation of solar energy potential.
Keywords: climate, reanalysis, renewable energy, solar radiation
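The reported agreement (coefficient of determination around 0.90) can be computed for any pair of observed and estimated series with the standard R² definition. A minimal sketch, with made-up numbers rather than the CFSR data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)              # total sum of squares
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))  # residual sum of squares
    return 1.0 - ss_res / ss_tot
```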
Procedia PDF Downloads 210
1766 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output model involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and DoA estimation methods therefore rely heavily on generalization, requiring a large number of training data sets. We present two different optimization models for DoA estimation: (1) an implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, so the method may fail to deliver high precision. This work therefore makes a further contribution by developing the DNN-RBF model for DoA estimation, overcoming the limitations of non-parametric and data-driven methods with respect to array imperfection and generalization. The numerical results for the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Finally, we evaluated the performance of the two optimization methods for DoA estimation using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
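Two of the building blocks named above, a Gaussian RBF unit and the MSE used to compare the estimators, can be sketched directly. These are illustrative definitions only; the angle values in the test are hypothetical.

```python
import math

def rbf_unit(x, center, sigma):
    """Gaussian radial basis function activation: exp(-(x - c)^2 / (2*sigma^2))."""
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def mse(true_angles, estimated_angles):
    """Mean squared error between true and estimated DoAs (e.g. in degrees)."""
    n = len(true_angles)
    return sum((t - e) ** 2 for t, e in zip(true_angles, estimated_angles)) / n
```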
Procedia PDF Downloads 102
1765 Two-Sided Information Dissemination in Takeovers: Disclosure and Media
Authors: Eda Orhun
Abstract:
Purpose: This paper analyzes a target firm's decision to voluntarily disclose information during a takeover event and the effect of such disclosures on the outcome of the takeover. Voluntary disclosures, especially earnings forecasts made around takeover events, may affect shareholders' assessment of the target firm's value and, in turn, the takeover outcome. This study aims to shed light on this question. Design/methodology/approach: The paper examines the role of voluntary disclosures by target firms during a takeover event in the likelihood of takeover success, both theoretically and empirically. A game-theoretical model is set up to analyze the target firm's decision to disclose voluntarily and inform shareholders of its real worth. The empirical implication of the model is tested using binary outcome models in which the disclosure variable identifies the target firms in the sample that provide positive news by issuing increasing management earnings forecasts. Findings: The model predicts that voluntary disclosure of positive information by the target decreases the likelihood that the takeover succeeds. The empirical analysis confirms this prediction by showing that positive earnings forecasts by target firms during takeover events increase the probability of takeover failure. Overall, information dissemination through voluntary disclosures by target firms is shown to be an important factor affecting takeover outcomes. Originality/Value: To the author's knowledge, this is the first study of the impact of voluntary disclosures by the target firm during a takeover event on the likelihood of takeover success. The results contribute to the information economics, corporate finance, and M&A literatures.
Keywords: takeovers, target firm, voluntary disclosures, earnings forecasts, takeover success
Procedia PDF Downloads 320
1764 The Effects of Changes in Accounting Standards on Loan Loss Provisions (LLP) as Earnings Management Device: Evidence from Malaysia and Nigeria Banks (Part I)
Authors: Ugbede Onalo, Mohd Lizam, Ahmad Kaseri
Abstract:
In view of the dearth of studies on changes in accounting standards and banks' earnings management, particularly in the context of emerging economies, and the recent switch by Malaysia and Nigeria from their respective local GAAP to IFRS, this study investigates the effects of the switch on banks' earnings management, focusing on LLP as the manipulative device. Judgmental sampling was employed to select twenty-eight banks (eight Malaysian and twenty Nigerian) as the sample, covering the period 2008-2013. To provide an empirical research setting in pursuit of the study's objective, the study period is further partitioned into pre-IFRS (2008, 2009, 2010) and post-IFRS (2011, 2012, 2013) adoption periods. Consistent with previous studies, an LLP regression model is specified to investigate banks' specific discretionary accruals. Findings suggest that both Malaysian and Nigerian banks used LLP to manage reported earnings more extensively prior to IFRS implementation. Comparative overall results show that the pre-IFRS (domestic GAAP) era is associated with more prevalent earnings management through LLP than the corresponding post-IFRS era for both Malaysian and Nigerian sample banks, in differing magnitudes but in favour of the Malaysian banks in both periods. With the results demonstrating that IFRS adoption is linked to lower earnings management via LLP, this study recommends the global adoption of IFRS as a reporting framework. It also recommends that Nigerian banks take a leaf from the good corporate governance practices of Malaysian banks.
Keywords: accounting standards, IFRS, FRS, SAS, LLP, earnings management
Procedia PDF Downloads 405
1763 Entrepreneur Universal Education System: Future Evolution
Authors: Khaled Elbehiery, Hussam Elbehiery
Abstract:
The success of education depends on evolution and adaptation. While the traditional system has worked in the past, virtual education, which evolved with the digital age, has improved efficiency in today's learning environments. Virtual learning has proved its ability to overcome the drawbacks of the physical environment, such as time, facilities, and location, but despite what it has accomplished, the educational system overall is not yet adequately productive. Earning a degree is no longer enough to obtain a career job; graduates are simply missing the skills and creativity. There are always two sides to a coin: a college degree and a specialized certificate each have their own merits, but having both can put you on a successful IT career path. For the many job-seeking individuals across the world to have a clear, meaningful goal for work and education and to contribute positively to their communities, productive correlation and cooperation among employers and universities, alongside individual technical skills, is a must for generations to come. Fortunately, the proposed research, 'Entrepreneur Universal Education System', is an evolution designed to meet the needs of both employers and students and to make gaining vital, real-world experience in the chosen field easier than ever. The new vision is to empower education to serve organizations' needs, and thereby improve the world as its primary goal: adopting universal skills of effective thinking, effective action, and effective relationships; preparing students through real-world accomplishment; and encouraging them to serve their organizations and communities faster and more efficiently.
Keywords: virtual education, academic degree, certificates, internship, amazon web services, Microsoft Azure, Google Cloud Platform, hybrid models
Procedia PDF Downloads 97
1762 Network Conditioning and Transfer Learning for Peripheral Nerve Segmentation in Ultrasound Images
Authors: Harold Mauricio Díaz-Vargas, Cristian Alfonso Jimenez-Castaño, David Augusto Cárdenas-Peña, Guillermo Alberto Ortiz-Gómez, Alvaro Angel Orozco-Gutierrez
Abstract:
Precise identification of nerves is a crucial task performed by anesthesiologists for effective Peripheral Nerve Blocking (PNB). Anesthesiologists currently use ultrasound imaging equipment to guide the PNB and detect nervous structures. However, visual identification of nerves in ultrasound images is difficult, even for trained specialists, due to artifacts and low contrast. Recent advances in deep learning make neural networks a potential tool for accurate nerve segmentation systems that address the above issues from raw data. The widely used U-Net network yields pixel-by-pixel segmentation by encoding the input image and decoding the resulting feature vector into a semantic image. This work proposes a conditioning approach and encoder pre-training to enhance the nerve segmentation of traditional U-Nets. Conditioning is achieved by one-hot encoding the kind of target nerve at the network input, while pre-training considers five well-known deep networks for image classification. The proposed approach is tested on a collection of 619 ultrasound images, where the best C-UNet architecture yields an 81% Dice coefficient, outperforming the 74% of the best traditional U-Net. The results prove that pre-trained models with the conditioning approach outperform their equivalent baselines by supporting the learning of new features and enriching the discriminant capability of the tested networks.
Keywords: nerve segmentation, U-Net, deep learning, ultrasound imaging, peripheral nerve blocking
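The two mechanisms reported above, one-hot conditioning at the input and the Dice coefficient used for evaluation, can be sketched as follows. The masks below are toy binary vectors, not ultrasound segmentations.

```python
def one_hot(nerve_index, num_nerve_types):
    """One-hot code for the kind of target nerve, appended at the network input."""
    return [1.0 if i == nerve_index else 0.0 for i in range(num_nerve_types)]

def dice_coefficient(pred_mask, true_mask):
    """Dice = 2|A intersect B| / (|A| + |B|) over flattened binary masks."""
    inter = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    return 2.0 * inter / total if total else 1.0
```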
Procedia PDF Downloads 109
1761 Integrating Optuna and Synthetic Data Generation for Optimized Medical Transcript Classification Using BioBERT
Authors: Sachi Nandan Mohanty, Shreya Sinha, Sweeti Sah, Shweta Sharma
Abstract:
Advances in natural language processing have strongly influenced the field of medical transcript classification, providing a robust framework for improving the accuracy of clinical data processing, with enormous potential to transform healthcare and improve people's lives. This research focuses on improving the accuracy of medical transcript categorization using Bidirectional Encoder Representations from Transformers (BERT) and its specialized variants, including BioBERT, ClinicalBERT, SciBERT, and BlueBERT. The experimental work employs Optuna, an optimization framework, for hyperparameter tuning to identify the most effective variant, concluding that BioBERT yields the best performance. Furthermore, various optimizers, including Adam, RMSprop, and layerwise adaptive large batch optimization (LAMB), were evaluated alongside BERT's default AdamW optimizer. The findings show that the LAMB optimizer achieves performance on par with AdamW. Synthetic data generation techniques from Gretel were utilized to augment the dataset, expanding it from 5,000 to 10,000 rows. Subsequent evaluations demonstrated that the model maintained its performance with synthetic data, with the LAMB optimizer showing marginally better results. The enhanced dataset and optimized model configurations improved classification accuracy, showcasing the efficacy of the BioBERT variant and the LAMB optimizer, and resulting in accuracies of up to 98.2% on the original dataset and 90.8% on the combined dataset.
Keywords: BioBERT, clinical data, healthcare AI, transformer models
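For intuition, the role Optuna plays above can be approximated by a plain random search over a small hyperparameter space. This stand-in does not use Optuna's samplers, and the parameter names and objective below are hypothetical.

```python
import random

def tune(objective, search_space, n_trials=100, seed=0):
    """Minimal random-search stand-in for a hyperparameter-tuning study:
    sample one configuration per trial and keep the best-scoring one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.choice(choices) for name, choices in search_space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

A real study would replace `objective` with a training-and-validation run returning, for example, classification accuracy.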
Procedia PDF Downloads 5
1760 Measurements and Predictions of Hydrates of CO₂-rich Gas Mixture in Equilibrium with Multicomponent Salt Solutions
Authors: Abdullahi Jibril, Rod Burgass, Antonin Chapoy
Abstract:
Carbon dioxide (CO₂) is widely used in reservoirs to enhance oil and gas production, mixing with natural gas and other impurities in the process. However, hydrate formation frequently hinders the efficiency of CO₂-based enhanced oil recovery, causing pipeline blockages and pressure build-ups. Current hydrate prediction methods are primarily designed for gas mixtures with low CO₂ content and struggle to accurately predict hydrate formation in CO₂-rich streams in equilibrium with salt solutions. Given that oil and gas reservoirs are saline, experimental data for CO₂-rich streams in equilibrium with salt solutions are essential to improve these predictive models. This study investigates the inhibition of hydrate formation in a CO₂-rich gas mixture (CO₂, CH₄, N₂, H₂ at 84.73/15/0.19/0.08 mol.%) using multicomponent salt solutions at concentrations of 2.4 wt.%, 13.65 wt.%, and 27.3 wt.%. The setup, test fluids, methodology, and results for hydrates formed in equilibrium with varying salt solution concentrations are presented. Measurements were conducted using an isochoric pressure-search method at pressures up to 45 MPa. Experimental data were compared with predictions from a thermodynamic model based on the Cubic-Plus-Association equation of state (EoS), while hydrate-forming conditions were modeled using the van der Waals and Platteeuw solid solution theory. Water activity was evaluated based on hydrate suppression temperature to assess consistency in the inhibited systems. Results indicate that hydrate stability is significantly influenced by inhibitor concentration, offering valuable guidelines for the design and operation of pipeline systems involved in offshore gas transport of CO₂-rich streams.
Keywords: CO₂-rich streams, hydrates, monoethylene glycol, phase equilibria
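As a rough, classical cross-check on inhibitor-induced suppression (not the CPA / van der Waals and Platteeuw modeling used in this work), the Hammerschmidt correlation estimates suppression from the inhibitor's weight percent in the aqueous phase. The constant K of about 1297 (for a suppression in °C) and the MEG molar mass of 62 g/mol are standard textbook values, and the correlation applies to thermodynamic inhibitors such as glycols rather than to the salt systems studied here.

```python
def hammerschmidt_suppression(weight_percent, molar_mass_g_mol, k_const=1297.0):
    """Approximate hydrate suppression dT (degC) for a thermodynamic inhibitor:
    dT = K * W / (M * (100 - W)), with W the inhibitor wt% in the water phase."""
    return k_const * weight_percent / (molar_mass_g_mol * (100.0 - weight_percent))
```

For 20 wt.% MEG this gives roughly 5.2 °C of suppression.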
Procedia PDF Downloads 24
1759 Impact of Out-Of-Pocket Payments on Health Care Finance and Access to Health Care Services: The Case of Health Transformation Program in Turkey
Authors: Bengi Demirci
Abstract:
Out-of-pocket payments have become one of the common models adopted by health care reforms all over the world, and they have serious implications not only for the financial set-up of the health care systems in question but also for the people involved, in terms of their access to the health care services provided. On the one hand, out-of-pocket payments raise resources for the finance of the health care system and reduce non-essential health care expenses by having a deterrent effect on patients. On the other hand, the out-of-pocket payment model has a regressive distributional effect, putting a heavier burden on lower income groups and making them refrain from using health care services. As a relatively recent adopter of the out-of-pocket payment model, within its Health Transformation Program ongoing since the early 2000s, Turkey provides a good case for re-evaluating the pros and cons of this model, so that equality in access to health care is not sacrificed for health care revenue, and vice versa. This study therefore aims at analyzing the impact of out-of-pocket payments on the health finance system itself and on patients' access to health care services in Turkey, where the out-of-pocket payment model has been in use for a while. To this end, data showing the revenue obtained from out-of-pocket payments and their share in health care finance are analyzed. In addition, data showing the change in patients' expenditure on health care services after the adoption of out-of-pocket payments, and the change in the use of various health care services over the same period, are examined. It is important for late-adopting countries like Turkey to strike the right balance between the objective of cost efficiency and that of equality in access to health care services when adopting the out-of-pocket payment model.
Keywords: health care access, health care finance, health reform, out-of-pocket payments
Procedia PDF Downloads 375
1758 Seismic Loss Assessment for Peruvian University Buildings with Simulated Fragility Functions
Authors: Jose Ruiz, Jose Velasquez, Holger Lovon
Abstract:
Peruvian university buildings are critical structures for which very little research on their seismic vulnerability is available. This paper develops a probabilistic methodology that predicts seismic loss for university buildings using simulated fragility functions. Two university buildings located in the city of Cusco were analyzed. Fragility functions were developed considering uncertainty in both seismic and structural parameters. The fragility functions were generated with the Latin Hypercube technique, an improved Monte Carlo-based method which optimizes the sampling of structural parameters and provides at least 100 reliable samples for every level of seismic demand. Concrete compressive strength, maximum concrete strain and yield stress of the reinforcing steel were considered as the key structural parameters. The seismic demand is defined by synthetic records compatible with the elastic Peruvian design spectrum. Acceleration records are scaled based on the peak ground acceleration on rigid soil (PGA), ranging from 0.05g to 1.00g. A total of 2000 structural models were considered to account for both structural and seismic variability. These functions represent the overall building behavior because they give rational information regarding damage ratios for defined levels of seismic demand. The two university buildings show expected Mean Damage Factors of 8.80% and 19.05%, respectively, for the 0.22g-PGA scenario, which, amplified by the soil type coefficient, resulted in 0.26g-PGA. These ratios were computed for a seismic demand with a 10% probability of exceedance in 50 years, as required by the Peruvian seismic code. These results show an acceptable seismic performance for both buildings.
Keywords: fragility functions, university buildings, loss assessment, Monte Carlo simulation, Latin Hypercube
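The Latin Hypercube sampling step described above can be sketched in a few lines: each parameter's range is divided into as many strata as samples, one point is drawn per stratum, and the strata are shuffled independently across parameters. The parameter bounds below are hypothetical, not the values used in the study:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Latin Hypercube sample: exactly one point per stratum and per
    parameter, with independent random permutations across parameters."""
    d = len(bounds)
    # one uniform draw inside each of the n_samples strata of [0, 1)
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):               # decouple the strata across parameters
        u[:, j] = rng.permutation(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(0)
# hypothetical ranges: f'c (MPa), ultimate concrete strain, fy (MPa)
bounds = [(17.0, 28.0), (0.003, 0.006), (380.0, 460.0)]
samples = latin_hypercube(100, bounds, rng)
print(samples.shape)  # (100, 3)
```

Compared with plain Monte Carlo, this stratification guarantees coverage of every part of each parameter's range with the stated minimum of 100 samples.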
Procedia PDF Downloads 145
1757 Effects of a Bioactive Subfraction of Strobilanthes Crispus on the Tumour Growth, Body Weight and Haematological Parameters in 4T1-Induced Breast Cancer Model
Authors: Yusha'u Shu'aibu Baraya, Kah Keng Wong, Nik Soriani Yaacob
Abstract:
Strobilanthes crispus (S. crispus) is a Malaysian herb, locally known as 'Pecah kaca' or 'Jin batu', which has demonstrated potent anticancer effects in both in vitro and in vivo models. In particular, an S. crispus subfraction (SCS) significantly reduced tumor growth in an N-methyl-N-nitrosourea-induced breast cancer rat model. However, there is a paucity of information on the effects of SCS on breast cancer metastasis. Thus, in this study, the antimetastatic effects of SCS (100 mg/kg) were investigated following 30 days of treatment in a 4T1-induced mammary tumor model (n = 5). The response to treatment was assessed based on tumour growth, body weight and haematological parameters. The results demonstrated that tumor-bearing mice treated with SCS (TM-S) had significant (p<0.05) reductions in mean tumor number, tumor volume and tumor weight compared to the untreated tumor-bearing mice (TM). Also, there was no secondary tumor formation or tumor-associated lesions in the major organs of the TM-S group compared to the TM group. Similarly, comparable body weights were observed among the TM-S group, the SCS-treated normal (uninduced) mice and the untreated normal control mice (NM), in contrast to the TM group (p<0.05). Furthermore, SCS administration did not cause significant changes in haematological parameters compared to the NM group, indicating no sign of anemia or toxicity-related effects. In conclusion, SCS significantly inhibited overall tumor growth and metastasis in the 4T1-induced breast cancer mouse model, suggesting its promise as a therapeutic agent for breast cancer treatment.
Keywords: 4T1 cells, breast cancer, metastasis, Strobilanthes crispus
Procedia PDF Downloads 153
1756 Design, Synthesis and Evaluation of 4-(Phenylsulfonamido)Benzamide Derivatives as Selective Butyrylcholinesterase Inhibitors
Authors: Sushil Kumar Singh, Ashok Kumar, Ankit Ganeshpurkar, Ravi Singh, Devendra Kumar
Abstract:
In the spectrum of neurodegenerative diseases, Alzheimer's disease (AD) is characterized by the presence of amyloid β plaques and neurofibrillary tangles in the brain. It results in cognitive and memory impairment due to the loss of cholinergic neurons, which is considered to be one of the contributing factors. Donepezil, an acetylcholinesterase (AChE) inhibitor which also inhibits butyrylcholinesterase (BuChE) and improves memory and the brain's cognitive functions, is the most successful and most widely prescribed drug for treating the symptoms of AD. The present work is based on the design of selective BuChE inhibitors using computational techniques. In this work, machine learning models were trained using classification algorithms, followed by screening of a diverse chemical library of compounds. Various molecular modelling and simulation techniques were used to obtain the virtual hits. The amide derivatives of 4-(phenylsulfonamido)benzoic acid were synthesized and characterized using 1H and 13C NMR, FTIR and mass spectrometry. The enzyme inhibition assays were performed on equine plasma BuChE and electric eel AChE by the method developed by Ellman et al. Compounds 31, 34, 37, 42, 49, 52 and 54 were found to be active against equine BuChE. N-(2-chlorophenyl)-4-(phenylsulfonamido)benzamide and N-(2-bromophenyl)-4-(phenylsulfonamido)benzamide (compounds 34 and 37) displayed IC50 values of 61.32 ± 7.21 and 42.64 ± 2.17 nM, respectively, against equine plasma BuChE. Ortho-substituted derivatives were more active against BuChE. Furthermore, the ortho-halogen and ortho-alkyl substituted derivatives were found to be the most active of all, with minimal AChE inhibition. The compounds were selective toward BuChE.
Keywords: Alzheimer disease, butyrylcholinesterase, machine learning, sulfonamides
Procedia PDF Downloads 141
1755 The Amorphousness of the Exposure Sphere
Authors: Nipun Ansal
Abstract:
People guard their beliefs and opinions with their lives: beliefs they have formed over a period of time, and they can go to any lengths to defy, desist from, resist and negate any outward stimulus that has the potential to shake them. Cognitive dissonance is the term used to describe this in theory. And every human being, in order to defend himself from cognitive dissonance, applies four rings of defense, viz. selective exposure, selective perception, selective attention, and selective retention. This paper is a discursive analysis of how the onslaught of social media, complete with its intrusive weaponry, has amorphized the outermost ring of defense: selective exposure. The stimulus-response model of communication is one of the most fundamental models, encompassing the communication behaviours of children and the elderly, individuals and masses, humans and animals alike. The paper deliberates on how information bombardment through the uncontrollable channels of social media, Facebook and Twitter in particular, has dismantled our outer sphere of exposure, leading online users to a state of constant dissonance and thus feeding impulsive action-taking. It applies the case study method, citing an example to corroborate how knowledge generation has given in to information overload, and examines the effect this has on decision making. With stimuli increasing in number of encounters, opinion formation precedes knowledge because of the increased demand for participation and the decrease in the time available for information to permeate from the outer sphere of exposure to the sphere of retention, which, of course, happens through perception and attention. This paper discusses the challenge posed by this fleeting, stimulus-rich, peer-dominated media to the traditional models of communication and meaning-generation.
Keywords: communication, discretion, exposure, social media, stimulus
Procedia PDF Downloads 410
1754 Analysis of Exploitation Damages of the Frame Scaffolding
Authors: A. Robak, M. Pieńko, E. Błazik-Borowa, J. Bęc, I. Szer
Abstract:
The analyses and classifications presented in this article are based on research carried out in 2016 and 2017 on a group of nearly one hundred scaffoldings assembled and used on construction sites in different parts of Poland. During the scaffolding selection process, efforts were made to maintain diversification in terms of parameters such as scaffolding size, investment size, type of investment, location and the nature of the works conducted. As a result, the research was carried out on scaffoldings used for church renovation in a small town or attached to the facades of classic apartment blocks, as well as on scaffoldings used during the construction of skyscrapers or facilities of the largest power plants. This variety allows general conclusions to be formulated about the technical condition of used frame scaffoldings. Exploitation damages of frame scaffolding elements were divided into three groups. The first group includes damages to the main structural components, which reduce the strength of the scaffolding elements and hence of the whole structure. The qualitative analysis of these damages was made on the basis of numerical models that take into account the geometry of the damage, and on the basis of nonlinear static computational analyses. The second group covers exploitation damages, such as a missing pin on a guardrail bolt, which may pose an imminent threat to people using the scaffolding. These are local damages that do not affect the bearing capacity and stability of the whole structure but are very important for safe use. The last group considers damages that reduce only aesthetic value and have no direct impact on bearing capacity or safety of use. Apart from the qualitative analyses, the article also presents quantitative analyses showing how frequently a given type of damage occurs.
Keywords: scaffolding, damage, safety, numerical analysis
Procedia PDF Downloads 261
1753 Shortening Distances: The Link between Logistics and International Trade
Authors: Felipe Bedoya Maya, Agustina Calatayud, Vileydy Gonzalez Mejia
Abstract:
Encompassing inventory, warehousing, and transportation management, logistics is a crucial predictor of firm performance. This has been extensively proven by the extant literature in business and operations management. Logistics is also a fundamental determinant of a country's ability to access international markets. Available studies in international and transport economics have shown that limited transport infrastructure and underperforming transport services can severely affect international competitiveness. However, evidence is lacking on the overall impact of logistics performance, encompassing all inventory, warehousing, and transport components, on global trade. In order to fill this knowledge gap, the paper uses a gravitational trade model covering 155 countries from all geographical regions between 2007 and 2018. Data on logistics performance are obtained from the World Bank's Logistics Performance Index (LPI). First, the relationship between logistics performance and a country's total trade is estimated, followed by a breakdown by economic sector. Then, the analysis is disaggregated according to the level of technological intensity of traded goods. Finally, after evaluating the intensive margin of trade, the relevance of logistics infrastructure and services for the extensive margin of trade is assessed.
Results suggest that: (i) improvements in both logistics infrastructure and services are associated with export growth; (ii) manufactured goods can significantly benefit from these improvements, especially when both exporting and importing countries increase their logistics performance; (iii) the quality of logistics infrastructure and services becomes more important as traded goods become more technology-intensive; and (iv) improving the exporting country's logistics performance is essential on the intensive margin of trade, while enhancing the importing country's logistics performance is more relevant on the extensive margin.
Keywords: gravity models, infrastructure, international trade, logistics
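As a rough illustration of the estimation strategy described above, a log-linearized gravity equation augmented with an exporter LPI term can be fit by ordinary least squares. The data and elasticities below are synthetic, invented for the sketch, and are not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500  # hypothetical country pairs

# synthetic regressors: log GDPs, log bilateral distance, exporter LPI (1-5)
log_gdp_i = rng.uniform(22, 30, n)
log_gdp_j = rng.uniform(22, 30, n)
log_dist  = rng.uniform(5, 10, n)
lpi_i     = rng.uniform(1, 5, n)

# "true" elasticities -- illustrative placeholders only
beta = np.array([-10.0, 0.8, 0.7, -1.1, 0.3])
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist, lpi_i])
log_trade = X @ beta + rng.normal(0, 0.1, n)  # gravity equation + noise

# OLS estimate of the gravity coefficients
beta_hat, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(np.round(beta_hat, 2))
```

In the log-linear form, each slope is directly an elasticity, which is why the GDP, distance, and LPI coefficients can be read as percentage responses of trade flows.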
Procedia PDF Downloads 212
1752 The Synthesis, Structure and Catalytic Activity of Iron(II) Complex with New N2O2 Donor Schiff Base Ligand
Authors: Neslihan Beyazit, Sahin Bayraktar, Cahit Demetgul
Abstract:
Transition metal ions have an important role in biochemistry and biomimetic systems and may provide the basis of models for the active sites of biological targets. The presence of copper(II), iron(II) and zinc(II) is crucial in many biological processes. Tetradentate N2O2 donor Schiff base ligands are well known to form stable transition metal complexes, and these complexes also have applications in clinical and analytical fields. In this study, we present the salient structural features and the details of the catecholase activity of the Fe(II) complex of a new Schiff base ligand. A new asymmetrical N2O2 donor Schiff base ligand and its Fe(II) complex were synthesized by condensation of 4-nitro-1,2-phenylenediamine with 6-formyl-7-hydroxy-5-methoxy-2-methylbenzopyran-4-one, and by using an appropriate Fe(II) salt, respectively. The Schiff base ligand and its metal complex were characterized using FT-IR, 1H NMR, 13C NMR, UV-Vis, elemental analysis and magnetic susceptibility. In order to determine the kinetic parameters of the catechol oxidase-like activity of the Schiff base Fe(II) complex, the oxidation of 3,5-di-tert-butylcatechol (3,5-DTBC) was measured at 25°C by monitoring the increase of the absorption band at 390-400 nm of the product 3,5-di-tert-butylquinone (3,5-DTBQ). The compatibility of the catalytic reaction with Michaelis-Menten kinetics was also investigated by the method of initial rates, monitoring the growth of the 390-400 nm band of 3,5-DTBQ as a function of time. Kinetic studies showed that the Fe(II) complex of the new N2O2 donor Schiff base ligand was capable of acting as a model compound for simulating the catecholase properties of type-3 copper proteins.
Keywords: catecholase activity, Michaelis-Menten kinetics, Schiff base, transition metals
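The initial-rates analysis described above can be sketched as a Lineweaver-Burk fit of Michaelis-Menten kinetics: plotting 1/v against 1/[S] gives a line whose intercept and slope yield Vmax and Km. The substrate concentrations and kinetic constants below are illustrative, not the measured values:

```python
import numpy as np

# hypothetical initial-rate data obeying v = Vmax*[S] / (Km + [S])
Km_true, Vmax_true = 0.8e-3, 2.5e-6          # M and M/s -- placeholders
S = np.array([0.2, 0.4, 0.8, 1.6, 3.2]) * 1e-3  # 3,5-DTBC concentrations (M)
v = Vmax_true * S / (Km_true + S)            # initial rates from A(390-400 nm) slopes

# Lineweaver-Burk linearization: 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax = 1.0 / intercept
Km = slope / intercept
print(f"Km = {Km*1e3:.2f} mM, Vmax = {Vmax*1e6:.2f} uM/s")
```

In practice the rates come from the early linear growth of the 3,5-DTBQ absorption band, converted to concentration per unit time via its extinction coefficient.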
Procedia PDF Downloads 397
1751 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning
Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath
Abstract:
The human brain, among the most complex and mysterious parts of the body, harbors vast potential for exploration. Unraveling these enigmas, especially within neural perception and cognition, is the task of neural decoding. Harnessing advancements in generative AI, particularly in visual computing, this work seeks to elucidate how the brain comprehends the visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli from functional magnetic resonance imaging (fMRI) data, which is processed through pre-trained deep-learning models to recreate the stimuli. Introducing a new architecture named LatentNeuroNet, the aim is to achieve the utmost semantic fidelity in stimulus reconstruction. The approach employs a Latent Diffusion Model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating superior-quality outputs. This addresses the limitations of prior methods, such as GANs, which are known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the brain's ventral visual cortex region. This extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before being injected into the denoising process. In conclusion, an architecture is developed that successfully reconstructs the perceived visual stimuli, and this research provides evidence for identifying the regions of the brain most influential in cognition and perception.
Keywords: BLIP, fMRI, latent diffusion model, neural perception
Procedia PDF Downloads 70
1750 Biodiversity and Climate Change: Consequences for Norway Spruce Mountain Forests in Slovakia
Authors: Jozef Mindas, Jaroslav Skvarenina, Jana Skvareninova
Abstract:
Study of the effects of climate change on Norway spruce (Picea abies) forests has mainly focused on the diversity of tree species, as a result of the ability of species to tolerate temperature and moisture changes, as well as on some effects of changes in the disturbance regime. The changes in tree species diversity in spruce forests due to climate change have been analyzed via a gap model. A forest gap model is a dynamic model for calculating the basic characteristics of individual forest trees. Input ecological data for the model calculations were taken from permanent research plots located in primeval forests in the mountainous regions of Slovakia. The results of regional climate change scenarios for the territory of Slovakia were used, with values taken from the CGCM3.1 global model and the KNMI and MPI regional models. Model results for the climate change scenarios suggest a shift of the upper forest limit into the region of the present subalpine zone, in the supramontane zone. Norway spruce representation will decrease at the expense of beech and valuable broadleaved species (Acer sp., Sorbus sp., Fraxinus sp.). The most significant changes in tree species diversity were identified for the upper tree line and the current belt of dwarf pine (Pinus mugo) occurrence. The results are also discussed in relation to the most important disturbances (wind storms, snow and ice storms) and phenological changes, whose consequences are little known. Special attention is given to changes in biomass production in relation to carbon storage in different carbon pools.
Keywords: biodiversity, climate change, Norway spruce forests, gap model
Procedia PDF Downloads 289
1749 Academic Education and Internship towards Architecture Professional Practice
Authors: Sawsan Saridar masri, Hisham Arnaouty
Abstract:
Architecture both defines and is defined by social, cultural, political and financial constraints: this is where the discipline and the profession of architecture meet. This mutual influence evolves wherever interventions in the built environment are thought out, and it can be strengthened or weakened by the many ways in which the practice of architecture can be undertaken. The more familiar we are with the concerns and factors that control what can be made, the greater the opportunities to propose and make appropriate architectures. The criteria in any qualification policy should permit flexibility of approach and will, for reasons including cultural choice, political issues, and so on, vary significantly from country to country. However, the weighting of the various criteria has to ensure adequate standards both in the educational system and in professional training. This paper develops, deepens and questions the regulatory entry routes to the professional practice of architecture in the Arab world. It is also intended to provide an informed basis on strategies for conventional and unconventional models of practice, in preparation for the next stages of an architect's work experience and professional experience. With the objective of promoting adequate practice in the built environment, various pathways of access to the profession are selected as case studies and comprehensively analyzed, encompassing examples from across the world. The review of these case studies allows a comprehensive picture to be created of the conditions for qualification of practitioners of the built environment in the Middle Eastern countries and the Arab world.
This investigation considers the following aspects: professional title and domain of practice, accreditation of courses, internship and professional training, professional examination, and continuing professional development.
Keywords: architecture, internship, mobility, professional practice
Procedia PDF Downloads 547
1748 Integrated Design in Additive Manufacturing Based on Design for Manufacturing
Authors: E. Asadollahi-Yazdi, J. Gardan, P. Lafon
Abstract:
Nowadays, manufacturers are confronted with producing different versions of products due to quality, cost and time constraints. On the other hand, Additive Manufacturing (AM), as a production method based on a CAD model, disrupts the design and manufacturing cycle with new parameters. To address these issues, researchers have applied the Design For Manufacturing (DFM) approach to AM, but until now there has been no integrated approach for the design and manufacturing of products through AM. This paper therefore aims to provide a general methodology for managing the different production issues, as well as supporting interoperability between the AM process and the various Product Life Cycle Management tools. The problem is that the models of Systems Engineering, which are used for managing complex systems, cannot support the product's evolution and its impact on the product life cycle. It therefore seems necessary to provide a general methodology for managing the product diversity that is created by using AM. This methodology must consider manufacturing and assembly during product design, as early as possible in the design stage. The latest DFM approach, as a methodology to analyze the system comprehensively, integrates manufacturing constraints into the numerical model upstream. Thus, DFM for AM is used to import the characteristics of AM into the design and manufacturing process of a hybrid product, in order to manage the criteria coming from AM. The research also presents an integrated design method that takes into account knowledge of layer manufacturing technologies. For this purpose, an interface model based on the skin and skeleton concepts is provided: the usage and manufacturing skins are used to represent the functional surfaces of the product, while the material flow and the links between the skins are represented by usage and manufacturing skeletons.
This integrated approach is thus a helpful methodology for the designer and the manufacturer in various decisions, such as material and process selection, as well as in the evaluation of product manufacturability.
Keywords: additive manufacturing, 3D printing, design for manufacturing, integrated design, interoperability
Procedia PDF Downloads 317
1747 Thin Films of Glassy Carbon Prepared by Cluster Deposition
Authors: Hatem Diaf, Patrice Melinon, Antonio Pereira, Bernard Moine, Nicholas Blanchard, Florent Bourquard, Florence Garrelie, Christophe Donnet
Abstract:
Glassy carbon exhibits excellent biological compatibility with live tissues, meaning it has high potential for applications in the life sciences. Moreover, glassy carbon has interesting properties, including high temperature resistance, hardness, low density, low electrical resistance, low friction, and low thermal resistance. The structure of glassy carbon has long been a subject of debate. It is now accepted that glassy carbon is 100% sp2. This term is a little confusing, since sp2 hybridization as defined in quantum chemistry involves both properties: threefold coordination and pi bonding (parallel pz orbitals). Using plasma laser deposition of carbon clusters combined with pulsed nano/femtosecond laser annealing, we are able to synthesize thin films of glassy carbon of good quality (as probed by the G band / D disorder band ratio in Raman spectroscopy) without thermal post-annealing. A careful inspection of the Raman signal, the plasmon losses and the structure, performed by HRTEM (High Resolution Transmission Electron Microscopy), reveals that the two properties (threefold coordination and pi orbitals) cannot coexist. The structure of the films is compared to models, including schwarzites based on negatively curved surfaces, as opposed to onion or fullerene-like structures with positively curved surfaces. This study shows that a large family of porous carbons named vitreous carbon, with different structures, can coexist.
Keywords: glassy carbon, cluster deposition, coating, electronic structure
Procedia PDF Downloads 321
1746 Physical Dynamics of Planet Earth and Their Implications for Global Climate Change and Mitigation: A Case Study of Sistan Plain, Balochistan Region, Southeastern Iran
Authors: Hamidoddin Yousefi, Ahmad Nikbakht
Abstract:
The Sistan Plain, situated in the Balochistan region of southeastern Iran, is renowned for its arid climatic conditions and prevailing winds that persist for approximately 120 days annually. The region faces multiple challenges, including susceptibility to drought exacerbated by wind erosion, temperature fluctuations, and the influence of policies implemented by neighboring Afghanistan and by Iran. This study focuses on investigating the characteristics of jet streams over the Sistan Plain and their implications for global climate change. Various models are employed to analyze convective mass fluxes, horizontal moisture transport, temporal variance, and radiative-convective equilibrium within the atmosphere. Key considerations encompass the distributions of relative humidity, dry air, and absolute humidity. Moreover, the research aims to predict the interplay between jet streams and human activities, particularly regarding their environmental impacts and water scarcity. The investigation encompasses both local and global environmental consequences, drawing upon historical climate change data and comprehensive field research. The anticipated outcomes of this study hold substantial potential for mitigating global climate change and its associated environmental ramifications. By comprehending the dynamics of jet streams and their interconnections with human activities, effective strategies can be formulated to address water scarcity and minimize environmental degradation.
Keywords: Sistan Plain, Balochistan, Hamoun Lake, climate change, jet streams, environmental impact, water scarcity, mitigation
Procedia PDF Downloads 75
1745 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery
Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa
Abstract:
In order to reduce health risks, it is of major importance to monitor air quality. However, this process entails high costs in physical and human resources. In this context, this research is carried out with the main objective of developing a predictive model for concentrations of inhalable coarse particles (PM10-2.5) using remote sensing. To develop the model, satellite images, mainly from Landsat 8, of Mexico City's Metropolitan Area were used. Using historical PM10 and PM2.5 measurements from RAMA (the Automatic Environmental Monitoring Network of Mexico City) and processing the available satellite images, a preliminary model was generated, in which it was possible to identify critical areas of opportunity that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas were identified that are of great interest due to presumed high concentrations of PM: zones with high plant density, bodies of water, and bare ground without constructions or vegetation. To date, work continues on this line to improve the proposed preliminary model. In addition, a brief analysis was made of six models presented in articles from different parts of the world, in order to identify the optimal bands for generating a model suited to Mexico City. It was found that infrared bands have aided modeling in other cities, but the effectiveness these bands could provide under the geographic and climatic conditions of Mexico City is still being evaluated.
Keywords: air quality, modeling pollution, particulate matter, remote sensing
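The band-to-concentration modeling described above can be sketched as a multiple linear regression of coarse PM on surface reflectances and a vegetation index, fit to pixels collocated with ground monitors. All band values, coefficients and the NDVI predictor below are synthetic assumptions for illustration, not the study's preliminary model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200  # hypothetical pixels collocated with RAMA monitor readings

# synthetic Landsat 8 surface reflectances (visible/NIR bands)
blue, green, red, nir = rng.uniform(0.02, 0.4, (4, n))
ndvi = (nir - red) / (nir + red)  # vegetation index as an extra predictor

# assumed linear sensitivity of coarse PM (PM10-2.5) to the predictors
X = np.column_stack([np.ones(n), blue, red, ndvi])
true_coef = np.array([15.0, 80.0, 40.0, -10.0])  # ug/m3 scale, invented
pm_coarse = X @ true_coef + rng.normal(0, 1.0, n)  # monitor "measurements"

# least-squares fit of the band-based predictive model
coef, *_ = np.linalg.lstsq(X, pm_coarse, rcond=None)
print(np.round(coef, 1))
```

Once calibrated against the monitor network, such a model can be applied pixel by pixel to map estimated concentrations across a whole scene, including areas without stations.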
Procedia PDF Downloads 157