Search results for: evaluation organizational performance
2869 Managerial Advice-Seeking and Supply Chain Resilience: A Social Capital Perspective
Authors: Ethan Nikookar, Yalda Boroushaki, Larissa Statsenko, Jorge Ochoa Paniagua
Abstract:
Given the serious impact that supply chain disruptions can have on a firm's bottom-line performance, both industry and academia are interested in supply chain resilience, the capability of a supply chain to cope with disruptions. To date, much of the research has focused on the antecedents of supply chain resilience, suggesting various firm-level capabilities that are associated with greater resilience. A consensus has emerged among researchers that supply chain flexibility holds the greatest potential to create resilience. Supply chain flexibility achieves resilience by creating a readiness to respond to disruptions at little cost and in little time, reconfiguring supply chain resources to mitigate a disruption's impact. Decisions related to supply chain disruptions are made by supply chain managers; however, the role played by supply chain managers' reference networks has been overlooked in the supply chain resilience literature. This study aims to understand the impact of supply chain managers on their firms' supply chain resilience. Drawing on social capital theory and social network theory, this paper proposes a conceptual model to explore the role of supply chain managers in developing the resilience of supply chains. Our model posits that a higher level of a supply chain manager's embeddedness in their reference network is associated with increased resilience of their firm's supply chain. A reference network comprises the individuals from whom supply chain managers seek advice on supply chain matters. The relationship between supply chain managers' embeddedness in their reference network and supply chain resilience is mediated by supply chain flexibility.
Keywords: supply chain resilience, embeddedness, reference networks, social capital
Procedia PDF Downloads 227
2868 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, the crashworthiness of an automotive body structure can be improved from the beginning of the design stage thanks to the development of specific optimization tools. It is well known how finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, optimization methods that are based on statistical techniques and use estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model built from a dataset of inputs identified by a design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various meta-modeling techniques, the Kriging method appears excellent in accuracy, robustness, and efficiency compared to others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work to improve the structural optimization of a composite bumper for a racing car subjected to frontal impact. The specific energy absorption is the objective function to maximize, and the geometrical parameters, subject to design constraints, are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool to find the optimized solution through a domain reduction strategy. With the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
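The surrogate-based loop the abstract describes (DOE samples from FE runs, a Kriging fit, then optimization on the surrogate) can be sketched as follows. The thickness values and energy-absorption figures are invented for illustration, and scikit-learn's Gaussian process stands in for LS-OPT's Kriging implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical DOE: one design variable (wall thickness, mm) vs. specific
# energy absorption (kJ/kg) from a handful of FE crash simulations.
thickness = np.array([[1.0], [1.5], [2.0], [2.5], [3.0]])
sea = np.array([18.0, 24.5, 27.0, 25.5, 22.0])

# Fit a Kriging (Gaussian process) meta-model to the DOE samples.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              normalize_y=True)
gp.fit(thickness, sea)

# Maximize the surrogate on a dense grid instead of running more FE analyses.
grid = np.linspace(1.0, 3.0, 201).reshape(-1, 1)
pred = gp.predict(grid)
best = grid[np.argmax(pred)][0]
```

In a full domain-reduction strategy, the optimizer would then request new FE simulations near `best` and refit the surrogate on the shrunken region.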
Procedia PDF Downloads 254
2867 The Expression of Lipoprotein Lipase Gene with Fat Accumulations and Serum Biochemical Levels in Betong (KU Line) and Broiler Chickens
Authors: W. Loongyai, N. Saengsawang, W. Danvilai, C. Kridtayopas, P. Sopannarath, C. Bunchasak
Abstract:
Betong chicken is a slow-growing, lean strain of chicken, while the rapid growth of the broiler is accompanied by increased fat. We investigated the growth performance, fat accumulation, serum lipid levels, and lipoprotein lipase (LPL) gene expression of female Betong (KU line) chickens at the ages of 4 and 6 weeks. A total of 80 female Betong (KU line) chickens and 80 female broiler chickens were reared under an open system (each group had 4 replicates of 20 chicks per pen). The results showed that feed intake and average daily gain (ADG) of broiler chickens were significantly higher than those of Betong (KU line) (P < 0.01), while the feed conversion ratio (FCR) of Betong (KU line) was significantly lower than that of broiler chickens (P < 0.01) at 6 weeks. At 4 and 6 weeks, two birds per replicate were randomly selected and slaughtered. Carcass weight did not differ significantly between treatments; the percentages of abdominal fat and subcutaneous fat yield were higher in the broiler (P < 0.01) at 4 and 6 weeks. Total cholesterol and LDL levels of broilers were higher than those of Betong (KU line) at 4 and 6 weeks (P < 0.05). Abdominal fat samples were collected for total RNA extraction. The cDNA was amplified using primers specific for the LPL gene and analysed using real-time PCR. The results showed that LPL gene expression did not differ between Betong (KU line) and broiler chickens at the ages of 4 and 6 weeks (P > 0.05). Our results indicate that broiler chickens had a high growth rate and fat accumulation compared with Betong (KU line) chickens, whereas LPL gene expression did not differ between breeds.
Keywords: lipoprotein lipase gene, Betong (KU line), broiler, abdominal fat, gene expression
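The abstract does not state how relative expression was quantified, but real-time PCR comparisons of this kind are commonly computed with the Livak 2^(-ΔΔCt) method; a minimal sketch with invented Ct values (not data from the study):

```python
# Sketch of the standard Livak 2^(-ΔΔCt) relative-quantification method.
def fold_change(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Relative expression of a target gene versus a calibrator group,
    each normalized to a housekeeping (reference) gene."""
    delta_ct = ct_target - ct_reference          # sample normalization
    delta_ct_cal = ct_target_cal - ct_reference_cal  # calibrator normalization
    ddct = delta_ct - delta_ct_cal
    return 2 ** (-ddct)

# Example: identical ΔCt in both breeds gives a fold change of 1
# (no difference), consistent with the reported result for LPL.
fc = fold_change(22.0, 18.0, 21.5, 17.5)
```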
Procedia PDF Downloads 178
2866 Eco-Friendly Approach in the Management of Stored Sorghum Insect Pests in Small-Scale Farmers’ Storage Structures of Northern Nigeria
Authors: Mohammed Suleiman, Ibrahim Sani, Samaila Abubakar, Kabir Abdullahi Bindawa
Abstract:
Farmers’ storage structures in Pauwa village of Katsina State, Northern Nigeria, were simulated and incorporated with the application of leaf powders of Euphorbia balsamifera Aiton, Lawsonia inermis L., Mitracarpus hirtus (L.) DC. and Senna obtusifolia L. to search for more eco-friendly methods of managing insect pests of stored sorghum. The four most commonly grown sorghum varieties in the study area, namely “Farar Kaura” (FK), “Jar Kaura” (JK), “Yar Gidan Daudu” (YGD), and ICSV400 in threshed forms were used for the study. The four varieties (2.50 kg each) were packed in small polypropylene bags, mixed with the leaf powders at a concentration of 5% (w/w), and kept in small stores of the aforementioned village for 12 weeks. Insect pests recovered after 12 weeks were Sitophilus zeamais, Rhyzopertha dominica, Tribolium castaneum, Cryptolestes ferrugineus, and Oryzaephilus surinamensis. There were significantly fewer insect pests in treated sorghum than in untreated grain (p < 0.05). More weight losses were recorded in untreated grains than in those treated with the botanical powders. In terms of varieties, grain weight losses were in the order FK > JK > YGD > ICSV400. The botanicals also showed significant (p < 0.05) protectant ability against the weevils, with their performance ranked as E. balsamifera > L. inermis > M. hirtus > S. obtusifolia.
Keywords: botanical powders, infestations, insect pests, management, sorghum varieties, storage structures, weight losses
Procedia PDF Downloads 99
2865 Binderless Naturally-extracted Metal-free Electrocatalyst for Efficient NOₓ Reduction
Authors: Hafiz Muhammad Adeel Sharif, Tian Li, Changping Li
Abstract:
Recently, emissions of nitrogen and sulphur oxides (NOₓ, SO₂) have become a global issue, posing serious threats to health and the environment. Catalytic reduction of NOₓ and SOₓ gases into benign gases is considered one of the best approaches. However, regeneration of the catalyst, the high bond-dissociation energy of NOₓ (150.7 kcal/mol), the escape of intermediate gas (N₂O, a greenhouse gas) with the treated flue gas, and the limited activity of the catalyst remain great challenges. Here, a cheap, binderless, naturally extracted bass-wood thin carbon electrode (TCE) is presented, which shows excellent catalytic activity towards NOₓ reduction. The bass wood was carbonized at 900 ℃ and then thermally activated in the presence of CO₂ gas at 750 ℃. The thermal activation increased the epoxy groups on the surface of the TCE and enhanced the surface area as well as the degree of graphitization. The TCE has a unique, strongly interconnected 3D network with hierarchical micro/meso/macropores that allows a large electrode/electrolyte interface. Owing to these characteristics, the TCE exhibited excellent catalytic efficiency towards NOₓ (~83.3%) under ambient conditions, an enhanced catalytic response under pH variation and sulphite exposure, and excellent stability up to 168 hours. Moreover, a temperature-dependent activity trend was found: the highest catalytic activity was achieved at 80 ℃, beyond which the electrolyte became evaporative and performance decreased. The designed electrocatalyst shows great potential for effective NOₓ reduction and is highly cost-effective, green, and sustainable.
Keywords: electrocatalyst, NOₓ reduction, bass-wood electrode, integrated wet-scrubbing, sustainable
Procedia PDF Downloads 75
2864 Enhancement of Natural Convection Heat Transfer within Closed Enclosure Using Parallel Fins
Authors: F. A. Gdhaidh, K. Hussain, H. S. Qi
Abstract:
A numerical study of natural convection heat transfer in a water-filled cavity was carried out in 3D for a single-phase liquid cooling system using an array of parallel plate fins mounted on one wall of the cavity. The heat source represents a computer CPU with dimensions of 37.5×37.5 mm mounted on a substrate. A cold plate used as a heat sink is installed on the opposite vertical end of the enclosure. The air flow inside the computer case is created by an exhaust fan. A turbulent air flow is assumed, and the k-ε model is applied. The fins are installed on the substrate to enhance the heat transfer. The applied power ranges between 15-40 W. To determine the thermal behaviour of the cooling system, the effects of the heat input and the number of parallel plate fins were investigated. The results illustrate that as the fin number increases, the maximum heat source temperature decreases. However, when the fin number increases beyond a critical value, the temperature starts to increase because the fins are too closely spaced, obstructing the water flow. The introduction of parallel plate fins reduces the maximum heat source temperature by 10% compared to the case without fins. The cooling system maintains the maximum chip temperature at 64.68 ℃ at a heat input of 40 W, which is much lower than the recommended limit for computer chips of no more than 85 ℃, and hence the performance of the CPU is enhanced.
Keywords: chips limit temperature, closed enclosure, natural convection, parallel plate, single phase liquid
Procedia PDF Downloads 261
2863 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects
Authors: Ma Yuzhe, Burra Venkata Durga Kumar
Abstract:
The computer is a great invention. As people use computers more and more frequently, demand for PCs is growing, and the performance of computer hardware keeps rising to handle more complex processing and operation. However, the operating system, which provides the soul of the computer, stagnated for a time. Faced with the high price of UNIX (Uniplexed Information and Computing Service), batch after batch of personal computer owners could only give up. The Disk Operating System is too simple and leaves little room for innovation, so it is not a good choice. And macOS is a special operating system for Apple computers that cannot be widely used on other personal computers. In this environment, Linux, based on the UNIX system, was born. Linux combines the advantages of earlier operating systems and is built around a powerful modular kernel architecture. Linux supports all major Internet protocols, so it has very good network functions. Linux supports multiple users, and each user's files are isolated from the others'. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system: users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is constantly upgraded and improved, and many different versions have been issued, suitable for community use and commercial use. The Linux system has good security partly because of its file permission system. However, as vulnerabilities and hazards are constantly discovered, the security of the operating system in use also needs more attention. This article focuses on the analysis and discussion of Linux security issues.
Keywords: Linux, operating system, system management, security
Procedia PDF Downloads 108
2862 Project Time and Quality Management during Construction
Authors: Nahed Al-Hajeri
Abstract:
Time and cost are integral parts of every construction plan and can affect each party's contractual obligations. The performance of both time and cost is usually important to the client and the contractor during the project. Almost all construction projects experience time overruns, and these overruns are always expensive for both client and contractor. Construction of any project inside the gathering centers involves complex management of the workforce, materials, plant, machinery, new technologies, etc. It also involves many interdependent agencies, such as vendors and structural and functional designers, including various types of specialized engineers, as well as the support of contractors and specialized subcontractors. This paper mainly highlights the types of construction delays that cause projects to suffer time and cost overruns. It also discusses the delay causes and factors that contribute to construction sequence delays in oil and gas projects. Construction delay is one of the recurring problems in construction projects, and it has an adverse effect on project success in terms of time, cost, and quality. Some effective methods are identified to minimize delays in construction projects, such as: 1. site management and supervision, 2. effective strategic planning, 3. clear information and communication channels. Our research paper studies the types of delay with some real examples and statistical results and suggests solutions to overcome this problem.
Keywords: non-compensable delay, delays caused by force majeure, compensable delay, delays caused by the owner or the owner’s representative, non-excusable delay, delay caused by the contractor or the contractor’s representative, concurrent delay, delays resulting from two separate causes at the same time
Procedia PDF Downloads 239
2861 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad Daba, Jean-Pierre Dubois
Abstract:
Multipath fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper have utilized Poisson-modulated and weighted generalized Laguerre polynomials with controlling parameters and uncorrelated noise assumptions. In this paper, we investigate the statistics of a multi-diversity, stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent specular Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
Keywords: cellular communication, femto and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process
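A minimal simulation sketch of the doubly stochastic (Cox) core of such a model: the number of scattering paths is Poisson with a lognormal intensity, each path carries a complex Rayleigh mark, and a coherent specular term is added. This is a simplified stand-in (a constant line-of-sight amplitude rather than a Nakagami-distributed one, and no separate Rician centers), not the paper's full triply stochastic model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fading_envelope(n_trials, mu=1.0, sigma=0.5, los=1.0):
    """Monte Carlo draws of the received envelope under a marked Cox model:
    lognormal intensity -> Poisson path count -> complex Rayleigh marks,
    summed coherently with a specular component of amplitude `los`."""
    envelopes = np.empty(n_trials)
    for i in range(n_trials):
        lam = rng.lognormal(mu, sigma)      # doubly stochastic intensity
        n = rng.poisson(lam)                # random number of scatterers
        # complex Gaussian marks give Rayleigh-distributed path amplitudes
        paths = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
        envelopes[i] = np.abs(los + paths.sum())
    return envelopes

env = sample_fading_envelope(5000)
```

Histogramming `env` for different intensity parameters gives an empirical check against any closed-form envelope density derived analytically.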
Procedia PDF Downloads 446
2860 Simulation and Controller Tuning in a Photo-Bioreactor Applying the Taguchi Method
Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi
Abstract:
This study involves numerical simulations of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with control and optimization of the digital controller's parameters by the Taguchi method, carried out with MATLAB and Qualitek-4 software. Because biological processes involve parameters such as temperature, dissolved carbon dioxide, and biomass, as well as physical parameters such as light intensity and physiological conditions such as photosynthetic efficiency and light inhibition, control faces many challenges. Photo-bioreactors are not only efficient systems that facilitate the commercial production of microalgae as feed for aquaculture and food supplements, but are also used as a possible platform for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for the removal of heavy metals from wastewater. A digital controller is designed to control the light of the bioreactor, and the microalgae growth rate and carbon dioxide concentration inside the bioreactor are investigated. The optimal values of the controller parameters, obtained from S/N and ANOVA analysis in Qualitek-4, were compared with those obtained by the reaction curve, Cohen-Coon, and Ziegler-Nichols methods. Based on the sum of squared errors obtained for each of the control methods mentioned, the Taguchi method was selected as the best method for controlling the light intensity of the photo-bioreactor. Compared to the other control methods listed, it showed higher stability and a shorter settling time.
Keywords: photo-bioreactor, control and optimization, light intensity, Taguchi method
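The comparison by sum of squared errors maps directly onto Taguchi's smaller-the-better signal-to-noise ratio; a sketch with invented tracking errors (the method names mirror the paper's comparison, the numbers do not):

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean of squared responses). Higher is better."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

# Illustrative light-intensity tracking errors for two hypothetical tuning
# methods; the method with the highest S/N (smallest squared error) wins.
errors = {
    "taguchi": [0.10, 0.12, 0.09],
    "ziegler_nichols": [0.30, 0.25, 0.28],
}
best = max(errors, key=lambda m: sn_smaller_the_better(errors[m]))
```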
Procedia PDF Downloads 390
2859 C-Spine Imaging in a Non-trauma Centre: Compliance with NEXUS Criteria Audit
Authors: Andrew White, Abigail Lowe, Kory Watkins, Hamed Akhlaghi, Nicole Winter
Abstract:
The timing and appropriateness of diagnostic imaging are critical to the evaluation and management of traumatic injuries. Within the subclass of trauma patients, the prevalence of c-spine injury is less than 4%. However, the incidence of delayed diagnosis within this cohort has been documented as up to 20%, with inadequate radiological examination the most commonly cited issue. To assess those in whom c-spine injury cannot be fully excluded on clinical examination alone, and who should therefore undergo diagnostic imaging, a set of criteria is used to provide clinical guidance. The NEXUS (National Emergency X-Radiography Utilization Study) criteria are a validated clinical decision-making tool used to facilitate selective c-spine radiography. The criteria allow clinicians to determine whether cervical spine imaging can be safely avoided in appropriate patients. The NEXUS criteria are widely used within the Emergency Department setting given their ease of use and relatively straightforward application, and they are used in the Victorian State Trauma System's guidelines. This audit used retrospective data collection to examine the concordance of c-spine imaging in trauma patients with the NEXUS criteria and to assess compliance with state guidance on diagnostic imaging in trauma. Of the 183 patients who presented with trauma to the head, neck, or face (244 were excluded due to incorrect triage), 98 did not undergo imaging of the c-spine. Of those 98, 44% fulfilled at least one of the NEXUS criteria, meaning the c-spine could not be clinically cleared as per the current guidelines. The criterion most often met was intoxication, comprising 42% (18 of 43), with midline spinal tenderness (or absent documentation of this) the second most common at 23% (10 of 43). Intoxication being the most met criterion is significant but not unexpected, given the cohort of patients seen at St Vincent's and within many emergency departments in general.

Given that these patients will always meet the NEXUS criteria, an element of clinical judgment is likely needed, or concurrent use of the Canadian C-Spine Rule to exclude the need for imaging. Midline tenderness as a met criterion was often recorded in the context of poor or absent documentation, emphasizing the importance of clear and accurate assessments. A distracting injury was identified in 7 of the 43 patients; however, only one of these patients exhibited a thoracic injury (a T11 compression fracture), with the remainder comprising injuries to the extremities. Some studies suggest that c-spine imaging may not be required in the evaluable blunt trauma patient, despite distracting injuries, for body regions that do not involve the upper chest. This emphasizes the need for standardized definitions of distracting injury, at least at a departmental/regional level. The data highlight the currently poor application of the NEXUS guidelines, with likely common themes throughout emergency departments, highlighting the need for further education regarding implementation and potential refinement/clarification of the criteria. Of note, there appeared to be no significant differences between levels of experience with respect to inappropriately clearing the c-spine clinically.
Keywords: imaging, guidelines, emergency medicine, audit
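The NEXUS rule itself is simple boolean logic: imaging can be avoided only when all five low-risk criteria are absent. A sketch of how an audit script might encode it (criterion names paraphrased; an undocumented criterion should be treated as present):

```python
def nexus_low_risk(midline_tenderness, intoxicated, altered_alertness,
                   focal_neuro_deficit, distracting_injury):
    """NEXUS low-risk rule: the c-spine can be cleared clinically only
    when ALL five criteria are absent (each argument False)."""
    return not any([midline_tenderness, intoxicated, altered_alertness,
                    focal_neuro_deficit, distracting_injury])

# An intoxicated patient always fails the rule, as the audit observed.
needs_imaging = not nexus_low_risk(False, True, False, False, False)
```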
Procedia PDF Downloads 70
2858 Modal Analysis of Functionally Graded Materials Plates Using Finite Element Method
Authors: S. J. Shahidzadeh Tabatabaei, A. M. Fattahi
Abstract:
Modal analysis of an FGM plate composed of an Al2O3 ceramic phase and a 304 stainless steel metal phase was performed in this paper using ABAQUS software, with the assumption that the material behaviour is elastic and the mechanical properties (Young's modulus and density) vary in the thickness direction of the plate. To this end, a subroutine was written in the FORTRAN programming language and linked with ABAQUS. For the modal analysis, a finite element analysis was carried out on a model similar to those of other researchers, and the accuracy of the results was evaluated by comparison. The agreement of natural frequencies and mode shapes confirmed the validity of the results, the correct operation of the FORTRAN subroutine, and the high accuracy of the finite element model used in this research. After validation of the results, the effect of the material gradient (the n parameter) on the natural frequency was evaluated. In this regard, finite element analyses were carried out for different values of n in the simply supported configuration. It was observed that the natural frequency decreased as n increased: by increasing n, the share of the ceramic phase in the FGM plate decreases and the share of the steel phase increases, which reduces the stiffness of the FGM plate and thereby the natural frequency. This is because the Young's modulus of Al2O3 ceramic is 380 GPa, while that of SUS304 steel is 207 GPa.
Keywords: FGM plates, modal analysis, natural frequency, finite element method
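The role of the n parameter can be illustrated with the power-law rule of mixtures commonly assumed for FGM plates; the specific grading law here is an assumption for illustration, and only the two moduli come from the abstract:

```python
# Power-law (Voigt rule of mixtures) grading: the ceramic volume fraction
# is Vc = (z/h + 1/2)**n, so the modulus varies through the thickness from
# metal (bottom face) to ceramic (top face).
E_CERAMIC = 380e9   # Al2O3, Pa (value given in the abstract)
E_METAL = 207e9     # SUS304, Pa (value given in the abstract)

def youngs_modulus(z, h, n):
    """Effective modulus at through-thickness coordinate z in [-h/2, h/2]."""
    vc = (z / h + 0.5) ** n          # ceramic volume fraction
    return (E_CERAMIC - E_METAL) * vc + E_METAL

# As n grows, the ceramic share at the mid-plane shrinks, softening the
# plate, which is why the natural frequency drops with increasing n.
E_mid_n1 = youngs_modulus(0.0, 0.01, 1)   # n = 1: average of both phases
E_mid_n4 = youngs_modulus(0.0, 0.01, 4)   # n = 4: closer to the metal
```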
Procedia PDF Downloads 390
2857 Bilingualism Contributes to Cognitive Reserve in Parkinson's Disease
Authors: Arrate Barrenechea Garro
Abstract:
Background: Bilingualism has been shown to enhance cognitive reserve and potentially delay the onset of dementia symptoms. This study investigates the impact of bilingualism on cognitive reserve and the age of diagnosis in Parkinson's Disease (PD). Methodology: The study involves 16 non-demented monolingual PD patients and 12 non-demented bilingual PD patients, matched for age, sex, and years of education. All participants are native Spanish speakers, with Spanish as their first language (L1). Cognitive performance is assessed through a neuropsychological examination covering all cognitive domains. Cognitive reserve is measured using the Cognitive Reserve Index Questionnaire (CRIq), while language proficiency is evaluated using the Bilingual Language Profile (BLP). The age at diagnosis is recorded for both monolingual and bilingual patients. Results: Bilingual PD patients demonstrate higher scores on the CRIq compared to monolingual PD patients, with significant differences between the groups. Furthermore, there is a positive correlation between cognitive reserve (CRIq) and the utilization of the second language (L2) as indicated by the BLP. Bilingual PD patients are diagnosed, on average, three years later than monolingual PD patients. Conclusion: Bilingual PD patients exhibit higher levels of cognitive reserve compared to monolingual PD patients, as indicated by the CRIq scores. The utilization of the second language (L2) is positively correlated with cognitive reserve. Bilingual PD patients are diagnosed with PD, on average, three years later than monolingual PD patients. These findings suggest that bilingualism may contribute to cognitive reserve and potentially delay the onset of clinical symptoms associated with PD. This study adds to the existing literature supporting the relationship between bilingualism and cognitive reserve. 
Further research in this area could provide valuable insights into the potential protective effects of bilingualism in neurodegenerative disorders.
Keywords: bilingualism, cognitive reserve, diagnosis, Parkinson's disease
Procedia PDF Downloads 99
2856 Classifier for Liver Ultrasound Images
Authors: Soumya Sajjan
Abstract:
Liver cancer is among the most common cancers worldwide in men and women and is one of the few cancers still on the rise. Liver disease is the 4th leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues and organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Naturally, ultrasound liver lesion images contain considerable speckle noise, so developing a classifier for them is a challenging task. We take a fully automatic machine learning approach to developing this classifier. First, we segment the liver image and calculate textural features from the gray-level co-occurrence matrix and the run-length method. For classification, a Support Vector Machine is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, the features are calculated and the image is classified as a normal or diseased liver lesion. We hope the result will help the physician identify liver cancer non-invasively.
Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix
Procedia PDF Downloads 408
2855 Human 3D Metastatic Melanoma Models for in vitro Evaluation of Targeted Therapy Efficiency
Authors: Delphine Morales, Florian Lombart, Agathe Truchot, Pauline Maire, Pascale Vigneron, Antoine Galmiche, Catherine Lok, Muriel Vayssade
Abstract:
Targeted therapy molecules are used as a first-line treatment for metastatic melanoma with B-Raf mutation. Nevertheless, these molecules can cause side effects to patients and are efficient on 50 to 60 % of them. Indeed, melanoma cell sensitivity to targeted therapy molecules is dependent on tumor microenvironment (cell-cell and cell-extracellular matrix interactions). To better unravel factors modulating cell sensitivity to B-Raf inhibitor, we have developed and compared several melanoma models: from metastatic melanoma cells cultured as monolayer (2D) to a co-culture in a 3D dermal equivalent. Cell response was studied in different melanoma cell lines such as SK-MEL-28 (mutant B-Raf (V600E), sensitive to Vemurafenib), SK-MEL-3 (mutant B-Raf (V600E), resistant to Vemurafenib) and a primary culture of dermal human fibroblasts (HDFn). Assays have initially been performed in a monolayer cell culture (2D), then a second time on a 3D dermal equivalent (dermal human fibroblasts embedded in a collagen gel). All cell lines were treated with Vemurafenib (a B-Raf inhibitor) for 48 hours at various concentrations. Cell sensitivity to treatment was assessed under various aspects: Cell proliferation (cell counting, EdU incorporation, MTS assay), MAPK signaling pathway analysis (Western-Blotting), Apoptosis (TUNEL), Cytokine release (IL-6, IL-1α, HGF, TGF-β, TNF-α) upon Vemurafenib treatment (ELISA) and histology for 3D models. In 2D configuration, the inhibitory effect of Vemurafenib on cell proliferation was confirmed on SK-MEL-28 cells (IC50=0.5 µM), and not on the SK-MEL-3 cell line. No apoptotic signal was detected in SK-MEL-28-treated cells, suggesting a cytostatic effect of the Vemurafenib rather than a cytotoxic one. The inhibition of SK-MEL-28 cell proliferation upon treatment was correlated with a strong expression decrease of phosphorylated proteins involved in the MAPK pathway (ERK, MEK, and AKT/PKB). 
Vemurafenib (from 5 µM to 10 µM) also slowed down HDFn proliferation in both cell culture configurations (monolayer and 3D dermal equivalent). SK-MEL-28 cells cultured in the dermal equivalent were still sensitive to high Vemurafenib concentrations. To better characterize the impact of each cell population (melanoma cells, dermal fibroblasts) on Vemurafenib efficacy, cytokine release is being studied in the 2D and 3D models. We have successfully developed and validated a relevant 3D model mimicking cutaneous metastatic melanoma and its tumor microenvironment. This 3D melanoma model will be made more complex by adding a third cell population, keratinocytes, allowing us to characterize the influence of the epidermis on melanoma cell sensitivity to Vemurafenib. In the long run, the establishment of more relevant 3D melanoma models with patients' cells might be useful for personalized therapy development. The authors would like to thank the Picardie region and the European Regional Development Fund (ERDF) 2014/2020 for the funding of this work, and the Oise committee of "La Ligue contre le Cancer".
Keywords: 3D human skin model, melanoma, tissue engineering, vemurafenib efficiency
Procedia PDF Downloads 302
2854 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8 and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
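The abstract reports reconstructing cross-sectional images from the capacitance data but does not name the algorithm; linear back projection (LBP) is the usual baseline in ECT, so a rough sketch under that assumption, with a synthetic sensitivity matrix and synthetic measurements, might look like this:

```python
import numpy as np

def lbp_reconstruct(capacitances, sensitivity):
    """Linear back projection: weight each pixel by the sensitivity of
    every electrode pair to it, then normalize by the total sensitivity.

    capacitances: (m,) normalized capacitance measurements in [0, 1]
    sensitivity:  (m, n) sensitivity map (m electrode pairs, n pixels)
    """
    numerator = sensitivity.T @ capacitances              # (n,)
    denominator = sensitivity.T @ np.ones(len(capacitances))
    g = numerator / np.maximum(denominator, 1e-12)        # avoid divide-by-zero
    return np.clip(g, 0.0, 1.0)                           # permittivity estimate per pixel

def num_pairs(n_electrodes):
    """Independent electrode-pair measurements for an N-electrode sensor."""
    return n_electrodes * (n_electrodes - 1) // 2

# Toy example: an 8-electrode sensor gives 28 measurements on a 16x16 grid
m, n = num_pairs(8), 16 * 16
rng = np.random.default_rng(0)
S = rng.random((m, n))     # placeholder sensitivity matrix
c = rng.random(m)          # placeholder normalized capacitances
image = lbp_reconstruct(c, S)
```

An N-electrode sensor yields N(N-1)/2 independent electrode-pair measurements (28, 66, and 120 for the 8-, 12-, and 16-electrode configurations), which is consistent with the 16-electrode design resolving phase boundaries better at a higher computational cost.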
Procedia PDF Downloads 6
2853 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation to the time constraints provided for quality assurance of complex software systems. Hence, a computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. 
The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
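The ‘Subset Accuracy’ criterion cited above is the exact-match ratio: a prediction counts only if the full set of predicted automation components for a test step matches the reference set. A minimal sketch (the component names are invented for illustration):

```python
def subset_accuracy(y_true, y_pred):
    """Fraction of test cases whose full set of predicted automation
    components exactly matches the reference set (exact-match ratio)."""
    assert len(y_true) == len(y_pred)
    exact = sum(set(t) == set(p) for t, p in zip(y_true, y_pred))
    return exact / len(y_true)

# Toy example: each test step maps to one or more automation components
reference = [{"OpenDoor"}, {"SetSpeed", "CheckSpeed"}, {"Shutdown"}]
predicted = [{"OpenDoor"}, {"SetSpeed"}, {"Shutdown"}]
print(subset_accuracy(reference, predicted))  # 2 of 3 exact matches, ~0.667
```

Note that the metric gives no partial credit: the second test step is penalized in full even though one of its two components was predicted correctly, which is why subset accuracy is a strict criterion for multi-label problems.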
Procedia PDF Downloads 131
2852 Modified Side Plate Design to Suppress Lateral Torsional Buckling of H-Beam for Seismic Application
Authors: Erwin, Cheng-Cheng Chen, Charles J. Salim
Abstract:
One of the methods to solve the lateral torsional buckling (LTB) problem is to use side plates to increase the buckling resistance of the beam. Some modifications in designing the side plates are made in this study to simplify construction in the field and reduce the cost. In certain regions, side plates are not added: (1) at the beam end, to preserve space for bolt installation, where the beam is instead strengthened by adding cover plates at both flanges, and (2) at the middle of the span, where the moment is smaller. Three small-scale full-span beam specimens were tested under cyclic loading to investigate the LTB resistance and the ductility of the proposed design method. Test results show that the LTB deformation can be effectively suppressed and a very high ductility level can be achieved. Following the test, a finite element analysis (FEA) model was established and verified using the test results. An intensive parametric study was conducted using the established FEA model. The analysis reveals that the length of the side plates is the most important parameter determining the performance of the beam, and the required side plate length is determined by (1) the beam depth to flange width ratio, (2) the beam slenderness ratio, (3) the strength and thickness of the side plates, (4) the compactness of the beam web and flange, and (5) the beam yield strength. At the end of the paper, a design formula to calculate the required side plate length is suggested.Keywords: cover plate, earthquake resistant design, lateral torsional buckling, side plate, steel structure
Procedia PDF Downloads 174
2851 Magnetic SF (Silk Fibroin) E-Gel Scaffolds Containing bFGF-Conjugated Fe3O4 Nanoparticles
Authors: Z. Karahaliloğlu, E. Yalçın, M. Demirbilek, E.B. Denkbaş
Abstract:
Critical-sized bone defects caused by trauma, bone diseases, prosthetic implant revision or tumor excision cannot be repaired by physiological regenerative processes. Current orthopedic approaches to critical-sized bone defects use autologous bone grafts, bone allografts, or synthetic graft materials. However, these strategies cannot completely solve the problem, and they motivate the development of novel, effective biological scaffolds for tissue engineering and regenerative medicine applications. In particular, scaffolds combined with a variety of bio-agents emerge as fundamental tools to support the regeneration of damaged bone tissues, owing to their ability to promote cell growth and function. In this study, a magnetic silk fibroin (SF) hydrogel scaffold was prepared by an electrogelation process from a concentrated Bombyx mori silk fibroin (8 wt%) aqueous solution. For enhancement of osteoblast-like cell (SaOS-2) growth and adhesion, basic fibroblast growth factor (bFGF) was physically conjugated to HSA-coated magnetic nanoparticles (Fe3O4), and magnetic SF e-gel scaffolds were prepared by incorporation of Fe3O4, HSA (human serum albumin)=Fe3O4 and HSA=Fe3O4-bFGF nanoparticles. HSA=Fe3O4 loaded, HSA=Fe3O4-bFGF loaded and bare SF e-gel scaffolds were characterized using scanning electron microscopy (SEM). For the cell studies, the human osteoblast-like cell line SaOS-2 was used, and an MTT assay was used to assess the cytotoxicity of the magnetic silk fibroin e-gel scaffolds and the cell density on these surfaces. For the evaluation of osteogenic activation, ALP (alkaline phosphatase), the amount of mineralized calcium, total protein and collagen were studied. Fe3O4 nanoparticles were successfully synthesized, and bFGF was conjugated to HSA=Fe3O4 nanoparticles with a binding yield of 97.5% and a particle size of 71.52±2.3 nm. Electron microscopy images of the prepared HSA- and bFGF-incorporated SF e-gel scaffolds showed a 3D porous morphology. 
In terms of water uptake, the bFGF-conjugated HSA=Fe3O4 nanoparticles showed the best water absorption behavior among all groups. In the in-vitro cell culture studies performed using the SaOS-2 cell line, coating the Fe3O4 nanoparticle surface with a protein enhances cell viability, and both the HSA coating and the bFGF conjugation have an inductive effect on cell proliferation. According to the ALP activity and total protein results, markers of bone formation and osteoblast differentiation, the HSA=Fe3O4-bFGF loaded SF e-gels had significantly enhanced ALP activity. Osteoblasts cultured on HSA=Fe3O4-bFGF loaded SF e-gels deposited more calcium compared with plain SF e-gel. The proposed magnetic scaffolds seem promising for bone tissue regeneration and will be used in future work for various applications.Keywords: basic fibroblast growth factor (bFGF), e-gel, iron oxide nanoparticles, silk fibroin
Procedia PDF Downloads 284
2850 A Comparative Study on Deep Learning Models for Pneumonia Detection
Authors: Hichem Sassi
Abstract:
Pneumonia, being a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing mortality associated with pneumonia. Presently, X-ray diagnosis stands out as a reasonably effective method. However, the manual scrutiny of a patient's X-ray chest radiograph by a proficient practitioner usually requires 5 to 15 minutes. In situations where cases are concentrated, this places immense pressure on clinicians for timely diagnosis. Relying solely on the visual acumen of imaging doctors proves to be inefficient, particularly given the low speed of manual analysis. Therefore, the integration of artificial intelligence into the clinical image diagnosis of pneumonia becomes imperative. Additionally, AI recognition is notably rapid, with convolutional neural networks (CNNs) demonstrating superior performance compared to human counterparts in image identification tasks. To conduct our study, we utilized a dataset comprising chest X-ray images obtained from Kaggle, encompassing a total of 5216 training images and 624 test images, categorized into two classes: normal and pneumonia. Employing five mainstream network algorithms, we undertook a comprehensive analysis to classify these diseases within the dataset, subsequently comparing the results. The integration of artificial intelligence, particularly through improved network architectures, stands as a transformative step towards more efficient and accurate clinical diagnoses across various medical domains.Keywords: deep learning, computer vision, pneumonia, models, comparative study
Procedia PDF Downloads 64
2849 Simulation of Utility Accrual Scheduling and Recovery Algorithm in Multiprocessor Environment
Authors: A. Idawaty, O. Mohamed, A. Z. Zuriati
Abstract:
This paper presents the development of an event-based discrete event simulation (DES) for a recovery algorithm known as Backward Recovery Global Preemptive Utility Accrual Scheduling (BR_GPUAS). This algorithm implements the backward recovery (BR) mechanism as a fault recovery solution under the existing Time/Utility Function/Utility Accrual (TUF/UA) scheduling domain for a multiprocessor environment. The BR mechanism attempts to take faulty tasks back to their initial safe state and then re-executes the affected section of the faulty tasks to enable recovery. Considering that faults may occur in the components of any system, a fault tolerance system that can nullify the erroneous effect needs to be developed. The current TUF/UA scheduling algorithm uses the abortion recovery mechanism and simply aborts the erroneous task as its fault recovery solution. None of the existing algorithms in the TUF/UA multiprocessor scheduling domain has considered transient faults and implemented the BR mechanism as a fault recovery mechanism to nullify the erroneous effect and solve the recovery problem in this domain. The developed BR_GPUAS simulator has derived its set of parameters, events and performance metrics from a detailed analysis of the base model. Simulation results revealed that the BR_GPUAS algorithm can save almost 20-30% of the accumulated utility, making it reliable and efficient for real-time applications in the multiprocessor scheduling environment.Keywords: real-time system (RTS), time utility function/ utility accrual (TUF/UA) scheduling, backward recovery mechanism, multiprocessor, discrete event simulation (DES)
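The BR_GPUAS simulator itself is not reproduced here; as a toy illustration of the difference between abortion recovery and backward recovery under a step time/utility function, a single-processor event-driven sketch (all task parameters invented) might look like this:

```python
import heapq

def simulate(tasks, fault_at=None, recovery="backward"):
    """Minimal event-driven sketch of utility accrual on one processor.

    tasks: list of (release, exec_time, deadline, utility). A constant
    utility is accrued only if the task finishes by its deadline (a step
    TUF). A fault in a task either aborts it (utility forfeited) or,
    with backward recovery, rolls it back and re-executes it once.
    """
    clock, accrued = 0.0, 0.0
    events = [(release, i) for i, (release, *_rest) in enumerate(tasks)]
    heapq.heapify(events)                  # event queue ordered by release time
    while events:
        t, i = heapq.heappop(events)
        release, exec_time, deadline, utility = tasks[i]
        clock = max(clock, t)              # wait for the task to be released
        clock += exec_time                 # run the task to completion
        if fault_at == i:
            if recovery == "abort":
                continue                   # abortion recovery: utility forfeited
            clock += exec_time             # backward recovery: re-execute the task
        if clock <= deadline:
            accrued += utility             # step TUF: full utility before deadline
    return accrued

tasks = [(0, 2, 10, 5.0), (1, 3, 12, 8.0)]
print(simulate(tasks, fault_at=0, recovery="abort"))     # faulty task aborted
print(simulate(tasks, fault_at=0, recovery="backward"))  # faulty task recovered
```

In the toy run, aborting the faulty task forfeits its utility, while backward recovery re-executes it and still meets the deadline, so the full utility is accrued; this loosely mirrors the 20-30% accumulated-utility savings reported for BR_GPUAS.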
Procedia PDF Downloads 304
2848 Evaluation of Cyclic Steam Injection in Multi-Layered Heterogeneous Reservoir
Authors: Worawanna Panyakotkaew, Falan Srisuriyachai
Abstract:
Cyclic steam injection (CSI) is a thermal recovery technique performed by periodically injecting heated steam into a heavy oil reservoir. Oil viscosity is substantially reduced by means of the heat transferred from the steam. Together with gas pressurization, oil recovery is greatly improved. Nevertheless, predicting the effectiveness of the process is difficult when the reservoir contains a degree of heterogeneity. Therefore, heterogeneity, together with the reservoir properties of interest, must be evaluated prior to field implementation. In this study, a thermal reservoir simulation program is utilized. The reservoir model is first constructed as multi-layered with a coarsening upward sequence. The highest permeability is located in the top layer, with descending permeability values in the lower layers. Steam is injected from two wells located diagonally in a quarter five-spot pattern. Heavy oil is produced by adjusting operating parameters including soaking period and steam quality. After selecting the best conditions for both parameters, yielding the highest oil recovery, the effects of the degree of heterogeneity (represented by the Lorenz coefficient), vertical permeability and permeability sequence are evaluated. Surprisingly, the simulation results show that reservoir heterogeneity benefits the CSI technique. Increasing reservoir heterogeneity worsens the permeability distribution. High permeability contrast results in steam intruding into the upper layers. Once the temperature cools down during the back-flow period, condensed water percolates downward, resulting in high oil saturation in the top layers. Gas saturation appears on top after a while, causing better propagation of steam in the following cycle due to the high compressibility of gas. A large steam chamber therefore covers most of the area in the upper zone. Oil recovery reaches approximately 60%, which is about 20% higher than in the homogeneous reservoir case. Vertical permeability also benefits CSI. 
Expansion of the steam chamber occurs within a shorter time from the upper to the lower zone. For the fining upward permeability sequence, where the permeability values are reversed from the previous case, steam does not override to the top layers due to their low permeability. Propagation of the steam chamber occurs in the middle of the reservoir, where permeability is high enough. The rate of oil recovery is slower compared to the coarsening upward case due to the lower permeability at the location where propagation of the steam chamber occurs. Even though the CSI technique produces oil quite slowly in early cycles, once the steam chamber is formed deep in the reservoir, heat is delivered to the formation quickly in later cycles. Since reservoir heterogeneity is unavoidable, a thorough understanding of its effect is essential. This study shows that the CSI technique might be a suitable solution for highly heterogeneous reservoirs. This competitive technique also shows benefits in terms of heat consumption, as steam is injected periodically.Keywords: cyclic steam injection, heterogeneity, reservoir simulation, thermal recovery
Procedia PDF Downloads 457
2847 Vertical Village Buildings as Sustainable Strategy to Re-Attract Mega-Cities in Developing Countries
Authors: M. J. Eichner, Y. S. Sarhan
Abstract:
The overall purpose of this study has been the evaluation of ‘Vertical Villages’ as a new sustainable building typology that significantly reduces the negative impacts of rapid urbanization processes in third world capital cities. Commonly in fast-growing cities, housing and job supply, educational and recreational opportunities, as well as public transportation infrastructure do not accommodate rapid population growth, exposing people to noise- and emission-polluted living environments with low-quality neighborhoods and a lack of recreational areas. Like many others, Egypt’s capital city Cairo, which according to the UN faces annual population growth of up to 428,000 people, is struggling to address the general deterioration of urban living conditions. New settlement typologies and urban reconstruction approaches hardly follow sustainable urbanization principles or socio-ecologic urbanization models, with severe effects not only for inhabitants but also for the local environment and the global climate. The authors prove that ‘Vertical Village’ buildings can offer a sustainable solution for increasing urban density while at the same time significantly improving living quality and the urban environment. Inserted within high-density urban fabrics, the ecologic and socio-cultural conditions of low-quality neighborhoods can be transformed into districts that consider all needs of sustainable and social urban life. This study analyzes existing building typologies in Cairo’s «low quality - high density» districts Ard el Lewa, Dokki and Mohandesen against benchmarks for sustainable residential buildings, identifying major problems and deficits. In three case study design projects, the sustainable transformation potential of ‘Vertical Village’ buildings is laid out, and comparative studies show the improvement of the urban microclimate, safety, social diversity, sense of community, aesthetics, privacy, efficiency, healthiness and accessibility. 
The main result of the paper is that the disadvantages of density and overpopulation in developing countries can be converted into advantages with ‘Vertical Village’ buildings, achieving attractive and environmentally friendly living environments with multiple synergies. The paper documents, based on scientific criteria, that mixed-use vertical building structures, designed according to the sustainable principles of low-rise housing, can serve as an alternative for converting «low quality - high density» districts in megacities, opening a pathway for governments to achieve sustainable urban transformation goals. Neglected informal urban districts, home to millions of the poorer population groups, can be converted into healthier living and working environments.Keywords: sustainable, architecture, urbanization, urban transformation, vertical village
Procedia PDF Downloads 123
2846 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and exponential smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels. 
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that parallelly integrates 2D spectrogram and derivative heatmap techniques. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021) under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. 
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation
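The exact Times2D pipeline is not given in the abstract; a minimal sketch of the two parallel 2D representations it describes, a short-time FFT spectrogram for the frequency domain and a derivative heatmap for the time domain, could look like the following (window sizes and function names are illustrative, not the paper's):

```python
import numpy as np

def spectrogram_2d(x, win=32, hop=16):
    """Short-time FFT magnitudes: rows = frequency bins, cols = windows.
    Captures periodicity (the frequency-domain view of the series)."""
    frames = [x[s:s + win] for s in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

def derivative_heatmap(x, win=32, hop=16):
    """First and second differences per window, stacked into a 2D map.
    Highlights sharp fluctuations and turning points (time-domain view)."""
    d1, d2 = np.diff(x, 1), np.diff(x, 2)
    rows = [np.concatenate([d1[s:s + win - 1], d2[s:s + win - 2]])
            for s in range(0, len(x) - win + 1, hop)]
    return np.stack(rows).T

# Toy series: a period-16 sine plus noise
t = np.arange(256)
x = np.sin(2 * np.pi * t / 16) + 0.1 * np.random.default_rng(0).standard_normal(256)
S = spectrogram_2d(x)        # shape (win//2 + 1, n_windows)
D = derivative_heatmap(x)    # shape ((win-1) + (win-2), n_windows)
```

Once the series is lifted into these 2D maps, standard computer-vision machinery (2D convolutions, attention over patches) can be applied, which is the core idea the abstract describes.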
Procedia PDF Downloads 41
2845 Modeling Pronunciations of Arab Broca’s Aphasics Using Mosstalk Words Technique
Authors: Sadeq Al Yaari, Fayza Alhammadi, Ayman Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Adham Al Yaari, Sajedah Al Yaari, Saleh Al Yami
Abstract:
Background: There has been a debate in the literature over the years as to whether or not the MossTalk Words program fits Arab Broca’s aphasics (BAs), due to language differences and also the fact that the technique has not yet been used for aphasics with semantic dementia (SD aphasics). Aims: To oversimplify the above-mentioned debate slightly for purposes of exposition, the purpose of the present study is to investigate the usability of this program, as well as pictures and community, as therapeutic techniques for both Arab BAs and SD aphasics. Method: The subjects of this study are two Saudi aphasics (53 and 57 years old, respectively). The former suffers from Broca’s aphasia due to a stroke, while the latter suffers from semantic dementia. Both aphasics can speak English and have used the MossTalk Words program in addition to intensive picture-naming therapeutic sessions for two years. They were tested by one of the researchers four times (once per six months). The families of the two subjects, in addition to their relatives and friends, played a major part in all therapeutic sessions. Conclusion: Results show that, on average across all therapeutic sessions, the MossTalk Words program was clearly more effective in modeling the BA’s pronunciation than that of the SD aphasic. Furthermore, intensive picture-naming exercises, in addition to the positive role of community members, played a major role in the progress of the two subjects’ performance.Keywords: MossTalk Words, program, technique, Broca’s aphasia, semantic dementia, subjects, picture, community
Procedia PDF Downloads 43
2844 A Case of Bilateral Vulval Abscess with Pelvic Fistula in an Immunocompromised Patient with Colostomy: A Diagnostic Challenge
Authors: Paul Feyi Waboso
Abstract:
This case report presents a 57-year-old female patient with a history of colon cancer, colostomy, and immunocompromise, who presented with an unusual bilateral vulval abscess, more prominent on the left side. Due to the atypical presentation, an MRI was performed, revealing a pelvic collection and a fistulous connection between the pelvis and vulva. This finding prompted an urgent surgical intervention. This case highlights the diagnostic and therapeutic challenges of managing complex abscesses and fistulas in immunocompromised patients. Introduction: Vulval abscesses in immunocompromised individuals can present with atypical features and may be associated with complex pathologies. Patients with a history of cancer, colostomy, and immunocompromise are particularly prone to infections and may present with unusual manifestations. This report discusses a case of a large bilateral vulval abscess with an underlying pelvic fistula, emphasizing the importance of advanced imaging in cases with atypical presentations. Case Presentation: A 57-year-old female with a known history of colon cancer, treated with colostomy, presented with severe pain and swelling in the vulval area. Physical examination revealed bilateral vulval swelling, with the abscess on the left side appearing larger and more pronounced than on the right. Given her immunocompromised status and the unusual nature of the presentation, we requested an MRI of the pelvis, suspecting an underlying pathology beyond a typical abscess. Investigations: MRI imaging revealed a significant pelvic collection and identified a fistulous tract between the pelvis and the vulva. This confirmed that the vulval abscess was connected to a deeper pelvic infection, necessitating urgent intervention. Management: After consultation with the multidisciplinary team (MDT), it was agreed that the patient required surgical intervention, having had 48 hours of antibiotics. 
The patient underwent evacuation of the left-sided vulval abscess under spinal anesthesia. During surgery, the pelvic collection was drained of 200 ml of pus. Outcome and Follow-Up: Postoperative recovery was closely monitored due to the patient’s immunocompromised state. Follow-up imaging and clinical evaluation showed improvement in symptoms, with gradual resolution of infection. The patient was scheduled for regular follow-up visits to monitor for recurrence or further complications. Discussion: Bilateral vulval abscesses are uncommon and, in an immunocompromised patient, warrant thorough investigation to rule out deeper infectious or fistulous connections. This case underscores the utility of MRI in identifying complex fistulous tracts and highlights the importance of a multidisciplinary approach in managing such high-risk patients. Conclusion: This case illustrates a rare presentation of bilateral vulval abscess with an associated pelvic fistula.Keywords: vulval abscess, MDT team, colon cancer with pelvic fistula, vulval skin condition
Procedia PDF Downloads 15
2843 An Information-Based Approach for Preference Method in Multi-Attribute Decision Making
Authors: Serhat Tuzun, Tufan Demirel
Abstract:
Multi-Criteria Decision Making (MCDM) is the modelling of real-life problems we encounter. It is a discipline that aids decision makers who are faced with conflicting alternatives in making an optimal decision. MCDM problems can be classified into two main categories, Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM), based on their different purposes and data types. Although various MADM techniques have been developed for the problems encountered, their methodology is limited in modelling real life. Moreover, objective results are hard to obtain, and the findings are generally derived from subjective data. Although new and modified techniques have been developed by presenting new approaches such as fuzzy logic, comprehensive techniques, even though they are better at modelling real life, could not find a place in real-world applications because they are hard to apply due to their complex structure. These constraints restrict the development of MADM. This study aims to conduct a comprehensive analysis of preference methods in MADM and propose an approach based on information. For this purpose, a detailed literature review has been conducted, and current approaches, with their advantages and disadvantages, have been analyzed. Then the approach is introduced. In this approach, the performance values of the criteria are calculated in two steps: first by determining the distribution of each attribute and standardizing it, then by calculating the information of each attribute as informational energy.Keywords: literature review, multi-attribute decision making, operations research, preference method, informational energy
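The abstract leaves the informational-energy formula implicit; assuming Onicescu's classical definition, E = Σᵢ pᵢ² over the standardized attribute distribution, a minimal sketch of the second step is:

```python
def informational_energy(values):
    """Onicescu informational energy of one attribute: normalize the
    (positive) performance values to a distribution p, then return
    sum(p_i^2). Ranges from 1/n for a uniform distribution (least
    informative) up to 1 when all weight sits on one alternative."""
    total = sum(values)
    p = [v / total for v in values]
    return sum(pi * pi for pi in p)

# Toy example: one attribute's performance over four alternatives
uniform = informational_energy([5, 5, 5, 5])   # 1/4: attribute carries no information
skewed = informational_energy([10, 1, 1, 1])   # closer to 1: strongly discriminating
print(uniform, skewed)
```

An attribute whose values barely differ across alternatives thus contributes little discriminating information, while a sharply skewed attribute contributes more, which is the rationale for using informational energy as an attribute weight.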
Procedia PDF Downloads 224
2842 Tailorability of Poly(Aspartic Acid)/BSA Complex by Self-Assembling in Aqueous Solutions
Authors: Loredana E. Nita, Aurica P. Chiriac, Elena Stoleru, Alina Diaconu, Tudorachi Nita
Abstract:
Self-assembly processes are an attractive method of forming new and complex structures between macromolecular compounds for specific applications. In this context, intramolecular and intermolecular bonds play a key role during self-assembly in the preparation of carrier systems for bioactive substances. Polyelectrolyte complexes (PECs) are formed through electrostatic interactions, and though these are significantly weaker than covalent linkages, the complexes are sufficiently stable owing to the association processes. The relative ease of PEC formation makes them a versatile tool for the preparation of various materials, with properties that can be tuned by adjusting several parameters, such as the chemical composition and structure of the polyelectrolytes, the pH and ionic strength of the solutions, the temperature, and post-treatment procedures. For example, protein-polyelectrolyte complexes (PPCs) play an important role in various chemical and biological processes, such as protein separation, enzyme stabilization and polymeric drug delivery systems. The present investigation is focused on the evaluation of PPC formation between a synthetic polypeptide (poly(aspartic acid), PAS) and a natural protein (bovine serum albumin, BSA). The PPC obtained from PAS and BSA in different ratios was investigated by corroborating various characterization techniques, such as spectroscopy, microscopy, thermogravimetric analysis, DLS and zeta potential determination, measurements which were performed in static and/or dynamic conditions. The static contact angle of the sample films was also determined in order to evaluate the changes brought upon the surface free energy of the prepared PPCs in interdependence with the composition of the complexes. 
The evolution of the hydrodynamic diameter and zeta potential of the PPC, recorded in situ, confirms conformational changes in both partners, a 1/1 protein-to-polyelectrolyte ratio being beneficial for the preparation of a stable PPC. The study also evidenced the dependence of PPC formation on the preparation temperature: at low temperatures, the PPC forms with a compact structure of small dimensions and a hydrodynamic diameter close to that of BSA. The behavior of the prepared PPCs under thermal treatment is consistent with the composition of the complexes. Contact angle determination shows that the cohesion of the PPC films increases and is higher than that of BSA films. The new PPC films are also more hydrophobic, which corresponds to good adhesion of red blood cells onto the surface of the PAS/BSA interpenetrated systems. SEM investigation likewise evidenced the specific internal structure of the PPC, consisting of phases whose size and shape depend on the interpolymer mixture composition.
Keywords: polyelectrolyte-protein complex, bovine serum albumin, poly(aspartic acid), self-assembly
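The abstract does not specify how surface free energy was derived from the static contact angles. A minimal sketch of the commonly used Owens-Wendt two-liquid method is given below; the contact angles are hypothetical illustration values, not data from the study, while the probe-liquid surface tension components are standard literature values.

```python
import math

# Owens-Wendt relation: gamma_l*(1 + cos(theta)) = 2*(sqrt(gs_d*gl_d) + sqrt(gs_p*gl_p))
# Probe liquids (mN/m): (total, dispersive, polar) - standard literature values
WATER = (72.8, 21.8, 51.0)
DIIODOMETHANE = (50.8, 50.8, 0.0)

def surface_energy(theta_water_deg, theta_dim_deg):
    """Return (dispersive, polar, total) surface free energy of a film in mN/m."""
    # Diiodomethane has no polar component, so it fixes the dispersive part.
    g, gd, _ = DIIODOMETHANE
    lhs = g * (1.0 + math.cos(math.radians(theta_dim_deg)))
    sqrt_gs_d = lhs / (2.0 * math.sqrt(gd))
    # Water's equation then yields the polar part.
    g, gd, gp = WATER
    lhs = g * (1.0 + math.cos(math.radians(theta_water_deg)))
    sqrt_gs_p = (lhs - 2.0 * sqrt_gs_d * math.sqrt(gd)) / (2.0 * math.sqrt(gp))
    gs_d, gs_p = sqrt_gs_d ** 2, sqrt_gs_p ** 2
    return gs_d, gs_p, gs_d + gs_p

# Hypothetical contact angles (deg) for a PPC film: water 65, diiodomethane 40
gs_d, gs_p, total = surface_energy(65.0, 40.0)
print(f"dispersive = {gs_d:.1f}, polar = {gs_p:.1f}, total = {total:.1f} mN/m")
```

A lower polar component and higher total cohesion would be consistent with the increased hydrophobicity reported for the PPC films.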
Procedia PDF Downloads 244
2841 Creative Element Analysis of Machinery Creativity Contest Works
Authors: Chin-Pin Chen, Shi-Chi Shiao, Ting-Hao Lin
Abstract:
Industry currently faces rapid technological development worldwide and drastic changes in the economic environment, so the development trend is gradually shifting away from labor and toward leading both industry and academia with innovation and creativity. The machinery industry shows the same trend. Based on the aims of the Creativity White Paper, the Ministry of Education in Taiwan promotes various creativity contests to keep pace with this trend, and domestic students and enterprises have performed well in domestic and international creativity contests in recent years. Winning an award among so many entries implies that such works contain important creative elements. A literature review and in-depth interviews with five instructors of award-winning entries were first conducted to derive 15 machinery creative elements, which were then compared against the award-winning machinery works of the past five years to understand the relationship between awarded works and creative elements. The statistical analysis shows that the IDEA (Industrial Design Excellence Award) covers the most creative elements among the four major international creativity contests; that is, its review criteria address the creative elements most comprehensively and are comparatively stricter. Among the groups participating in creativity contests, enterprises consider more creative elements in their works than the other two groups. Across these contest works, the creative elements of "replacement or improvement", "convenience", and "modeling" show the highest significance. These findings are expected to provide domestic colleges and universities with a reference for participating in creativity-related contests.
Keywords: machinery, creative elements, creativity contest, creativity works
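The comparison of awarded works against the 15 derived elements amounts to a frequency tally of which elements each work exhibits. A minimal sketch of that tally, using hypothetical work codings (the actual works and element assignments are not given in the abstract), could look like:

```python
from collections import Counter

# Hypothetical coding of three awarded works against the derived creative elements
works = {
    "work_A": ["replacement or improvement", "convenience", "modeling"],
    "work_B": ["convenience", "modeling", "safety"],
    "work_C": ["replacement or improvement", "convenience"],
}

# Count how often each creative element appears across the awarded works
freq = Counter(element for elements in works.values() for element in elements)
for element, count in freq.most_common(3):
    print(f"{element}: {count}")
```

Ranking elements by frequency in this way is one straightforward route to the finding that "replacement or improvement", "convenience", and "modeling" dominate.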
Procedia PDF Downloads 440
2840 Sorption Properties of Biological Waste for Lead Ions from Aqueous Solutions
Authors: Lucia Rozumová, Ivo Šafařík, Jana Seidlerová, Pavel Kůs
Abstract:
Biosorption by biological waste materials from the agricultural industry can be a cost-effective technique for removing metal ions from wastewater. The performance of new biosorbent systems, consisting of waste matrices magnetically modified with iron oxide nanoparticles, was tested for the removal of lead ions from aqueous solution. The use of low-cost, eco-friendly adsorbents has been investigated as an alternative to current expensive methods. This article deals with the removal of metal ions from aqueous solutions by modified waste products: orange peels, sawdust, peanut husks, used tea leaves, and ground coffee sediment. The waste materials were suspended in methanol, and ferrofluid (magnetic iron oxide nanoparticles) was then added; this modification yields smart materials with new properties. The prepared materials were characterized by scanning electron microscopy and a specific surface area and pore size analyzer. The study focused on sorption and desorption properties, and the changes in the iron content of the magnetically modified materials after treatment were also observed. The adsorption process was modeled with adsorption isotherms. The results show that the magnetically modified materials remain stable during dynamic sorption and desorption at high adsorbed amounts of lead ions, indicating that these biological waste materials, as sorbents with new properties, are highly effective for the treatment of wastewater.
Keywords: biological waste, sorption, metal ions, ferrofluid
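The abstract mentions modeling the adsorption process with isotherms but does not name the model. A minimal sketch of fitting the widely used Langmuir isotherm in its linearized form is shown below; the equilibrium data (C_e, q_e) are hypothetical illustration values, not results from the study.

```python
import numpy as np

# Linearized Langmuir isotherm: C_e/q_e = C_e/q_max + 1/(q_max * K_L)
# Hypothetical equilibrium data for a lead(II) sorption experiment:
c_e = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # equilibrium concentration, mg/L
q_e = np.array([12.3, 21.6, 37.2, 49.5, 60.4, 66.1])    # adsorbed amount, mg/g

# Least-squares line through (C_e, C_e/q_e) gives the Langmuir parameters
slope, intercept = np.polyfit(c_e, c_e / q_e, 1)
q_max = 1.0 / slope          # maximum adsorption capacity (mg/g)
k_l = slope / intercept      # Langmuir affinity constant (L/mg)
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```

The fitted q_max allows direct comparison of the five waste sorbents, and a well-behaved linear plot supports monolayer (Langmuir-type) adsorption.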
Procedia PDF Downloads 141