Search results for: multiple layers nonwoven
4970 Environmental Performance Improvement of Additive Manufacturing Processes with Part Quality Point of View
Authors: Mazyar Yosofi, Olivier Kerbrat, Pascal Mognol
Abstract:
Life cycle assessment of additive manufacturing processes has evolved significantly in recent years. Most existing studies have focused mainly on energy consumption. New methodologies for life cycle inventory acquisition have since emerged in the literature, helping manufacturers account for all input and output flows during the manufacturing step of a product's life cycle. Indeed, the environmental analysis of the phenomena that occur during the manufacturing step of additive manufacturing processes is becoming well understood, and it is now possible to count and measure all inventory data accurately during manufacturing. Optimization of the environmental performance of these processes can therefore be considered. Environmental performance can be improved by varying process parameters; however, many of these parameters (such as manufacturing speed, the power of the energy source, and the quantity of support material) directly affect the mechanical properties, surface finish, and dimensional accuracy of a functional part. This study aims to improve the environmental performance of an additive manufacturing process without degrading part quality. For that purpose, the authors have developed a generic method applied to multiple parts made by additive manufacturing processes. First, a complete analysis of the process parameters is performed to identify which parameters affect only the environmental performance of the process. Then, multiple parts are manufactured while varying the identified parameters; the aim of this second step is to find the parameter values that significantly decrease the environmental impact of the process while keeping the part quality as desired. Finally, parts made with the initial parameters are compared against parts made with the modified parameters.
In this study, the major finding claimed by the authors is a reduction of the environmental impact of an additive manufacturing process while respecting three part-quality criteria: mechanical properties, dimensional accuracy, and surface roughness. Now that additive manufacturing processes can be considered technically mature, their environmental improvement can be pursued without compromising part properties. The first part of this study presents the methodology applied to multiple academic parts; the validity of the methodology is then demonstrated on functional parts.
Keywords: additive manufacturing, environmental impact, environmental improvement, mechanical properties
Procedia PDF Downloads 286
4969 Investigate the Effects of Geometrical Structure and Layer Orientation on Strength of 3D-FDM Rapid Prototyped Samples
Authors: Ahmed A.D. Sarhan, Chong Feng Duan, Mum Wai Yip, M. Sayuti
Abstract:
Rapid Prototyping (RP) technologies enable physical parts to be produced from various materials without depending on conventional tooling. Fused Deposition Modeling (FDM) is one of the most widely used RP processes at present. In this study, tensile strength and compressive strength are identified for ABS rapid-prototyped solid models with different sample structures and different layer orientations. The samples were fabricated on an FDM rapid prototyping machine in different layer orientations and with variations in internal geometrical structure. The 0° orientation, where layers were deposited along the length of the samples, displayed superior strength and impact resistance over all other orientations. The anisotropic properties were probably caused by weak interlayer bonding and interlayer porosity.
Keywords: building orientation, compression strength, rapid prototyping, tensile strength
Procedia PDF Downloads 694
4968 The Mediating Role of Store Personality in the Relationship Between Self-Congruity and Manifestations of Loyalty
Authors: María de los Ángeles Crespo López, Carmen García García
Abstract:
The highly competitive nature of today's globalised marketplace requires that brands and stores develop effective commercial strategies to ensure their economic survival. Maintaining the loyalty of existing customers is one key strategy that yields the best results. Although the relationship between consumers' self-congruity and their manifestations of loyalty towards a store has been investigated, the role of store personality in this relationship remains unclear. In this study, multiple parallel mediation analysis was used to examine the effect of Store Personality on the relationship between consumers' Self-Congruity and their Manifestations of Loyalty. For this purpose, 457 Spanish consumers of the Fnac store completed three self-report questionnaires assessing Store Personality, Self-Congruity, and Store Loyalty. The data were analyzed using the SPSS macro PROCESS. The results revealed that three dimensions of Store Personality, namely Exciting, Close and Competent Store, positively and significantly mediated the relationship between Self-Congruity and Manifestations of Loyalty, with the indirect effect of Competent Store being the greatest. This means that a consumer with higher Self-Congruity with the store will exhibit more Manifestations of Loyalty when the store is perceived as Exciting, Close or Competent. These findings suggest that more attention should be paid to the perceived personality of stores when developing marketing strategies to maintain or increase consumers' manifestations of loyalty towards stores.
Keywords: multiple parallel mediation, PROCESS, self-congruence, store loyalty, store personality
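The indirect effect at the core of the mediation analysis described above can be illustrated outside SPSS. Below is a minimal numpy sketch of a single-mediator model (the study itself used the PROCESS macro with three parallel mediators); the variable names and effect sizes are invented for illustration only:

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients; an intercept column is prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, slope_1, slope_2, ...]

rng = np.random.default_rng(0)
n = 2000
congruity = rng.normal(size=n)                                 # X: self-congruity
personality = 0.5 * congruity + 0.1 * rng.normal(size=n)       # M: path a = 0.5
loyalty = 0.8 * personality + 0.2 * congruity + 0.1 * rng.normal(size=n)  # path b = 0.8

a = ols(congruity, personality)[1]                             # X -> M
b = ols(np.column_stack([personality, congruity]), loyalty)[1] # M -> Y, controlling X
indirect = a * b                                               # indirect effect, about 0.4
```

With several mediators, the same two regressions are repeated per mediator and the resulting a*b products are the parallel indirect effects.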
Procedia PDF Downloads 156
4967 Investigating the Potential for Introduction of Warm Mix Asphalt in Kuwait Using the Volcanic Ash
Authors: H. Al-Baghli, F. Al-Asfour
Abstract:
The asphalt technology currently applied to Kuwait's road pavement infrastructure is hot mix asphalt (HMA), including both pen-grade and polymer modified bitumens (PMBs), produced and compacted at high temperatures ranging from 150 to 180 °C. There are currently no specifications for warm and cold mix asphalts in the asphalt standards and specifications of Kuwait's Ministry of Public Works (MPW). The conventional HMA process is energy intensive and directly responsible for the emission of greenhouse gases and other environmental hazards into the atmosphere, leading to significant environmental impacts and raising health risks for laborers on site. Warm mix asphalt (WMA) technology, a sustainable alternative preferred in multiple countries, has many environmental advantages because it requires production temperatures 20 to 40 °C lower than HMA. The temperature reduction achieved by WMA originates from multiple technologies, including foaming and chemical or organic additives, that aim to reduce the viscosity of the bitumen and improve mix workability. This paper presents a literature review of WMA technologies and techniques, followed by an experimental study comparing the results of WMA samples produced with a water-containing additive (foaming process) at different compaction temperatures against the volumetric properties of an HMA control mix designed in accordance with the new MPW specifications and guidelines.
Keywords: warm-mix asphalt, water-bearing additives, foaming-based process, chemical additives, organic additives
Procedia PDF Downloads 123
4966 The Relationship between Corporate Governance and Intellectual Capital Disclosure: Malaysian Evidence
Authors: Rabiaal Adawiyah Shazali, Corina Joseph
Abstract:
The disclosure of Intellectual Capital (IC) information is becoming increasingly vital in today's knowledge-based economy. Companies are advised by accounting bodies to enhance IC disclosure, which complements conventional financial disclosures. There are no accounting standards for Intellectual Capital Disclosure (ICD); the disclosure is therefore entirely voluntary. Hence, this study aims to investigate the extent of ICD and to examine the relationship between corporate governance and ICD in Malaysia. The study employed content analysis of the 2012 annual reports of the top 100 public listed companies in Malaysia. The uniqueness of this study lies in its underpinning theory: it applies institutional isomorphism theory to explain the effect of corporate governance attributes on ICD. To achieve the stated objective, multiple regression analysis was employed. The descriptive statistics indicate that public listed companies in Malaysia have increased their awareness of the importance of ICD. Furthermore, results from the multiple regression analysis confirmed that corporate governance affects a company's ICD: the frequency of audit committee meetings and the board size positively influenced the level of ICD. Findings from this study would provide an incentive for companies in Malaysia to enhance the disclosure of IC. In addition, this study would assist Bursa Malaysia and other regulatory bodies in developing a proper guideline for the disclosure of IC.
Keywords: annual report, content analysis, corporate governance, intellectual capital disclosure
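As a rough illustration of the checklist-based content analysis described above, the following sketch scores a report text against a small disclosure checklist. The checklist items and search terms are hypothetical and far shorter than any real ICD framework:

```python
# Hypothetical ICD checklist: item name -> search terms counted as disclosure.
ICD_ITEMS = {
    "patents": ["patent"],
    "employee_training": ["training", "staff development"],
    "customer_loyalty": ["customer loyalty", "customer satisfaction"],
    "brand": ["brand"],
}

def disclosure_index(report_text):
    """Fraction of checklist items mentioned at least once in the text."""
    text = report_text.lower()
    disclosed = [item for item, terms in ICD_ITEMS.items()
                 if any(t in text for t in terms)]
    return len(disclosed) / len(ICD_ITEMS), disclosed

score, items = disclosure_index(
    "Our brand value grew and we invested in staff development programmes.")
# score is 2/4 here: "brand" and "employee_training" are disclosed
```

In a study like the one above, the resulting index per company would then become the dependent variable in the regression on governance attributes.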
Procedia PDF Downloads 215
4965 Case of A Huge Retroperitoneal Abscess Spanning from the Diaphragm to the Pelvic Brim
Authors: Christopher Leung, Tony Kim, Rebecca Lendzion, Scott Mackenzie
Abstract:
Retroperitoneal abscesses are a rare but serious condition with often delayed diagnosis, non-specific symptoms, multiple causes, and high morbidity and mortality. With the advent of more readily available cross-sectional imaging, retroperitoneal abscesses are treated earlier and better outcomes are achieved. Occasionally, a retroperitoneal abscess presents as a huge collection, as in this 53-year-old male. With a background of chronic renal disease and left partial nephrectomy, this gentleman presented with a one-month history of left flank pain without any other symptoms, including fevers or abdominal pain. CT of the abdomen and pelvis demonstrated a huge retroperitoneal abscess spanning from the diaphragm, abutting the spleen, down to the iliopsoas muscle, abutting the iliac vessels at the pelvic brim. This large retroperitoneal abscess required open drainage as well as drainage by interventional radiology. A long course of intravenous antibiotics and multiple drainages were required. His blood culture and fluid culture grew Proteus species, suggesting a urinary source, likely his non-functioning kidney, which had undergone partial nephrectomy. Such a huge retroperitoneal abscess has rarely been described in the literature. The learning point is that the basic principles of source control and antibiotics are paramount in treating retroperitoneal abscesses regardless of the size of the abscess.
Keywords: retroperitoneal abscess, retroperitoneal mass, sepsis, genitourinary infection
Procedia PDF Downloads 220
4964 Knowledge Management and Administrative Effectiveness of Non-teaching Staff in Federal Universities in the South-West, Nigeria
Authors: Nathaniel Oladimeji Dixon, Adekemi Dorcas Fadun
Abstract:
Educational managers have observed a downward trend in the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. This is evident in the low-quality service delivery of administrators and the unaccomplished institutional goals and missions of higher education. Scholars have thus indicated the need for the deployment and adoption of a practice that encourages information collection and sharing among stakeholders with a view to improving service delivery and outcomes. This study examined the extent to which knowledge management correlated with the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. The study adopted the survey design. Three federal universities (the University of Ibadan, the Federal University of Agriculture, Abeokuta, and Obafemi Awolowo University) were purposively selected because administrative ineffectiveness was more pronounced among non-teaching staff in government-owned universities, and these federal universities were long established. Proportional stratified random sampling was used to select 1156 non-teaching staff across the three universities along the three existing layers of non-teaching staff: secretarial (senior=311; junior=224), non-secretarial (senior=147; junior=241) and technicians (senior=130; junior=103). A Knowledge Management Practices Questionnaire with four sub-scales: knowledge creation (α=0.72), knowledge utilization (α=0.76), knowledge sharing (α=0.79) and knowledge transfer (α=0.83); and an Administrative Effectiveness Questionnaire with four sub-scales: communication (α=0.84), decision implementation (α=0.75), service delivery (α=0.81) and interpersonal relationship (α=0.78) were used for data collection. Data were analyzed using descriptive statistics, Pearson product-moment correlation and multiple regression at the 0.05 level of significance, while qualitative data were content analyzed.
About 59.8% of the non-teaching staff exhibited a low level of knowledge management. The indices of administrative effectiveness of non-teaching staff were rated as follows: service delivery (82.0%), communication (78.0%), decision implementation (71.0%) and interpersonal relationship (68.0%). Knowledge management had significant relationships with the indices of administrative effectiveness: service delivery (r=0.82), communication (r=0.81), decision implementation (r=0.80) and interpersonal relationship (r=0.47). Knowledge management had a significant joint prediction on administrative effectiveness (F(4, 1151) = 0.79, R = 0.86), accounting for 73.0% of its variance. Knowledge sharing (β=0.38), knowledge transfer (β=0.26), knowledge utilization (β=0.22), and knowledge creation (β=0.06) made relatively significant contributions to administrative effectiveness. Lack of team spirit and withdrawal syndrome are the major perceived constraints to knowledge management practices among the non-teaching staff. Knowledge management positively influenced the administrative effectiveness of the non-teaching staff in federal universities in South-west Nigeria. There is a need to ensure that the non-teaching staff imbibe team spirit and embrace teamwork with a view to eliminating their withdrawal syndrome. In addition, knowledge management practices should be deployed into the administrative procedures of the university system.
Keywords: knowledge management, administrative effectiveness of non-teaching staff, federal universities in South-west Nigeria, knowledge creation, knowledge utilization, effective communication, decision implementation
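The correlation and joint-prediction statistics reported above can be reproduced mechanically on synthetic data. The numpy sketch below computes a Pearson r and the R² of a four-predictor regression; the simulated effect sizes merely echo the β values quoted in the abstract and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
km = rng.normal(size=(n, 4))                  # four KM sub-scale scores
betas = np.array([0.38, 0.26, 0.22, 0.06])    # weights echoing the reported betas
admin_eff = km @ betas + 0.3 * rng.normal(size=n)

# Pearson product-moment correlation between one sub-scale and the outcome
r = np.corrcoef(km[:, 0], admin_eff)[0, 1]

# Joint prediction: R squared from a four-predictor least-squares fit
X = np.column_stack([np.ones(n), km])
coef, *_ = np.linalg.lstsq(X, admin_eff, rcond=None)
resid = admin_eff - X @ coef
r2 = 1 - resid.var() / admin_eff.var()        # exact, since resid mean is 0
```

The same two computations, applied to the questionnaire scores, yield the r values and the 73% explained variance reported in the study.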
Procedia PDF Downloads 99
4963 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships
Authors: Vijaya Dixit, Aasheesh Dixit
Abstract:
The shipbuilding industry operates in an Engineer-Procure-Construct (EPC) context. The product mix of a shipyard comprises various types of ships, such as bulk carriers, tankers, barges, coast guard vessels, and submarines. Each order is unique based on the type of ship and the customized requirements engineered into the product from the design stage onwards. Thus, to execute every new project, a shipyard must upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects and contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both kinds of learning and incorporate the learning curve effect into project scheduling and materials procurement to improve project performance. Extant literature supports the existence of such learning in organizations. In shipbuilding, there are sequences of similar activities that are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks that are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships; on the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship is compressed.
Thus, the material requirement schedule of every subsequent ship differs from that of its predecessor. As more ships are constructed, compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for the manufacture of multiple sister ships. It incorporates the learning curve effect on progressively compressed material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of various stages of the project. The activity durations and item lead times are not crisp and are available in the form of probability distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides ordering dates and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the learning curve effect in materials management decisions. This analysis will help materials managers gain insight into when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when to practice distinct procurement for individual ships.
Keywords: learning curve, materials management, shipbuilding, sister ships
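The schedule compression described above is commonly modeled with Wright's learning curve, under which each doubling of the unit number multiplies the activity time by the learning rate. A minimal sketch (the rates and times are illustrative, not shipyard data):

```python
import math

def unit_time(t_first, unit_number, learning_rate=0.85):
    """Wright's model: direct time for the n-th sister ship's activity."""
    b = math.log(learning_rate, 2)   # exponent b = log2(learning rate), negative
    return t_first * unit_number ** b

# With an 80% learning rate, every doubling cuts the activity time by 20%.
t1 = 100.0
t2 = unit_time(t1, 2, learning_rate=0.80)  # second ship: about 80
t4 = unit_time(t1, 4, learning_rate=0.80)  # fourth ship: about 64
```

Applying this per activity shifts each successive ship's material requirement dates earlier, which is what creates the batching opportunity analyzed in the abstract.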
Procedia PDF Downloads 501
4962 An Overbooking Model for Car Rental Service with Different Types of Cars
Authors: Naragain Phumchusri, Kittitach Pongpairoj
Abstract:
Overbooking is a very useful revenue management technique that can help reduce the costs caused by either undersales or oversales. In this paper, we propose an overbooking model for a car rental service with two types of cars that minimizes total cost. With two types of cars, there is the possibility of upgrading a lower type to an upper type, which makes the model more complex than the single-car-type scenario. We have found that convexity can be proved in this case. Sensitivity analysis is conducted to observe the effects of the relevant parameters on the optimal solution. A model simplification is proposed using multiple linear regression analysis, which can estimate the optimal overbooking level from appropriate independent variables. The results show that the overbooking level from the multiple linear regression model is relatively close to the optimal solution (with an adjusted R-squared value of at least 72.8%). To evaluate the performance of the proposed model, the total cost was compared with the case where the decision maker uses a naïve method to set the overbooking level. The total cost from the optimal solution is, on average, only 0.5 to 1 percent lower than the cost from the regression model, while it is approximately 67% lower than the cost obtained by the naïve method. This indicates that the proposed regression-based simplification performs effectively in estimating the overbooking level.
Keywords: overbooking, car rental industry, revenue management, stochastic model
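The undersale/oversale trade-off can be made concrete with a single-car-type toy model (the paper's two-type model with upgrades is more involved). The sketch below finds the booking limit minimizing expected cost under binomial show-ups; all parameter values are illustrative:

```python
import math

def expected_cost(booking_limit, capacity, p_show, c_over, c_under):
    """Expected cost of accepting `booking_limit` bookings for `capacity` cars,
    when each booked customer shows up independently with probability p_show."""
    cost = 0.0
    for k in range(booking_limit + 1):   # k customers show up: Binomial pmf
        prob = (math.comb(booking_limit, k)
                * p_show**k * (1 - p_show)**(booking_limit - k))
        cost += prob * (c_over * max(k - capacity, 0)      # oversale penalty
                        + c_under * max(capacity - k, 0))  # idle-car cost
    return cost

def best_limit(capacity, p_show, c_over, c_under, search_up_to=60):
    """Grid search for the cost-minimizing booking limit."""
    return min(range(capacity, search_up_to),
               key=lambda n: expected_cost(n, capacity, p_show, c_over, c_under))

limit = best_limit(capacity=20, p_show=0.8, c_over=150, c_under=40)
```

With no overbooking (limit = 20) the expected cost here is purely from idle cars; a modest amount of overbooking beats it even with the high oversale penalty.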
Procedia PDF Downloads 170
4961 Classification of Land Cover Usage from Satellite Images Using Deep Learning Algorithms
Authors: Shaik Ayesha Fathima, Shaik Noor Jahan, Duvvada Rajeswara Rao
Abstract:
Earth's environment and its evolution can be observed through satellite images in near real time. Remote sensing data from satellite imagery provide crucial information for a variety of applications, including image fusion, change detection, land cover classification, agriculture, mining, disaster mitigation, and climate change monitoring. The objective of this project is to propose a method for classifying satellite images into multiple predefined land cover classes. The proposed approach involves collecting data in image format, pre-processing the data, feeding the processed data into the proposed algorithm, and analyzing the obtained result. Some of the algorithms used in satellite imagery classification are U-Net, Random Forest, DeepLabv3, CNN, ANN, ResNet, etc. In this project, we use the DeepLabv3 (atrous convolution) algorithm for land cover classification, with the DeepGlobe Land Cover Classification dataset. DeepLabv3 is a semantic segmentation system that uses atrous convolution to capture multi-scale context by adopting multiple atrous rates, in cascade or in parallel, to determine the scale of segments.
Keywords: area calculation, atrous convolution, DeepGlobe land cover classification, DeepLabv3, land cover classification, ResNet-50
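Atrous (dilated) convolution, the mechanism behind DeepLabv3 mentioned above, enlarges the receptive field without adding weights by spacing the kernel taps `dilation` samples apart. A minimal 1-D numpy sketch of the idea (DeepLabv3 itself applies 2-D atrous convolutions inside a deep network):

```python
import numpy as np

def atrous_conv1d(x, w, dilation=1):
    """'Valid' 1-D convolution with holes: taps are `dilation` samples apart."""
    k = len(w)
    span = (k - 1) * dilation          # receptive field minus one
    return np.array([
        sum(x[i + j * dilation] * w[j] for j in range(k))
        for i in range(len(x) - span)
    ])

x = np.arange(1, 8, dtype=float)       # [1, 2, 3, 4, 5, 6, 7]
kernel = np.ones(3)
out_d1 = atrous_conv1d(x, kernel, dilation=1)  # receptive field 3
out_d2 = atrous_conv1d(x, kernel, dilation=2)  # receptive field 5, same 3 weights
```

Running several such convolutions with different dilation rates in parallel, then merging the outputs, is the multi-scale context capture the abstract describes.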
Procedia PDF Downloads 137
4960 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids
Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout
Abstract:
Graphene emerges as a promising material for applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology owing to the enriched physics and novel enhanced properties they present. There are several advantages to using 2D nanomaterials in field emission based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength, and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma treatment (CO2, O2, Ar and N2). The plasma-treated multilayer graphene shows enhanced field emission behavior with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm2 is found to be 3.5, 2.3 and 2 V/μm for WS2, RGO and the WS2/RGO composite, respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density, ~800 μA/cm2, is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite.
Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, where graphene-like states populate the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 μA/cm2 is significantly lower (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/μm) compared to pristine SnS2 (4.8 V/μm) nanosheets. The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite), calculated from Fowler-Nordheim (FN) plots, indicates emission from the nanometric geometry of the emitter. The plot of field emission current versus time shows good overall emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of the SnS2/RGO composite arise from a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials thus emerge as potential candidates for future field emission applications.
Keywords: graphene, layered material, field emission, plasma, doping
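The field enhancement factor β quoted above is conventionally extracted from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E. A numpy sketch on ideal synthetic data (the work function and β value are illustrative, not the measured ones):

```python
import numpy as np

B = 6.83e3          # FN constant in V·eV^(-3/2)/µm, for E expressed in V/µm
phi = 5.0           # assumed work function in eV (illustrative)
beta_true = 3000.0  # illustrative enhancement factor to recover

E = np.linspace(2.0, 5.0, 30)                                # applied field, V/µm
lnJ_over_E2 = 10.0 - B * phi**1.5 / beta_true * (1.0 / E)    # ideal FN straight line

# Fit the FN plot: slope = -B * phi^(3/2) / beta, so beta = -B * phi^(3/2) / slope
slope, _ = np.polyfit(1.0 / E, lnJ_over_E2, 1)
beta_est = -B * phi**1.5 / slope
```

With experimental data the same fit is applied to the measured J(E) curve, and the straightness of the plot is itself evidence of field emission.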
Procedia PDF Downloads 359
4959 The Impact of Missense Mutation in Phosphatidylinositol Glycan Class A Associated to Paroxysmal Nocturnal Hemoglobinuria and Multiple Congenital Anomalies-Hypotonia-Seizures Syndrome 2: A Computational Study
Authors: Ashish Kumar Agrahari, Amit Kumar
Abstract:
Paroxysmal nocturnal hemoglobinuria (PNH) is an acquired clonal blood disorder that manifests with hemolytic anemia, thrombosis, and peripheral blood cytopenias. The disease is caused by the deficiency of two glycosylphosphatidylinositol (GPI)-anchored proteins (CD55 and CD59) in the hemopoietic stem cells. The deficiency of GPI-anchored proteins has been associated with somatic mutations in phosphatidylinositol glycan class A (PIGA); however, PIGA mutations that do not cause PNH are associated with multiple congenital anomalies-hypotonia-seizures syndrome 2 (MCAHS2). To the best of our knowledge, no computational study has been performed to explore the atomistic-level impact of PIGA mutations on the structure and dynamics of the protein. In the current work, we are mainly interested in gaining insight into the molecular mechanism of PIGA mutations. In the initial step, we screened the most pathogenic mutations from the pool of publicly available mutations. Further, to get a better understanding, the pathogenic mutations were mapped to the modeled structure and subjected to 50 ns molecular dynamics simulation. Our computational study suggests that four mutations are highly likely to alter the structural conformation and stability of the PIGA protein, which illustrates their association with the PNH and MCAHS2 phenotypes.
Keywords: homology modeling, molecular dynamics simulation, missense mutations, PNH, MCAHS2, PIGA
Procedia PDF Downloads 143
4958 Semantic Differences between Bug Labeling of Different Repositories via Machine Learning
Authors: Pooja Khanal, Huaming Zhang
Abstract:
Labeling of issues/bugs, also known as bug classification, plays a vital role in software engineering. Some common labels/classes of bugs are 'User Interface', 'Security', and 'API'. Most of the time, when reporters report a bug, they try to assign it some predefined label. Those issues are reported for a project, and each project is a repository on GitHub/GitLab that contains multiple issues. There are many software project repositories, ranging from individual to commercial projects. The labels assigned in different repositories may depend on various factors, such as human instinct, generalization of labels, and the label assignment policy followed by the reporter. While one reporter may instinctively give an issue a certain label, another person reporting the same issue may label it differently. Thus, it is not known mathematically whether a label in one repository is similar to, or different from, the same label in another repository. Hence, the primary goal of this research is to find the semantic differences between bug labeling across repositories via machine learning. Independent optimal classifiers for individual repositories are built first, using text features from the reported issues; an optimal classifier may be a combination of multiple classifiers stacked together. These classifiers are then used to cross-test other repositories, allowing the similarity of labels to be deduced mathematically. The product of this ongoing research includes a formalized open-source database of GitHub issues that is used to deduce the similarity of labels across repositories.
Keywords: bug classification, bug labels, GitHub issues, semantic differences
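A minimal per-repository classifier can be sketched with a multinomial naive Bayes over issue text; training it on one repository and predicting issues from another is the cross-testing idea described above. The repositories, issues, and labels below are invented toy data, and the study's actual classifiers are stacked combinations rather than this single model:

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (issue_text, label). Returns a multinomial NB model."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(model, text):
    """Most probable label under the naive Bayes log-likelihood."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        n_words = sum(word_counts[label].values())
        lp = math.log(label_counts[label] / total)
        for w in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

repo_a = [  # hypothetical issues from one repository
    ("button layout breaks on resize", "User Interface"),
    ("dark theme colors wrong on dialog", "User Interface"),
    ("token leak in auth endpoint", "Security"),
    ("password stored without hashing", "Security"),
]
model = train(repo_a)
label = predict(model, "login password token exposed")  # issue from another repo
```

Cross-testing then compares, per label, how often repository A's classifier agrees with repository B's ground-truth labels, which quantifies the semantic overlap of the two labeling conventions.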
Procedia PDF Downloads 198
4957 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks
Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz
Abstract:
Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn increase the signalling overhead. To guarantee service continuity, minimize unnecessary handovers, and reduce signalling overhead in heterogeneous networks, it is essential to model the handover decision problem properly. In this paper, we model the handover decision using a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). We propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting that combines entropy and standard deviation, and a hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weightings on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, than existing methods.
Keywords: handover, HetNets, interference, MADM, small cells, TOPSIS, weight
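The hybrid-weighted TOPSIS step can be sketched directly in numpy: entropy and standard-deviation weights are blended by a control parameter alpha, then the standard TOPSIS closeness coefficient ranks candidate cells. The decision matrix below is a toy example, and all criteria are assumed benefit-type (the paper's attribute set and parameter values are not reproduced here):

```python
import numpy as np

def entropy_weights(D):
    """Shannon-entropy objective weights; columns with more spread weigh more."""
    P = D / D.sum(axis=0)
    k = -1.0 / np.log(D.shape[0])
    e = k * (P * np.log(P)).sum(axis=0)   # entropy per criterion, in [0, 1]
    d = 1 - e
    return d / d.sum()

def std_weights(D):
    """Standard-deviation objective weights."""
    s = D.std(axis=0)
    return s / s.sum()

def hybrid_topsis(D, alpha=0.5):
    """alpha blends the two weightings; all criteria treated as benefit-type."""
    w = alpha * entropy_weights(D) + (1 - alpha) * std_weights(D)
    R = D / np.sqrt((D ** 2).sum(axis=0))  # vector normalisation
    V = R * w
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)         # closeness coefficient; higher is better

# rows: candidate cells; columns: hypothetical benefit criteria
# (e.g. signal quality, available bandwidth, inverse load)
D = np.array([[0.9, 0.9, 0.7],
              [0.6, 0.5, 0.4],
              [0.3, 0.8, 0.6]])
scores = hybrid_topsis(D)
best = int(np.argmax(scores))   # handover target with the highest closeness
```

In a handover controller, this ranking would be recomputed per decision epoch over the currently measurable candidate cells.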
Procedia PDF Downloads 148
4956 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable
Authors: Xinyuan Y. Song, Kai Kang
Abstract:
Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption of conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach that deals with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study on the Alzheimer's Disease Neuroimaging Initiative is presented.
Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data
Procedia PDF Downloads 142
4955 Impact of Boundary Conditions on the Behavior of Thin-Walled Laminated Column with L-Profile under Uniform Shortening
Authors: Jaroslaw Gawryluk, Andrzej Teter
Abstract:
Simply supported angle columns subjected to uniform shortening are tested. The experimental studies are conducted on a testing machine using the Aramis digital image correlation system and an acoustic emission system. The laminate samples are subjected to axial uniform shortening. The tested columns are loaded with force values from zero up to the maximal load destroying the L-shaped column, which makes it possible to observe the post-buckling behavior of the column until its collapse. Laboratory tests are performed at a constant cross-bar velocity of 1 mm/min. In order to eliminate stress concentrations between the sample and the supports, flexible pads are used. The analyzed samples are made of carbon-epoxy laminate using the autoclave method. The configuration of the laminate layers is [60,0₂,-60₂,60₃,-60₂,0₃,-60₂,0,60₂]T, where the 0 direction is along the length of the profile. The material parameters of the laminate are: Young's modulus along the fiber direction, 170 GPa; Young's modulus transverse to the fiber direction, 7.6 GPa; in-plane shear modulus, 3.52 GPa; in-plane Poisson's ratio, 0.36. The dimensions of all columns are: length 300 mm, thickness 0.81 mm, flange width 40 mm. Next, two numerical models of the column, with and without flexible pads, are developed using the finite element method in Abaqus software. The L-profile laminate column is modeled using S8R shell elements, with the layup-ply technique used to define the sequence of the laminate layers, whereas the model of the grips is made of R3D4 discrete rigid elements and the flexible pad consists of C3D20R solid elements. In order to estimate the moment of first laminate layer damage, the following initiation criteria were applied: the maximum stress criterion and the Tsai-Hill, Tsai-Wu, Azzi-Tsai-Hill, and Hashin criteria. The best agreement between results was observed for the Hashin criterion. It was found that the use of the pad in the numerical model significantly influences the damage mechanism.
The model without pads exhibited much higher stiffness, as evidenced by a greater bifurcation load and a greater damage initiation load for all analyzed criteria, a smaller shortening, and less deflection at the column mid-length than the model with flexible pads. Acknowledgment: The research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
Keywords: angle column, compression, experiment, FEM
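To show how a ply-level damage initiation check of the kind used above can be evaluated, here is a minimal sketch of the plane-stress Hashin criteria (fiber and matrix modes). The ply stresses and strength values below are illustrative assumptions, not the laminate data quoted in the abstract.

```python
def hashin_2d(s11, s22, t12, XT, XC, YT, YC, S12, S23):
    """Plane-stress Hashin initiation indices; damage initiates when any
    index reaches 1. Strengths passed by the caller are assumed values."""
    if s11 >= 0:  # fiber tension
        fiber = (s11 / XT) ** 2 + (t12 / S12) ** 2
    else:         # fiber compression
        fiber = (s11 / XC) ** 2
    if s22 >= 0:  # matrix tension
        matrix = (s22 / YT) ** 2 + (t12 / S12) ** 2
    else:         # matrix compression
        matrix = ((s22 / (2 * S23)) ** 2
                  + ((YC / (2 * S23)) ** 2 - 1) * (s22 / YC)
                  + (t12 / S12) ** 2)
    return {"fiber": fiber, "matrix": matrix}

# Illustrative ply stresses (MPa) and assumed strengths for a carbon-epoxy ply
idx = hashin_2d(s11=900.0, s22=20.0, t12=40.0,
                XT=2000.0, XC=1200.0, YT=50.0, YC=200.0, S12=80.0, S23=60.0)
failed = any(v >= 1.0 for v in idx.values())
```

In an FE analysis, such a check is applied at each integration point of every ply to locate the first-ply damage initiation load.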
Procedia PDF Downloads 205
4954 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics
Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí
Abstract:
A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest for achieving efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. 3D fabric reinforcements are of particular interest because of the versatility of the weaving pattern, with binder yarn and in-plane yarn arrangements, to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict the permeability from the available pore spaces between yarns, unit-cell-based computational fluid dynamics models have used the Stokes-Darcy model. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra-yarn and inter-yarn channels, meaning there is no dead-end flow between the mesopores in the inter-yarn space and the micropores inside the yarn. Several studies have employed the Brinkman equation to account for the flow through dual-scale porous reinforcement when estimating permeability. Furthermore, to reduce the computational effort of dual-scale flow simulation, a scale separation criterion based on the ratio of yarn permeability to yarn spacing was also proposed to delimit the dual-scale and negligible micro-scale flow regimes for the prediction of mesoscale permeability. In the present work, the influence of intra-yarn permeability on the mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in a plain weave, as well as of the binder yarn position and the number of in-plane yarn layers in a 3D woven fabric.
The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented in the open-source software TexGen. Additionally, a scale separation criterion has been established for various yarn permeability configurations of the 3D fabric, with both isotropic and anisotropic yarn permeabilities obtained from Gebart's model. It was observed that the mesoscale permeability Kxx varied within 30% when isotropic porous yarns were considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding
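For a concrete sense of the intra-yarn permeabilities referred to above, the sketch below evaluates Gebart's analytical model for flow along and across an idealized fibre array. The constants follow the commonly cited quadratic and hexagonal packing forms of Gebart (1992), and the fibre radius and volume fraction are assumed values; check against the original before relying on the numbers.

```python
import math

def gebart_permeability(R, Vf, packing="quadratic"):
    """Axial and transverse intra-yarn permeability (m^2) from Gebart's model.

    R  : fibre radius (m)
    Vf : fibre volume fraction inside the yarn"""
    if packing == "quadratic":
        c, C1, Vf_max = 57.0, 16.0 / (9.0 * math.pi * math.sqrt(2.0)), math.pi / 4.0
    else:  # hexagonal packing
        c, C1, Vf_max = 53.0, 16.0 / (9.0 * math.pi * math.sqrt(6.0)), math.pi / (2.0 * math.sqrt(3.0))
    K_along = (8.0 * R ** 2 / c) * (1.0 - Vf) ** 3 / Vf ** 2   # along fibres
    K_perp = C1 * (math.sqrt(Vf_max / Vf) - 1.0) ** 2.5 * R ** 2  # across fibres
    return K_along, K_perp

# Carbon fibre of radius 3.5 um at 60% intra-yarn fibre fraction (assumed)
K_along, K_perp = gebart_permeability(R=3.5e-6, Vf=0.6)
anisotropy = K_along / K_perp
```

The strong anisotropy (axial permeability several times the transverse one) is what motivates testing both isotropic and anisotropic yarn assumptions in the mesoscale model.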
Procedia PDF Downloads 94
4953 Geometrically Linear Symmetric Free Vibration Analysis of Sandwich Beam
Authors: Ibnorachid Zakaria, El Bikri Khalid, Benamar Rhali, Farah Abdoun
Abstract:
The aim of the present work is to study the linear free symmetric vibration of a three-layer sandwich beam using the energy method. The zigzag model is used to describe the displacement field. The theoretical model assumes that the top and bottom layers behave like Euler-Bernoulli beams, while the core layer behaves like a Timoshenko beam. Based on Hamilton's principle, the governing equation of motion of the sandwich beam is obtained in order to calculate the linear frequency parameters for clamped-clamped and simply supported beams. The effects of material properties and geometric parameters on the natural frequencies are also investigated.
Keywords: linear vibration, sandwich, shear deformation, Timoshenko zig-zag model
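As a baseline for the boundary conditions mentioned above, the sketch below computes linear flexural frequencies of a uniform Euler-Bernoulli beam for the simply supported and clamped-clamped cases. This is a single-layer stand-in, not the zig-zag sandwich model of the paper, and the beam properties are assumed for illustration.

```python
import math

def beam_frequencies_hz(E, I, rho, A, L, n_modes=3, bc="ss"):
    """Natural frequencies f_n = (beta_n L)^2 / (2 pi L^2) * sqrt(EI / (rho A))."""
    if bc == "ss":
        # simply supported: beta_n * L = n * pi
        betaL = [n * math.pi for n in range(1, n_modes + 1)]
    else:
        # clamped-clamped: roots of cos(x) * cosh(x) = 1
        betaL = [4.73004, 7.85320, 10.99561][:n_modes]
    c = math.sqrt(E * I / (rho * A))
    return [(bl / L) ** 2 * c / (2 * math.pi) for bl in betaL]

# Assumed aluminium beam, 10 mm square cross-section, 0.5 m span (SI units)
f_ss = beam_frequencies_hz(E=70e9, I=8.33e-10, rho=2700.0, A=1e-4, L=0.5, bc="ss")
f_cc = beam_frequencies_hz(E=70e9, I=8.33e-10, rho=2700.0, A=1e-4, L=0.5, bc="cc")
```

Clamping both ends stiffens the beam, so every clamped-clamped frequency lies above its simply supported counterpart, which is the qualitative trend the parametric study probes.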
Procedia PDF Downloads 469
4952 Effect of Climate Variability on Honeybee's Production in Ondo State, Nigeria
Authors: Justin Orimisan Ijigbade
Abstract:
The study was conducted to assess the effect of climate variability on honeybee production in Ondo State, Nigeria. A multistage sampling technique was employed to collect data from 60 beekeepers across six Local Government Areas in Ondo State. The data collected were subjected to descriptive statistics and multiple regression analysis. The results showed that 93.33% of the respondents were male and 80% were above 40 years of age. Most of the respondents (96.67%) had formal education, and 90% produced honey for commercial purposes. The results revealed that 90% of the respondents admitted that low temperature, resulting from long periods of rainfall, affected the foraging efficiency of the worker bees; 73.33% claimed that long periods of low humidity resulted in a low level of nectar flow; and 70% submitted that high temperature resulted in an improper composition of workers, drones, and the queen in the hive colony. The multiple regression results showed that beekeepers' experience, educational level, access to climate information, temperature, and rainfall were the main factors affecting honeybee production in the study area. Therefore, beekeepers should be given more education on climate variability and its adaptive strategies to ensure better honeybee production in the study area.
Keywords: climate variability, honeybee production, humidity, rainfall, temperature
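A multiple regression of the kind used above can be sketched as an ordinary least squares fit. The data below are synthetic (generated to mimic five survey factors), so the coefficients illustrate the mechanics only, not the study's estimates.

```python
import numpy as np

# Synthetic stand-in for the survey: 60 beekeepers, five factors
# (experience, education, access to climate info, temperature, rainfall).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
beta_true = np.array([0.8, 0.5, 1.2, -0.9, 0.4])   # assumed "true" effects
y = 10.0 + X @ beta_true + rng.normal(scale=0.1, size=60)  # honey output

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(60), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, slopes = coef[0], coef[1:]
```

With low noise, the fitted slopes recover the assumed effects closely; a real analysis would also report standard errors and significance tests for each factor.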
Procedia PDF Downloads 271
4951 A Framework for Designing Complex Product-Service Systems with a Multi-Domain Matrix
Authors: Yoonjung An, Yongtae Park
Abstract:
Offering a Product-Service System (PSS) is a well-accepted strategy that companies may adopt to provide a set of systemic solutions to customers. PSSs were initially provided in a simple form but now take diversified and complex forms involving multiple services, products, and technologies. With the growing interest in the PSS, frameworks for PSS development have been introduced by many researchers. However, most of the existing frameworks fail to examine the various relations existing in a complex PSS. Since designing a complex PSS involves the full integration of multiple products and services, it is essential to identify not only product-service relations but also product-product and service-service relations. It is equally important to specify how they are related, for a better understanding of the system. Moreover, as customers tend to view their purchases from a more holistic perspective, a PSS should be developed based on the whole system's requirements, rather than focusing only on the product requirements or the service requirements. Thus, we propose a framework to develop a complex PSS that is fully coordinated with the requirements of both worlds. Specifically, our approach adopts a multi-domain matrix (MDM). An MDM identifies not only inter-domain relations but also intra-domain relations, which helps to design a PSS that includes highly desired and closely related core functions and features. The various dependency types and rating schemes proposed in our approach also support the integration process.
Keywords: inter-domain relations, intra-domain relations, multi-domain matrix, product-service system design
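A multi-domain matrix can be pictured as a block matrix whose diagonal blocks hold intra-domain relations and whose off-diagonal blocks hold inter-domain relations. The toy example below, with made-up elements and ratings, shows how the blocks are separated; it is not the framework's own rating scheme.

```python
import numpy as np

# Toy MDM over two products (P1, P2) and two services (S1, S2).
# Diagonal blocks: product-product and service-service relations.
# Off-diagonal blocks: product-service relations. Ratings 0-3 are assumed.
elements = ["P1", "P2", "S1", "S2"]
MDM = np.array([
    [0, 2, 3, 0],   # P1: relates to P2 (2), depends strongly on S1 (3)
    [2, 0, 0, 1],
    [3, 0, 0, 2],   # S1: relates to S2 (2)
    [0, 1, 2, 0],
])

n_products = 2
intra_p = MDM[:n_products, :n_products]     # product-product block
intra_s = MDM[n_products:, n_products:]     # service-service block
inter = MDM[:n_products, n_products:]       # product-service block
coupling = int(inter.sum())                 # crude inter-domain coupling score
```

Reading the blocks separately lets a designer see which product-service pairs are tightly coupled (e.g. P1 and S1 here) and therefore belong to the same core solution cluster.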
Procedia PDF Downloads 638
4950 Research of Interaction between Layers of Compressed Composite Columns
Authors: Daumantas Zidanavicius
Abstract:
In order to investigate the bond between concrete and steel in circular steel tube columns filled with concrete, seven series of specimens with the same geometrical parameters but different concrete properties were tested. Two types of specimens were used: in the first, expansive additives were added to the concrete mixture to increase the internal forces, while in the second, mechanical anchorage components were used. All seven series of short columns were modeled by FEM and tested experimentally. In this work, particular attention was paid to the bond-slip models between steel and concrete. The results show that concrete additives increase the bond strength up to two times, and mechanical anchorage up to six times, compared with control specimens without additives or anchorage.
Keywords: concrete filled steel tube, push-out test, bond slip relationship, bond stress distribution
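In a push-out test, the average bond stress is commonly taken as the ultimate load divided by the steel-concrete contact area. The sketch below applies that relation; the loads and dimensions are assumed numbers chosen only to reproduce the roughly two-fold gain reported above, not the specimen data.

```python
import math

def average_bond_stress(P_max, d_core, embed_length):
    """Average steel-concrete bond stress (Pa): ultimate push-out load
    divided by the tube-core contact area pi * d * L."""
    return P_max / (math.pi * d_core * embed_length)

# Illustrative control vs. expansive-additive specimens (assumed values)
tau_control = average_bond_stress(P_max=120e3, d_core=0.15, embed_length=0.45)
tau_expansive = average_bond_stress(P_max=240e3, d_core=0.15, embed_length=0.45)
gain = tau_expansive / tau_control   # ratio of bond strengths
```

Since geometry is identical for both specimens, the bond strength ratio reduces to the ratio of ultimate loads, which is why doubling the load doubles the average bond stress.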
Procedia PDF Downloads 123
4949 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT
Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez
Abstract:
Reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals enhances our ability to study and preserve biodiversity. The fusion of infrared sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as the identification and counting of animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be retrieved automatically using state-of-the-art deep learning methods. Another major challenge that ecologists face is counting a single animal multiple times when it reappears in other images taken by the same or other camera traps. Nonetheless, such information can be extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple-count problem, we have designed a meshed network of camera traps that share the captured images along with timestamps, cumulative counts, and the dimensions of the animal. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. The method has been validated in the field and can easily be extended to other applications focused on wildlife monitoring and management, where the traditional way of monitoring is expensive and time-consuming.
Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management
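The multiple-count problem described above can be sketched as time-window deduplication over detections shared by the meshed camera traps. The matching key below is species alone and the 10-minute window is an assumed value; the paper's system also exchanges animal dimensions, which a real deployment would fold into the match.

```python
from datetime import datetime, timedelta

def deduplicated_count(detections, window_minutes=10):
    """Count sightings while merging detections of the same individual seen
    by several cameras within a short time window (simplifying assumption:
    detections of one species close in time are the same animal)."""
    detections = sorted(detections, key=lambda d: d["time"])
    count, last_seen = 0, {}
    for det in detections:
        prev = last_seen.get(det["species"])
        if prev is None or det["time"] - prev > timedelta(minutes=window_minutes):
            count += 1                      # new sighting
        last_seen[det["species"]] = det["time"]
    return count

t0 = datetime(2020, 1, 1, 6, 0)
events = [
    {"species": "fox", "camera": "A", "time": t0},
    {"species": "fox", "camera": "B", "time": t0 + timedelta(minutes=3)},  # same fox
    {"species": "fox", "camera": "A", "time": t0 + timedelta(hours=2)},   # later sighting
]
n = deduplicated_count(events)
```

Here the two detections three minutes apart on neighbouring cameras collapse into one sighting, while the detection two hours later counts separately.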
Procedia PDF Downloads 137
4948 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on encoding schemes (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) applied to low-level image features (e.g. SIFT, HoG). Compared with these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNN) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects differing in size, category, layout, number, and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets, respectively, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the CNNs at multiple scales, it is found that each CNN works best in a different scale range. A scale-wise CNN adaptation is therefore reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector using a post-processing step called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different number of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the accuracy from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared with results at a single scale). This result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
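The scale-wise normalization step can be sketched as normalizing each scale's Fisher vector independently before pooling, so that scales yielding more local features do not dominate the fused representation. The normalization choice (L2) and the toy vectors below are assumptions for illustration; the authors' exact scheme may differ.

```python
import numpy as np

def scale_wise_normalize(fv_per_scale):
    """L2-normalize the Fisher vector of each scale separately, then
    average-pool across scales into a single representation."""
    normed = [fv / (np.linalg.norm(fv) + 1e-12) for fv in fv_per_scale]
    return np.mean(normed, axis=0)

# Toy Fisher vectors from three scales with very different magnitudes
fvs = [np.array([3.0, 4.0]), np.array([300.0, 400.0]), np.array([0.3, 0.4])]
merged = scale_wise_normalize(fvs)
```

Without the per-scale normalization, the middle scale would dominate the average by two orders of magnitude; with it, all three scales contribute equally to the merged vector.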
Procedia PDF Downloads 331
4947 Targeting Calcium Dysregulation for Treatment of Dementia in Alzheimer's Disease
Authors: Huafeng Wei
Abstract:
Alzheimer's disease (AD) is the leading cause of dementia worldwide, and there are no effective treatments. Increasing evidence suggests that disruption of intracellular calcium homeostasis, primarily a pathological elevation of cytosolic and mitochondrial calcium concentrations together with a reduction of endoplasmic reticulum (ER) calcium concentration, plays critical upstream roles in multiple pathologies and the associated neurodegeneration, impaired neurogenesis, and synaptic and cognitive dysfunction in various AD preclinical studies. The most recent Food and Drug Administration (FDA)-approved drug for AD dementia, memantine, exerts its therapeutic effects by ameliorating N-methyl-D-aspartate (NMDA) glutamate receptor overactivation and the subsequent calcium dysregulation. More research is needed to develop other drugs targeting calcium dysregulation at multiple pharmacological sites of action for future effective AD dementia treatment. In particular, calcium channel blockers used to treat hypertension, and dantrolene, used to treat muscle spasm and malignant hyperthermia, could be repurposed for this purpose. In our own research, intranasal administration of dantrolene significantly increased its brain concentration and duration, rendering it a more effective therapeutic drug with fewer side effects for chronic AD dementia treatment. This review summarizes the progress of studies repurposing drugs that target calcium dysregulation as potentially disease-modifying treatments for AD dementia.
Keywords: alzheimer, calcium, cognitive dysfunction, dementia, neurodegeneration, neurogenesis
Procedia PDF Downloads 180
4946 Resistance Spot Welding of Boron Steel 22MnB5 with Complex Welding Programs
Authors: Szymon Kowieski, Zygmunt Mikno
Abstract:
The study involved the optimization of process parameters during resistance spot welding of Al-coated martensitic boron steel 22MnB5, used in hot stamping, performed with a programme featuring a multiple-current-impulse mode and a programme with variable electrode force. The aim of this research was to determine the possibilities for increasing welded joint strength and to identify the extension of the welding lobe. The process parameters were adjusted on the basis of welding process simulation and confronted with experimental data. 22MnB5 steel is known for its tendency toward high hardness in the weld nugget, often leading to interfacial failures (observed in the tests related to this study). In addition, during resistance spot welding, many production-related factors can affect process stability, e.g. welding lobe narrowing, and lead to deterioration of quality. Resistance spot welding performed with the above-named programme featuring three force levels made it possible to extend the welding lobe by 82%. Joints made with the multiple-current-impulse programme, with a total welding time below 1.4 s, showed a change in failure mode (to full plug) and a 10% increase in weld tensile shear strength.
Keywords: 22MnB5, hot stamping, interfacial fracture, resistance spot welding, simulation, single lap joint, welding lobe
Procedia PDF Downloads 385
4945 Bonding Characteristics Between FRP and Concrete Substrates
Authors: Houssam A. Toutanji, Meng Han
Abstract:
This study focuses on the development of a fracture-mechanics-based model that predicts the debonding behavior of FRP-strengthened RC beams. A database of 351 concrete prisms bonded with FRP plates and tested in single and double shear was prepared, and the existing fracture-mechanics-based models were applied to it. Unfortunately, the properties of the adhesive layer, especially a soft adhesive layer, used on the specimens in the existing studies could not always be found. Thus, the new model was proposed on the basis of fifteen newly conducted pullout tests and twenty-four data points selected from two independent existing studies in which soft adhesive layers were applied and the adhesive properties were available.
Keywords: carbon fiber composite materials, interface response, fracture characteristics, maximum shear stress, ultimate transferable load
Procedia PDF Downloads 266
4944 Valorization of Clay Material in the Road Sector By Adding Granulated Recycled Plastic
Authors: Ouaaz Oum Essaad, Melbouci Bachir
Abstract:
The experimental study conducted has a dual purpose: to valorize clay material in the road domain and to improve the bearing capacity of capping layers by reinforcing them with plastic waste (in the form of aggregates). To do this, six mixtures of clay and sand with different percentages were studied: 100% clay, 95% clay + 5% sand, 90% clay + 10% sand, 85% clay + 15% sand, 80% clay + 20% sand, and 75% clay + 25% sand. Proctor compaction and unconfined compression tests were carried out on the mixtures (sand + clay + plastic waste). The results obtained show a clear evolution of the Proctor characteristics and of the compressive strength of the mixtures according to the different types and percentages of recycled plastic. The plasticity and consistency indices are important parameters that influence the toughness of plastic soils.
Keywords: valorization, recycling, soil mixture, mechanical tests
Procedia PDF Downloads 100
4943 Metabolic Predictive Model for PMV Control Based on Deep Learning
Authors: Eunji Choi, Borang Park, Youngjae Choi, Jinwoo Moon
Abstract:
In this study, a predictive model for estimating the metabolic rate (MET) of the human body was developed for optimal control of the indoor thermal environment. Images of human bodies performing indoor activities and body-joint coordinate values were collected as the data sets used in the predictive model. A deep learning algorithm was used in the initial model, and its numbers of hidden layers and hidden neurons were optimized. Lastly, the prediction performance was analyzed after the model was trained on the collected data. In conclusion, the feasibility of MET prediction was confirmed, and developing more varied data and refining the predictive model were proposed as directions for future study.
Keywords: deep learning, indoor quality, metabolism, predictive model
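A regressor mapping joint coordinates to a MET value can be sketched as a small neural network trained by gradient descent. The one-hidden-layer architecture, the synthetic joint data, and the MET-like target below are all assumptions for illustration; the study optimized layer and neuron counts on real image-derived data.

```python
import numpy as np

# Synthetic stand-in: 200 samples of 5 joints x (x, y) coordinates,
# mapped to a MET-like target in [0, 2].
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = 1.0 + np.tanh(X @ w_true)

# One hidden layer of 16 tanh units, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(10, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=16); b2 = 0.0

lr = 0.05
for _ in range(500):
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    # Backpropagate the mean squared error through both layers
    gW2 = H.T @ err / len(X); gb2 = err.mean()
    gH = np.outer(err, W2) * (1 - H ** 2)
    gW1 = X.T @ gH / len(X); gb1 = gH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Training should pull the mean squared error below the variance of the target (the error of always predicting the mean), which is the minimal sanity check before tuning the depth and width as the study does.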
Procedia PDF Downloads 255
4942 Monitoring Soil Moisture Dynamic in Root Zone System of Argania spinosa Using Electrical Resistivity Imaging
Authors: F. Ainlhout, S. Boutaleb, M. C. Diaz-Barradas, M. Zunzunegui
Abstract:
Argania spinosa is a tree endemic to southwest Morocco, occupying 828,000 ha distributed mainly between Mediterranean vegetation and the desert. This tree can grow in extremely arid regions of Morocco, where annual rainfall ranges between 100 and 300 mm and no other tree species can live. The area has been designated a UNESCO Biosphere Reserve since 1998. The argan tree is of great importance for human and animal feeding of the rural population as well as for oil production; it is considered a multi-usage tree. Admine forest, located in the suburbs of Agadir city, 5 km inland, was selected for this work. The aim of the study was to investigate the temporal variation of root-zone moisture dynamics in response to variations in climatic conditions and vegetation water uptake, using a geophysical technique called electrical resistivity imaging (ERI). This technique discriminates among resistive woody roots, dry soil, and moist soil. Time-dependent measurements (from April to July) of resistivity sections were performed along a 94 m surface transect with a fixed electrode spacing of 2 m. The transect included eight argan trees. The interactions between the trees and soil moisture were estimated by following the variations in tree water status accompanying the soil moisture deficit. For that purpose, we measured midday leaf water potential and relative water content on each sampling day for the eight trees. The first results showed that ERI can be used to accurately quantify the spatiotemporal distribution of root-zone moisture content and woody roots. The section obtained shows three different layers: a conductive middle layer (moist soil); on top, a moderately resistive layer corresponding to relatively dry soil (a calcareous formation with intercalations of marly strata), interspersed with very resistive zones corresponding to woody roots; and, below the conductive layer, another moderately resistive layer.
We note that, throughout the experiment, there was a continuous decrease in soil moisture in the different layers. With ERI, we can clearly estimate the depth of the woody roots, which does not exceed 4 meters. In previous work on the same species, analyzing the δ18O of xylem water and of the possible water sources, we argued that rain is the main water source in winter and spring but not in summer; contrary to the popular assumption, the trees do not exploit deep water from the aquifer but instead use soil water at a few meters' depth. The results of the present work confirm the idea that the roots of Argania spinosa do not grow very deep.
Keywords: Argania spinosa, electrical resistivity imaging, root system, soil moisture
Procedia PDF Downloads 326
4941 Pinch Technology for Minimization of Water Consumption at a Refinery
Authors: W. Mughees, M. Alahmad
Abstract:
Water is the most significant entity controlling local and global development. For the Gulf region, and especially Saudi Arabia with its limited potable water resources, the fresh water problem is highly significant. This research involves the design and analysis of pinch-based water/wastewater networks. Multiple water/wastewater networks were developed using pinch analysis with the direct recycle/material recycle method, and the property-integration technique was adopted to carry out the direct recycle method. A petroleum refinery was considered as the case study. In the direct recycle methodology, minimum wastewater discharge and minimum fresh water resource targets were estimated, and the water allocation in the networks was redesigned (retrofitted). Chemical Oxygen Demand (COD) and hardness were taken as the pollutants. Based on the single-contaminant approach for COD and for hardness, the fresh water demand was reduced from 340.0 m3/h to 149.0 m3/h (43.8%) and 208.0 m3/h (61.18%), respectively, while with the double-contaminant approach the reduction in fresh water demand was 132.0 m3/h (38.8%). The required analysis was also carried out using a mathematical programming technique; software such as LINGO was used for these studies, which verified the graphical results in a valuable and accurate way. Among the multiple water networks, one possible water allocation network was developed based on mass exchange.
Keywords: minimization, water pinch, water management, pollution prevention
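The direct-recycle targeting step can be sketched as a greedy single-contaminant source-sink allocation: sinks are served in increasing order of maximum inlet concentration, drawing first from the cleanest available sources, with the shortfall met by fresh water. The flows and concentrations below are illustrative, not the refinery figures quoted above, and the greedy rule is a simplification of formal pinch targeting.

```python
def direct_recycle_target(sources, sinks):
    """Fresh-water target (m3/h) for single-contaminant direct recycle.

    sources: list of (flow m3/h, concentration ppm) wastewater streams.
    sinks:   list of (flow m3/h, max inlet concentration ppm) demands.
    Fresh water is assumed to carry zero contaminant."""
    sources = sorted(sources, key=lambda s: s[1])   # cleanest first
    fresh = 0.0
    for sink_flow, c_max in sorted(sinks, key=lambda s: s[1]):
        need_flow, load_cap = sink_flow, sink_flow * c_max
        for i, (f, c) in enumerate(sources):
            if need_flow <= 0:
                break
            # Take as much as flow, demand, and the contaminant load cap allow
            take = min(f, need_flow, load_cap / c if c > 0 else need_flow)
            sources[i] = (f - take, c)
            need_flow -= take
            load_cap -= take * c
        fresh += need_flow          # remainder supplied by fresh water
    return fresh

sources = [(50.0, 100.0), (60.0, 800.0)]   # available wastewater streams
sinks = [(40.0, 50.0), (80.0, 400.0)]      # demands with inlet limits
fresh_target = direct_recycle_target(sources, sinks)
no_reuse = sum(f for f, _ in sinks)        # fresh demand with no recycle
```

Comparing `fresh_target` with `no_reuse` gives the fresh-water saving from recycling alone, which is the quantity the pinch diagrams and the LINGO model both target.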
Procedia PDF Downloads 476